Sample records for simple linear mixing

  1. Valid statistical approaches for analyzing Sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
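
The clustering problem described here is easy to reproduce in a small simulation (a hypothetical sketch, not the authors' analysis): when neurons are sampled within animals, treating every neuron as independent understates the standard error relative to a cluster-aware analysis that first averages within animal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical design: 2 groups, 20 animals per group, 10 neurons per animal.
# An animal-level random effect induces intra-class correlation.
n_animals, n_neurons = 20, 10
animal_sd, neuron_sd = 1.0, 0.5

def simulate_group(mean):
    animal_effects = rng.normal(0.0, animal_sd, n_animals)
    return np.array([mean + a + rng.normal(0.0, neuron_sd, n_neurons)
                     for a in animal_effects])   # shape (animals, neurons)

g1, g2 = simulate_group(10.0), simulate_group(10.0)  # null hypothesis is true

# "Simple linear model" view: all neurons per group treated as independent.
se_naive = np.sqrt(g1.var(ddof=1) / g1.size + g2.var(ddof=1) / g2.size)

# Cluster-aware view: average within animal; n is the number of animals.
m1, m2 = g1.mean(axis=1), g2.mean(axis=1)
se_cluster = np.sqrt(m1.var(ddof=1) / n_animals + m2.var(ddof=1) / n_animals)
# se_naive is biased downwards, mirroring the erroneous rejections noted above.
```

Averaging within cluster is the simplest valid fix; a full mixed effects model additionally recovers the separate animal- and neuron-level variance components.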

  2. Investigating the linearity assumption between lumber grade mix and yield using design of experiments (DOE)

    Treesearch

    Xiaoqiu Zuo; Urs Buehlmann; R. Edward Thomas

    2004-01-01

    Solving the least-cost lumber grade mix problem allows dimension mills to minimize the cost of dimension part production. This problem, due to its economic importance, has attracted much attention from researchers and industry in the past. Most solutions used linear programming models and assumed that a simple linear relationship existed between lumber grade mix and...

  3. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
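
The connection between correlation and simple linear regression that the chapter covers can be verified numerically: the least-squares slope equals the Pearson correlation times the ratio of standard deviations (a generic illustration, not an example taken from the chapter).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

r = np.corrcoef(x, y)[0, 1]              # Pearson correlation
slope, intercept = np.polyfit(x, y, 1)   # least-squares fit y = slope*x + intercept

# Identity for simple linear regression: slope = r * (sd_y / sd_x)
assert np.isclose(slope, r * y.std(ddof=1) / x.std(ddof=1))
```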

  4. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
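
One standard way to realize the "implicitly constrained" spline idea described above is the truncated power basis, in which continuity and smoothness at the knots are built into the basis functions themselves, so the coefficients can be fit without explicit side conditions (a generic sketch, not the authors' reparameterization).

```python
import numpy as np

def truncated_power_basis(x, knots, degree=2):
    """Design matrix for a fixed-knot regression spline.
    Columns: 1, x, ..., x^degree, then (x - k)_+^degree per knot.
    Each (x - k)_+^degree term has continuous derivatives up to
    degree-1 at its knot, so the continuity constraints hold implicitly."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.clip(x - k, 0.0, None)**degree for k in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 200)
y = np.sin(x)                                 # toy response
X = truncated_power_basis(x, knots=[3.0, 7.0])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # unconstrained least squares
fit = X @ beta
```

In a longitudinal analysis the same columns would enter the fixed-effects and/or random-effects design matrices of the mixed model.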

  5. High linearity current-commutating passive mixer employing a simple resistor bias

    NASA Astrophysics Data System (ADS)

    Rongjiang, Liu; Guiliang, Guo; Yuepeng, Yan

    2013-03-01

    A high linearity current-commutating passive mixer including the mixing cell and transimpedance amplifier (TIA) is introduced. It employs a resistor in the TIA to reduce the source voltage and the gate voltage of the mixing cell. Optimum linearity and maximum symmetric switching operation are obtained at the same time. The mixer is implemented in a 0.25 μm CMOS process. Tests show that it achieves an input third-order intercept point of 13.32 dBm, a conversion gain of 5.52 dB, and a single-sideband noise figure of 20 dB.

  6. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    PubMed

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).

  7. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification.

    PubMed

    Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B

    2015-06-01

    The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for the corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channel subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.

  8. A unified view of convective transports by stratocumulus clouds, shallow cumulus clouds, and deep convection

    NASA Technical Reports Server (NTRS)

    Randall, David A.

    1990-01-01

    A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.

  9. A simple, stable, and accurate linear tetrahedral finite element for transient, nearly, and fully incompressible solid dynamics: A dynamic variational multiscale approach

    DOE PAGES

    Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...

    2015-11-12

    Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.

  11. Analysis of lithology: Vegetation mixes in multispectral images

    NASA Technical Reports Server (NTRS)

    Adams, J. B.; Smith, M.; Adams, J. D.

    1982-01-01

    Discrimination and identification of lithologies from multispectral images is discussed. Rock/soil identification can be facilitated by removing the component of the signal in the images that is contributed by the vegetation. Mixing models were developed to predict the spectra of combinations of pure end members, and those models were refined using laboratory measurements of real mixtures. Models in use include a simple linear (checkerboard) mix, granular mixing, semi-transparent coatings, and combinations of the above. The use of interactive computer techniques that allow quick comparison of the spectrum of a pixel stack (in a multiband set) with laboratory spectra is discussed.
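
Under the simple linear (checkerboard) mixing model mentioned above, a pixel spectrum is a fraction-weighted sum of end-member spectra, so the mixture fractions can be recovered by least squares. The spectra below are made up for illustration; real analyses typically add sum-to-one and nonnegativity constraints.

```python
import numpy as np

# Hypothetical end-member spectra: rows are bands, columns are end members
E = np.array([[0.10, 0.60, 0.30],
              [0.20, 0.55, 0.35],
              [0.40, 0.50, 0.20],
              [0.55, 0.45, 0.15]])

true_fractions = np.array([0.2, 0.5, 0.3])   # areal fractions, sum to 1
pixel = E @ true_fractions                   # noiseless linear (checkerboard) mix

# Unmix: solve E f ≈ pixel for the fractions
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```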

  12. Light Scattering Study of Mixed Micelles Made from Elastin-Like Polypeptide Linear Chains and Trimers

    NASA Astrophysics Data System (ADS)

    Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril

    Temperature sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear chains of ELP (H40L), which lack the foldon domain. The mixed system is soluble at room temperature and, at a transition temperature (Tt), forms swollen micelles with the hydrophobic linear chains hidden inside. This system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. Mixed micelles made from equal parts of H20F and H40L show a constant apparent hydrodynamic radius of 40-45 nm over the concentration window from 25:25 to 60:60 μM (1:1 ratio). At a fixed 50 μM concentration of H20F, varying the H40L concentration from 5 to 80 μM resulted in a linear growth of the hydrodynamic radius from about 11 nm to about 62 nm, along with a 1000-fold increase in the VH signal. A possible simple model explaining the growth of the swollen micelles is considered. The VH signal can indicate elongation of the particle geometry or may result from anisotropic properties of the micelle core. SLS was used to measure the molecular weight and radius of gyration of the micelles to help identify the structure and morphology of the mixed micelles and the tangible cause of the VH signal.

  13. Genetic mixed linear models for twin survival data.

    PubMed

    Ha, Il Do; Lee, Youngjo; Pawitan, Yudi

    2007-07-01

    Twin studies are useful for separating the genetic or heritable component of a trait from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to the analysis of twin survival data. Due to the limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.

  14. Determination of stress intensity factors for interface cracks under mixed-mode loading

    NASA Technical Reports Server (NTRS)

    Naik, Rajiv A.; Crews, John H., Jr.

    1992-01-01

    A simple technique was developed using conventional finite element analysis to determine stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. This technique involves the calculation of crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating K1 and K2 for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading. The correlation between the exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.

  15. Highly accurate symplectic element based on two variational principles

    NASA Astrophysics Data System (ADS)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    To satisfy the stability requirements on numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle with the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the stress results of NCSE8 are nearly as accurate as those of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems because of its better stress accuracy.

  16. Prediction of free turbulent mixing using a turbulent kinetic energy method

    NASA Technical Reports Server (NTRS)

    Harsha, P. T.

    1973-01-01

    Free turbulent mixing of two-dimensional and axisymmetric one- and two-stream flows is analyzed by a relatively simple turbulent kinetic energy method. This method incorporates a linear relationship between the turbulent shear and the turbulent kinetic energy and an algebraic relationship for the length scale appearing in the turbulent kinetic energy equation. Good results are obtained for a wide variety of flows. The technique is shown to be especially applicable to flows with heat and mass transfer, for which nonunity Prandtl and Schmidt numbers may be assumed.

  17. Convection with a simple chemically reactive passive scalar

    NASA Astrophysics Data System (ADS)

    Herring, J. R.; Wyngaard, J. C.

    Convection between horizontal stress-free perfectly conducting plates is examined in the turbulent regime for air. Results are presented for an additional scalar undergoing simple linear decay. We discuss qualitative aspects of the flow in terms of spectral and three-dimensional contour maps of the velocity and scalar fields. The horizontal mean profiles of scalar gradients and fluxes agree rather well with simple mixing-length concepts. Further, the mean profiles for a range of the destruction-rate parameter are shown to be nearly completely characterized by the boundary fluxes. Finally, we shall use the present numerical data as a basis for exploring a generalization of eddy-diffusion concepts so as to properly incorporate non-local effects.

  18. Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2015-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.

  19. Robust outer synchronization between two nonlinear complex networks with parametric disturbances and mixed time-varying delays

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan; Wang, Xingyuan; Luo, Chao; Li, Junqiu; Wang, Chunpeng

    2018-03-01

    In this paper, we focus on the robust outer synchronization problem between two nonlinear complex networks with parametric disturbances and mixed time-varying delays. Firstly, a general complex network model is proposed. Besides the nonlinear couplings, the network model in this paper can possess parametric disturbances, internal time-varying delay, discrete time-varying delay and distributed time-varying delay. Then, according to the robust control strategy, linear matrix inequality and Lyapunov stability theory, several outer synchronization protocols are strictly derived. Simple linear matrix controllers are designed to drive the response network to synchronize with the drive network. Additionally, our results can be applied to complex networks without parametric disturbances. Finally, by utilizing the delayed Lorenz chaotic system as the dynamics of all nodes, simulation examples are given to demonstrate the effectiveness of our theoretical results.

  20. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
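
The periodic scheme described in the abstract can be sketched for a generic fixed-point problem x = g(x) (a minimal implementation of my own, not the authors' code): Pulay (DIIS) extrapolation over the stored residual history every k-th iteration, plain linear mixing on all other iterations.

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.5, m=4, k=3, tol=1e-10, maxit=200):
    """Solve the fixed-point problem x = g(x).
    Plain linear mixing on most steps; every k-th step performs a
    Pulay (DIIS) extrapolation over the last m residuals."""
    x = x0.astype(float).copy()
    X_hist, F_hist = [], []
    for it in range(1, maxit + 1):
        f = g(x) - x                      # residual of the fixed-point map
        if np.linalg.norm(f) < tol:
            return x, it
        X_hist.append(x.copy()); F_hist.append(f.copy())
        X_hist, F_hist = X_hist[-m:], F_hist[-m:]
        if it % k == 0 and len(F_hist) > 1:
            # DIIS: minimize ||sum_i c_i f_i|| subject to sum_i c_i = 1
            Fm = np.array(F_hist).T       # (dim, n) residual history
            n = Fm.shape[1]
            kkt = np.block([[Fm.T @ Fm, np.ones((n, 1))],
                            [np.ones((1, n)), np.zeros((1, 1))]])
            rhs = np.zeros(n + 1); rhs[-1] = 1.0
            c = np.linalg.lstsq(kkt, rhs, rcond=None)[0][:n]
            x = sum(ci * (xi + beta * fi)
                    for ci, xi, fi in zip(c, X_hist, F_hist))
        else:
            x = x + beta * f              # linear mixing step
    raise RuntimeError("fixed-point iteration did not converge")
```

For a contractive map such as g(x) = 0.5·cos(x) + d the iteration converges quickly, with the periodic DIIS steps typically reducing the iteration count relative to linear mixing alone.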

  1. Design optimization of single mixed refrigerant LNG process using a hybrid modified coordinate descent algorithm

    NASA Astrophysics Data System (ADS)

    Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong

    2018-01-01

    Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibilities, which deteriorate the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed for the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm provided an improved result compared to existing methodologies for finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be applied to any mixed-refrigerant-based liquefaction process in the natural gas industry.

  2. A Partially-Stirred Batch Reactor Model for Under-Ventilated Fire Dynamics

    NASA Astrophysics Data System (ADS)

    McDermott, Randall; Weinschenk, Craig

    2013-11-01

    A simple discrete quadrature method is developed for closure of the mean chemical source term in large-eddy simulations (LES) and implemented in the publicly available fire model, Fire Dynamics Simulator (FDS). The method is cast as a partially-stirred batch reactor model for each computational cell. The model has three distinct components: (1) a subgrid mixing environment, (2) a mixing model, and (3) a set of chemical rate laws. The subgrid probability density function (PDF) is described by a linear combination of Dirac delta functions with quadrature weights set to satisfy simple integral constraints for the computational cell. It is shown that under certain limiting assumptions, the present method reduces to the eddy dissipation concept (EDC). The model is used to predict carbon monoxide concentrations in direct numerical simulation (DNS) of a methane slot burner and in LES of an under-ventilated compartment fire.

  3. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

    Background: The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results: Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which targeted regions vary in copy number in individuals suffering from disorders such as Prader-Willi syndrome, DiGeorge syndrome, or autism, and it showed the best performance. Conclusion: Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific for each individual, incorporating experimental variability, and result in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760

  4. Ultrafast Single-Shot Optical Oscilloscope based on Time-to-Space Conversion due to Temporal and Spatial Walk-Off Effects in Nonlinear Mixing Crystal

    NASA Astrophysics Data System (ADS)

    Takagi, Yoshihiro; Yamada, Yoshifumi; Ishikawa, Kiyoshi; Shimizu, Seiji; Sakabe, Shuji

    2005-09-01

    A simple method for single-shot sub-picosecond optical pulse diagnostics has been demonstrated by imaging the time evolution of the optical mixing onto the beam cross section of the sum-frequency wave when the interrogating pulse passes over the tested pulse in the mixing crystal as a result of the combined effect of group-velocity difference and walk-off beam propagation. A high linearity of the time-to-space projection is deduced from the process solely dependent upon the spatial uniformity of the refractive indices. A snap profile of the accidental coincidence between asynchronous pulses from separate mode-locked lasers has been detected, which demonstrates the single-shot ability.

  5. Finite-time mixed outer synchronization of complex networks with coupling time-varying delay.

    PubMed

    He, Ping; Ma, Shu-Hua; Fan, Tao

    2012-12-01

    This article is concerned with the problem of finite-time mixed outer synchronization (FMOS) of complex networks with coupling time-varying delay. FMOS is a recently developed generalized synchronization concept, in which different state variables of the corresponding nodes can evolve into finite-time complete synchronization, finite-time anti-synchronization, and even finite-time amplitude death simultaneously for an appropriate choice of the controller gain matrix. Some novel stability criteria for the synchronization between drive and response complex networks with coupling time-varying delay are derived using the Lyapunov stability theory and linear matrix inequalities. A simple linear state feedback synchronization controller is designed as a result. Numerical simulations for two coupled networks of modified Chua's circuits are then provided to demonstrate the effectiveness and feasibility of the proposed complex network control and synchronization schemes and to compare their accuracy with that of previous schemes.

  6. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  8. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will... addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or... based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either

  9. Zee-Babu type model with U(1)Lμ-Lτ gauge symmetry

    NASA Astrophysics Data System (ADS)

    Nomura, Takaaki; Okada, Hiroshi

    2018-05-01

    We extend the Zee-Babu model, introducing local U(1)Lμ-Lτ symmetry with several singly charged bosons. We find a predictive neutrino mass texture in a simple hypothesis in which mixings among singly charged bosons are negligible. Also, lepton-flavor violations are less constrained compared with the original model. Then, we explore the testability of the model, focusing on doubly charged boson physics at the LHC and the International Linear Collider.

  10. A random distribution reacting mixing layer model

    NASA Technical Reports Server (NTRS)

    Jones, Richard A.; Marek, C. John; Myrabo, Leik N.; Nagamatsu, Henry T.

    1994-01-01

    A methodology for simulation of molecular mixing, and the resulting velocity and temperature fields, has been developed. The ideas are applied to the flow conditions present in the NASA Lewis Research Center Planar Reacting Shear Layer (PRSL) facility, and the results are compared to experimental data. A Gaussian transverse turbulent velocity distribution is used in conjunction with a linearly increasing time scale to describe the mixing of different regions of the flow. Equilibrium reaction calculations are then performed on the mix to arrive at a new species composition and temperature. Velocities are determined through summation of momentum contributions. The analysis indicates a combustion efficiency of the order of 80 percent for the reacting mixing layer, and a turbulent Schmidt number of 2/3. The success of the model is attributed to the simulation of large-scale transport of fluid. The favorable comparison shows that a relatively quick and simple PC calculation is capable of simulating the basic flow structure in the reacting and nonreacting shear layer present in the facility given basic assumptions about turbulence properties.

  11. On the repeated measures designs and sample sizes for randomized controlled trials.

    PubMed

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Diagnostic tools for mixing models of stream water chemistry

    USGS Publications Warehouse

    Hooper, Richard P.

    2003-01-01

    Mixing models provide a useful null hypothesis against which to evaluate processes controlling stream water chemical data. Because conservative mixing of end‐members with constant concentration is a linear process, a number of simple mathematical and multivariate statistical methods can be applied to this problem. Although mixing models have been most typically used in the context of mixing soil and groundwater end‐members, an extension of the mathematics of mixing models is presented that assesses the “fit” of a multivariate data set to a lower dimensional mixing subspace without the need for explicitly identified end‐members. Diagnostic tools are developed to determine the approximate rank of the data set and to assess lack of fit of the data. This permits identification of processes that violate the assumptions of the mixing model and can suggest the dominant processes controlling stream water chemical variation. These same diagnostic tools can be used to assess the fit of the chemistry of one site into the mixing subspace of a different site, thereby permitting an assessment of the consistency of controlling end‐members across sites. This technique is applied to a number of sites at the Panola Mountain Research Watershed located near Atlanta, Georgia.
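    Because conservative mixing is linear, the "fit to a lower-dimensional mixing subspace" diagnostic can be illustrated with an SVD on synthetic data. The end-member compositions and noise level below are invented; the residual RMS after projecting onto a rank-k subspace stands in for the paper's lack-of-fit diagnostics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stream chemistry: 200 samples of 5 solutes generated by conservative
# mixing of 3 end-members (rank-2 structure after centering) plus analytical noise
end_members = rng.uniform(10, 100, size=(3, 5))
frac = rng.dirichlet(np.ones(3), size=200)       # mixing fractions sum to 1
X = frac @ end_members + rng.normal(0, 0.5, size=(200, 5))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

def rank_k_residual_rms(k):
    # Project onto the k-dimensional mixing subspace; residual RMS diagnoses lack of fit
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]
    return np.sqrt(np.mean((Xc - Xk) ** 2))

rms = [rank_k_residual_rms(k) for k in range(1, 5)]
```

    The residual RMS drops sharply once k reaches the true subspace dimension (here 2, for three end-members) and then flattens at the noise floor, which is how the approximate rank of the data set is read off.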

  13. A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test

    NASA Technical Reports Server (NTRS)

    Reeder, James R.

    2002-01-01

    The mixed-mode bending (MMB) test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data that are significantly in error can be created under the standard. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.

  14. Simplified, inverse, ejector design tool

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1993-01-01

    A simple lumped parameter based inverse design tool has been developed which provides flow path geometry and entrainment estimates subject to operational, acoustic, and design constraints. These constraints are manifested through specification of primary mass flow rate or ejector thrust, fully-mixed exit velocity, and static pressure matching. Fundamentally, integral forms of the conservation equations coupled with the specified design constraints are combined to yield an easily invertible linear system in terms of the flow path cross-sectional areas. Entrainment is computed by back substitution. Initial comparison with experimental and analogous one-dimensional methods show good agreement. Thus, this simple inverse design code provides an analytically based, preliminary design tool with direct application to High Speed Civil Transport (HSCT) design studies.

  15. Photon-Z mixing in the Weinberg-Salam model: Effective charges and the a = -3 gauge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baulieu, L.; Coquereaux, R.

    1982-04-15

    We study some properties of the Weinberg-Salam model connected with the photon-Z mixing. We solve the linear Dyson-Schwinger equations between full and 1PI boson propagators. The task is made easier by the two-point function Ward identities that we derive to all orders and in any gauge. Some aspects of the renormalization of the model are also discussed. We display the exact mass-dependent one-loop two-point functions involving the photon and Z field in any linear ξ-gauge. The special gauge a = ξ^-1 = -3 is shown to play a peculiar role. In this gauge, the Z field is multiplicatively renormalizable (at the one-loop level), and one can construct both electric and weak effective charges of the theory from the photon and Z propagators, with a very simple expression similar to that of the QED Petermann, Stueckelberg, Gell-Mann and Low charge.

  16. Mixing and evaporation processes in an inverse estuary inferred from δ2H and δ18O

    NASA Astrophysics Data System (ADS)

    Corlis, Nicholas J.; Herbert Veeh, H.; Dighton, John C.; Herczeg, Andrew L.

    2003-05-01

    We have measured δ2H and δ18O in Spencer Gulf, South Australia, an inverse estuary with a salinity gradient from 36‰ near its entrance to about 45‰ at its head. We show that a simple evaporation model of seawater under ambient conditions, aided by its long residence time in Spencer Gulf, can account for the major features of the non-linear distribution pattern of δ2H with respect to salinity, at least in the restricted part of the gulf. In the more exposed part of the gulf, the δ/S pattern appears to be governed primarily by mixing processes between inflowing shelf water and outflowing high salinity gulf water. These data provide direct support for the oceanographic model of Spencer Gulf previously proposed by other workers. Although the observed δ/S relationship here is non-linear and hence in notable contrast to the linear δ/S relationship in the Red Sea, the slopes of δ2H vs. δ18O are comparable, indicating that the isotopic enrichments in both marginal seas are governed by similar climatic conditions with evaporation exceeding precipitation.

  17. [Cost variation in care groups?]

    PubMed

    Mohnen, S M; Molema, C C M; Steenbeek, W; van den Berg, M J; de Bruin, S R; Baan, C A; Struijs, J N

    2017-01-01

    Is the simple mean of the costs per diabetes patient a suitable tool with which to compare care groups? Do the total costs of care per diabetes patient really give the best insight into care group performance? Cross-sectional, multi-level study. The 2009 insurance claims of 104,544 diabetes patients managed by care groups in the Netherlands were analysed. The data were obtained from Vektis care information centre. For each care group we determined the mean costs per patient of all the curative care and diabetes-specific hospital care using the simple mean method, then repeated it using the 'generalized linear mixed model'. We also calculated for which proportion the differences found could be attributed to the care groups themselves. The mean costs of the total curative care per patient were €3,092 - €6,546; there were no significant differences between care groups. The mixed model method resulted in less variation (€2,884 - €3,511), and there were a few significant differences. We found a similar result for diabetes-specific hospital care and the ranking position of the care groups proved to be dependent on the method used. The care group effect was limited, although it was greater in the diabetes-specific hospital costs than in the total costs of curative care (6.7% vs. 0.4%). The method used to benchmark care groups carries considerable weight. Simply stated, determining the mean costs of care (still often done) leads to an overestimation of the differences between care groups. The generalized linear mixed model is more accurate and yields better comparisons. However, the fact remains that 'total costs of care' is a faulty indicator since care groups have little impact on them. A more informative indicator is 'costs of diabetes-specific hospital care' as these costs are more influenced by care groups.

  18. Refractometry for quality control of anesthetic drug mixtures.

    PubMed

    Stabenow, Jennifer M; Maske, Mindy L; Vogler, George A

    2006-07-01

    Injectable anesthetic drugs used in rodents are often mixed and further diluted to increase the convenience and accuracy of dosing. We evaluated clinical refractometry as a simple and rapid method of quality control and mixing error detection of rodent anesthetic or analgesic mixtures. Dilutions of ketamine, xylazine, acepromazine, and buprenorphine were prepared with reagent-grade water to produce at least 4 concentration levels. The refraction of each concentration then was measured with a clinical refractometer and plotted against the percentage of stock concentration. The resulting graphs were linear and could be used to determine the concentration of single-drug dilutions or to predict the refraction of drug mixtures. We conclude that refractometry can be used to assess the concentration of dilutions of single drugs and can verify the mixing accuracy of drug combinations when the components of the mixture are known and fall within the detection range of the instrument.
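    A minimal sketch of the refractometric calibration idea: fit a line of reading against percent of stock concentration, then invert it to verify a prepared dilution. The refraction values below are illustrative placeholders, not measured data:

```python
import numpy as np

# Hypothetical calibration: refractometer readings at four dilutions of a stock drug
pct_stock = np.array([25.0, 50.0, 75.0, 100.0])        # % of stock concentration
reading   = np.array([1.3345, 1.3360, 1.3375, 1.3390]) # illustrative refraction values

slope, intercept = np.polyfit(pct_stock, reading, 1)

def predicted_reading(pct):
    # Refraction predicted by the linear calibration curve
    return intercept + slope * pct

# Verify a prepared dilution: invert the line to estimate its concentration
measured = 1.3368
pct_est = (measured - intercept) / slope
```

    For a mixture of known components, the expected refraction is predicted from the component calibration curves, and a measured value falling off that prediction flags a mixing error.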

  19. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    PubMed

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models, and account for the intrinsic complexity of the data. We start with standard cubic splines regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compared cubic regression splines vis-à-vis linear piecewise splines, and with varying number of knots and positions. Statistical code is provided to ensure reproducibility and improve dissemination of methods. Models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 (p < 0.001) when using a linear mixed-effect models with random slopes and a first order continuous autoregressive error term. There was substantial heterogeneity in both the intercept (p < 0.001) and slopes (p < 0.001) of the individual growth trajectories. We also identified important serial correlation within the structure of the data (ρ = 0.66; 95 % CI 0.64 to 0.68; p < 0.001), which we modeled with a first order continuous autoregressive error term as evidenced by the variogram of the residuals and by a lack of association among residuals. 
The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19,598, respectively). While the regression parameters are more complex to interpret in the former, we argue that inference for any problem depends more on the estimated curve or differences in curves rather than the coefficients. Moreover, use of cubic regression splines provides biological meaningful growth velocity and acceleration curves despite increased complexity in coefficient interpretation. Through this stepwise approach, we provide a set of tools to model longitudinal childhood data for non-statisticians using linear mixed-effect models.
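    The fixed-effects spline part of such a model can be sketched with a truncated power basis. This toy NumPy fit omits the subject-specific random effects and the autoregressive error term (which would need a dedicated mixed-model library such as R's nlme or statsmodels), and the growth curve, noise level, and knot placement are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "growth" data: outcome as a smooth nonlinear function of age plus noise
age = np.sort(rng.uniform(0, 4, 300))
height = 50 + 10 * np.sin(age) + rng.normal(0, 1.0, age.size)

def cubic_spline_basis(t, knots):
    # Truncated power basis for a cubic regression spline: 1, t, t^2, t^3, (t - k)_+^3
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0, None) ** 3 for k in knots]
    return np.column_stack(cols)

knots = [1.0, 2.0, 3.0]
B = cubic_spline_basis(age, knots)
coef, *_ = np.linalg.lstsq(B, height, rcond=None)
fitted = B @ coef
resid_var = np.var(height - fitted)   # should approach the noise variance
```

    In the full mixed-effects version, the same basis enters the fixed-effects design matrix while random intercepts and slopes absorb the within-child correlation.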

  20. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
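    The Hadamard S-matrix multiplexing that the paper benchmarks against can be sketched directly. The order-7 cyclic S-matrix construction and its closed-form inverse are standard (Harwit-Sloane); the source intensities are invented:

```python
import numpy as np

# Order-7 cyclic S-matrix: rows are cyclic shifts of a quadratic-residue sequence
first_row = np.array([1, 1, 1, 0, 1, 0, 0])
S = np.array([np.roll(first_row, i) for i in range(7)])

x_true = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])  # per-source intensities

# Multiplexed measurements: each row selects the subset of sources switched on
y = S @ x_true

# Decode: for an S-matrix of order n, inv(S) = (2 / (n + 1)) * (2 * S.T - J)
n = 7
S_inv = (2.0 / (n + 1)) * (2 * S.T - 1)
x_dec = S_inv @ y
```

    The genetic-algorithm search described in the abstract explores alternative binary matrices whose decoded-noise criterion beats this S-matrix at certain proportional-to-constant noise ratios.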

  1. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    PubMed

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
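    The stated equivalence between the BLUP of random SNP coefficients and a ridge estimator can be checked numerically in a stripped-down setting. The sketch below omits fixed effects and the family kinship structure, and the dimensions and variance components are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

n, p = 50, 8
Z = rng.normal(size=(n, p))          # design matrix for the random-effect coefficients
tau2, sigma2 = 0.5, 1.0              # variances of the random effects and of the errors
u = rng.normal(0, np.sqrt(tau2), p)
y = Z @ u + rng.normal(0, np.sqrt(sigma2), n)

# BLUP of u from the covariance (GLS) form: tau^2 Z' V^{-1} y with V = tau^2 Z Z' + sigma^2 I
V = tau2 * Z @ Z.T + sigma2 * np.eye(n)
u_blup = tau2 * Z.T @ np.linalg.solve(V, y)

# Ridge regression of y on Z with penalty lambda = sigma^2 / tau^2
lam = sigma2 / tau2
u_ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)
```

    The two estimates coincide exactly (a consequence of the Woodbury identity), which is why estimating the variance components also fixes the ridge penalty without cross-validation.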

  2. Numerical solution of a non-linear conservation law applicable to the interior dynamics of partially molten planets

    NASA Astrophysics Data System (ADS)

    Bower, Dan J.; Sanan, Patrick; Wolf, Aaron S.

    2018-01-01

    The energy balance of a partially molten rocky planet can be expressed as a non-linear diffusion equation using mixing length theory to quantify heat transport by both convection and mixing of the melt and solid phases. Crucially, in this formulation the effective or eddy diffusivity depends on the entropy gradient, ∂S/∂r, as well as entropy itself. First we present a simplified model with semi-analytical solutions that highlights the large dynamic range of ∂S/∂r (around 12 orders of magnitude) for physically relevant parameters. It also elucidates the thermal structure of a magma ocean during the earliest stage of crystal formation. This motivates the development of a simple yet stable numerical scheme able to capture the large dynamic range of ∂S/∂r and hence provide a flexible and robust method for time-integrating the energy equation. Using insight gained from the simplified model, we consider a full model, which includes energy fluxes associated with convection, mixing, gravitational separation, and conduction that all depend on the thermophysical properties of the melt and solid phases. This model is discretised and evolved by applying the finite volume method (FVM), allowing for extended precision calculations and using ∂S/∂r as the solution variable. The FVM is well-suited to this problem since it is naturally energy conserving, flexible, and intuitive to incorporate arbitrary non-linear fluxes that rely on lookup data. Special attention is given to the numerically challenging scenario in which crystals first form in the centre of a magma ocean. The computational framework we devise is immediately applicable to modelling high melt fraction phenomena in Earth and planetary science research. Furthermore, it provides a template for solving similar non-linear diffusion equations that arise in other science and engineering disciplines, particularly for non-linear functional forms of the diffusion coefficient.
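    A minimal conservative finite-volume update for a non-linear, gradient-dependent diffusivity can be sketched as follows. The diffusivity law, grid, and initial profile are invented and far simpler than the paper's entropy formulation, but the sketch shows the conservation property that makes the FVM attractive here:

```python
import numpy as np

def fvm_step(S, dr, dt, kappa):
    # One explicit finite-volume update for dS/dt = d/dr( kappa(dS/dr) * dS/dr )
    # with zero-flux boundaries; kappa may depend non-linearly on the gradient.
    grad = np.diff(S) / dr                        # gradients at interior cell faces
    flux = -kappa(grad) * grad                    # face fluxes
    flux = np.concatenate(([0.0], flux, [0.0]))   # zero-flux boundary faces
    return S - dt / dr * np.diff(flux)

# Gradient-dependent (eddy-like) diffusivity: stronger mixing for steeper gradients
kappa = lambda g: 0.1 + np.abs(g)

r = np.linspace(0, 1, 101)
dr = r[1] - r[0]
S = 1.0 + np.exp(-((r - 0.5) / 0.1) ** 2)   # initial entropy-like profile
total0 = S.sum() * dr
for _ in range(500):
    S = fvm_step(S, dr, dt=2e-6, kappa=kappa)
```

    Because the update is written purely in terms of face fluxes, the cell sums telescope and the integral of S is conserved to round-off; the explicit time step must respect the diffusive stability limit dt ≲ dr²/(2 κ_max).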

  3. A simple approach to quantitative analysis using three-dimensional spectra based on selected Zernike moments.

    PubMed

    Zhai, Hong Lin; Zhai, Yue Yuan; Li, Pei Zhen; Tian, Yue Li

    2013-01-21

    A very simple approach to quantitative analysis is proposed based on the technology of digital image processing using three-dimensional (3D) spectra obtained by high-performance liquid chromatography coupled with a diode array detector (HPLC-DAD). As the region-based shape features of a grayscale image, Zernike moments with inherent invariance properties were employed to establish the linear quantitative models. This approach was applied to the quantitative analysis of three compounds in mixed samples using 3D HPLC-DAD spectra, and three linear models were obtained, respectively. The correlation coefficients (R^2) for training and test sets were more than 0.999, and the statistical parameters and strict validation supported the reliability of established models. The analytical results suggest that the Zernike moments selected by stepwise regression can be used in the quantitative analysis of target compounds. Our study provides a new idea for quantitative analysis using 3D spectra, which can be extended to the analysis of other 3D spectra obtained by different methods or instruments.

  4. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    PubMed

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  5. Raman structural study of melt-mixed blends of isotactic polypropylene with polyethylene of various densities

    NASA Astrophysics Data System (ADS)

    Prokhorov, K. A.; Nikolaeva, G. Yu; Sagitova, E. A.; Pashinin, P. P.; Guseva, M. A.; Shklyaruk, B. F.; Gerasin, V. A.

    2018-04-01

    We report a Raman structural study of melt-mixed blends of isotactic polypropylene with two grades of polyethylene: linear high-density and branched low-density polyethylenes. Raman methods, which had been suggested for the analysis of neat polyethylene and isotactic polypropylene, were modified in this study for quantitative analysis of polyethylene/polypropylene blends. We revealed the dependence of the degree of crystallinity and conformational composition of macromolecules in the blends on relative content of the blend components and preparation conditions (quenching or annealing). We suggested a simple Raman method for evaluation of the relative content of the components in polyethylene/polypropylene blends. The degree of crystallinity of our samples, evaluated by Raman spectroscopy, is in good agreement with the results of analysis by differential scanning calorimetry.

  6. Integrated pillar scatterers for speeding up classification of cell holograms.

    PubMed

    Lugnan, Alessio; Dambre, Joni; Bienstman, Peter

    2017-11-27

    The computational power required to classify cell holograms is a major limit to the throughput of label-free cell sorting based on digital holographic microscopy. In this work, a simple integrated photonic stage comprising a collection of silica pillar scatterers is proposed as an effective nonlinear mixing interface between the light scattered by a cell and an image sensor. The light processing provided by the photonic stage allows for the use of a simple linear classifier implemented in the electric domain and applied on a limited number of pixels. A proof-of-concept of the presented machine learning technique, which is based on the extreme learning machine (ELM) paradigm, is provided by the classification results on samples generated by 2D FDTD simulations of cells in a microfluidic channel.

  7. 40 CFR 60.667 - Chemicals affected by subpart NNN.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... alcohols, ethoxylated, mixed Linear alcohols, ethoxylated, and sulfated, sodium salt, mixed Linear alcohols, sulfated, sodium salt, mixed Linear alkylbenzene 123-01-3 Magnesium acetate 142-72-3 Maleic anhydride 108...

  8. 40 CFR 60.667 - Chemicals affected by subpart NNN.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... alcohols, ethoxylated, mixed Linear alcohols, ethoxylated, and sulfated, sodium salt, mixed Linear alcohols, sulfated, sodium salt, mixed Linear alkylbenzene 123-01-3 Magnesium acetate 142-72-3 Maleic anhydride 108...

  9. Neural Population Coding of Multiple Stimuli

    PubMed Central

    Ma, Wei Ji

    2015-01-01

    In natural scenes, objects generally appear together with other objects. Yet, theoretical studies of neural population coding typically focus on the encoding of single objects in isolation. Experimental studies suggest that neural responses to multiple objects are well described by linear or nonlinear combinations of the responses to constituent objects, a phenomenon we call stimulus mixing. Here, we present a theoretical analysis of the consequences of common forms of stimulus mixing observed in cortical responses. We show that some of these mixing rules can severely compromise the brain's ability to decode the individual objects. This cost is usually greater than the cost incurred by even large reductions in the gain or large increases in neural variability, explaining why the benefits of attention can be understood primarily in terms of a stimulus selection, or demixing, mechanism rather than purely as a gain increase or noise reduction mechanism. The cost of stimulus mixing becomes even higher when the number of encoded objects increases, suggesting a novel mechanism that might contribute to set size effects observed in myriad psychophysical tasks. We further show that a specific form of neural correlation and heterogeneity in stimulus mixing among the neurons can partially alleviate the harmful effects of stimulus mixing. Finally, we derive simple conditions that must be satisfied for unharmful mixing of stimuli. PMID:25740513

  10. Solving a mixture of many random linear equations by tensor decomposition and alternating minimization.

    DOT National Transportation Integrated Search

    2016-09-01

    We consider the problem of solving mixed random linear equations with k components. This is the noiseless setting of mixed linear regression. The goal is to estimate multiple linear models from mixed samples in the case where the labels (which sample...
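    The alternating-minimization half of this approach can be sketched in a few lines for noiseless mixed linear equations. Two scalar components and the crude initialization below are invented for illustration; the paper's tensor-decomposition initialization, which supplies a provably good starting point, is omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

# Noiseless mixed linear equations: y_i = a_k * x_i for an unknown label k in {0, 1}
x = rng.uniform(1, 2, 200)
labels = rng.integers(0, 2, 200)
true_a = np.array([2.0, -3.0])
y = true_a[labels] * x

a = np.array([1.0, -1.0])   # crude initialization of the two slopes
for _ in range(10):
    # Assignment step: give each sample to the component that explains it best
    assign = np.argmin(np.abs(y[:, None] - x[:, None] * a[None, :]), axis=1)
    # Refit step: least-squares slope for each component on its assigned samples
    for k in (0, 1):
        m = assign == k
        if m.any():
            a[k] = (x[m] @ y[m]) / (x[m] @ x[m])
```

    With noiseless data and a separating initialization the assignments become exact after one pass, and the refit recovers the true slopes; the hard part in general, which motivates the tensor step, is getting close enough to start with.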

  11. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using peak-by-peak based reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm replacing a simple linear model with a linear mixed model to reconstruct close NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of a protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. A mixed-effects model approach for the statistical analysis of vocal fold viscoelastic shear properties.

    PubMed

    Xu, Chet C; Chan, Roger W; Sun, Han; Zhan, Xiaowei

    2017-11-01

    A mixed-effects model approach was introduced in this study for the statistical analysis of rheological data of vocal fold tissues, in order to account for the data correlation caused by multiple measurements of each tissue sample across the test frequency range. Such data correlation had often been overlooked in previous studies in the past decades. The viscoelastic shear properties of the vocal fold lamina propria of two commonly used laryngeal research animal species (i.e. rabbit, porcine) were measured by a linear, controlled-strain simple-shear rheometer. Along with published canine and human rheological data, the vocal fold viscoelastic shear moduli of these animal species were compared to those of human over a frequency range of 1-250 Hz using the mixed-effects models. Our results indicated that tissues of the rabbit, canine and porcine vocal fold lamina propria were significantly stiffer and more viscous than those of human. Mixed-effects models were shown to be able to more accurately analyze rheological data generated from repeated measurements. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A mixed-mode crack analysis of rectilinear anisotropic solids using conservation laws of elasticity

    NASA Technical Reports Server (NTRS)

    Wang, S. S.; Yau, J. F.; Corten, H. T.

    1980-01-01

    A very simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems in rectilinear anisotropic solids is presented. The analysis is formulated on the basis of conservation laws of anisotropic elasticity and of fundamental relationships in anisotropic fracture mechanics. The problem is reduced to a system of linear algebraic equations in mixed-mode stress intensity factors. One of the salient features of the present approach is that it can determine directly the mixed-mode stress intensity solutions from the conservation integrals evaluated along a path removed from the crack-tip region without the need of solving the corresponding complex near-field boundary value problem. Several examples with solutions available in the literature are solved to ensure the accuracy of the current analysis. This method is further demonstrated to be superior to other approaches in its numerical simplicity and computational efficiency. Solutions of more complicated and practical engineering problems dealing with the crack emanating from a circular hole in composites are presented also to illustrate the capacity of this method.

  14. Using nitrate dual isotopic composition (δ15N and δ18O) as a tool for exploring sources and cycling of nitrate in an estuarine system: Elkhorn Slough, California

    USGS Publications Warehouse

    Wankel, Scott D.; Kendall, Carol; Paytan, Adina

    2009-01-01

    Nitrate (NO3-) concentrations and dual isotopic composition (δ15N and δ18O) were measured during various seasons and tidal conditions in Elkhorn Slough to evaluate mixing of sources of NO3- within this California estuary. We found the isotopic composition of NO3- was influenced most heavily by mixing of two primary sources with unique isotopic signatures, a marine source (Monterey Bay) and a terrestrial agricultural runoff source (Old Salinas River). However, our attempt to use a simple two end-member mixing model to calculate the relative contribution of these two NO3- sources to the Slough was complicated by periods of nonconservative behavior and/or the presence of additional sources, particularly during the dry season when NO3- concentrations were low. Although multiple linear regression generally yielded good fits to the observed data, deviations from conservative mixing were still evident. After consideration of potential alternative sources, we concluded that deviations from two end-member mixing were most likely derived from interactions with marsh sediments in regions of the Slough where high rates of NO3- uptake and nitrification result in NO3- with low δ15N and high δ18O values. A simple steady state dual isotope model is used to illustrate the impact of cycling processes in an estuarine setting, which may play a primary role in controlling NO3- isotopic composition when and where cycling rates and water residence times are high. This work expands our understanding of nitrogen and oxygen isotopes as biogeochemical tools for investigating NO3- sources and cycling in estuaries, emphasizing the role that cycling processes may play in altering isotopic composition. Copyright 2009 by the American Geophysical Union.
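The two end-member mixing calculation described in this abstract can be sketched in a few lines: under conservative mixing of the isotope value alone (ignoring the concentration weighting a full model would also need), the marine fraction has a closed form. All δ15N values below are invented for illustration, not the study's measurements.

```python
# Hypothetical two end-member mixing sketch: under conservative mixing,
# delta_mix = f * delta_marine + (1 - f) * delta_terrestrial, so the
# marine fraction f can be solved for directly. Isotope values invented.
def end_member_fraction(delta_mix, delta_marine, delta_terrestrial):
    return (delta_mix - delta_terrestrial) / (delta_marine - delta_terrestrial)

f_marine = end_member_fraction(9.0, 6.0, 12.0)  # sample midway between sources
```

A sample measured midway between the two sources yields a fraction of 0.5; systematic deviations of observed samples from such predictions are what the authors attribute to nonconservative cycling in marsh sediments.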

  15. Geochemical modeling of magma mixing and magma reservoir volumes during early episodes of Kīlauea Volcano's Pu`u `Ō`ō eruption

    NASA Astrophysics Data System (ADS)

    Shamberger, Patrick J.; Garcia, Michael O.

    2007-02-01

    Geochemical modeling of magma mixing allows for evaluation of volumes of magma storage reservoirs and magma plumbing configurations. A new analytical expression is derived for a simple two-component box-mixing model describing the proportions of mixing components in erupted lavas as a function of time. Four versions of this model are applied to a mixing trend spanning episodes 3-31 of Kīlauea Volcano's Pu`u `Ō`ō eruption, each testing different constraints on magma reservoir input and output fluxes. Unknown parameters (e.g., magma reservoir influx rate, initial reservoir volume) are optimized for each model using a non-linear least squares technique to fit model trends to geochemical time-series data. The modeled mixing trend closely reproduces the observed compositional trend. The two models that match measured lava effusion rates have constant magma input and output fluxes and suggest a large pre-mixing magma reservoir (46±2 and 49±1 million m3), with little or no volume change over time. This volume is much larger than a previous estimate for the shallow, dike-shaped magma reservoir under the Pu`u `Ō`ō vent, which grew from ~3 to ~10-12 million m3. These volumetric differences are interpreted as indicating that mixing occurred first in a larger, deeper reservoir before the magma was injected into the overlying smaller reservoir.
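A minimal sketch of a two-component box-mixing model of the kind described here: assuming a well-mixed reservoir of fixed volume V with equal, constant input and output fluxes Q (one of the flux scenarios the abstract mentions), the fraction X of the new component follows X(t) = 1 - exp(-Qt/V). The flux and volume numbers below are invented placeholders, not the paper's fitted values.

```python
import math

# Two-component box-mixing sketch: well-mixed reservoir of fixed volume V,
# constant and equal input/output flux Q. The new component's fraction
# obeys dX/dt = (Q/V) * (1 - X), hence X(t) = 1 - exp(-Q*t/V).
def mixed_fraction(t_days, Q_m3_per_day, V_m3):
    return 1.0 - math.exp(-Q_m3_per_day * t_days / V_m3)

# Hypothetical numbers: 0.2 million m^3/day into a 46 million m^3 reservoir.
X_after_100_days = mixed_fraction(100.0, 0.2e6, 46e6)
```

Fitting a curve of this shape to the geochemically inferred mixing proportions over time is what lets the unknowns Q and V be estimated by non-linear least squares.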

  16. LADES: a software for constructing and analyzing longitudinal designs in biomedical research.

    PubMed

    Vázquez-Alcocer, Alan; Garzón-Cortes, Daniel Ladislao; Sánchez-Casas, Rosa María

    2014-01-01

    One of the most important steps in biomedical longitudinal studies is choosing a good experimental design that can provide high accuracy in the analysis of results with a minimum sample size. Several methods for constructing efficient longitudinal designs have been developed based on power analysis and the statistical model used for analyzing the final results. However, development of this technology is not available to practitioners through user-friendly software. In this paper we introduce LADES (Longitudinal Analysis and Design of Experiments Software) as an alternative and easy-to-use tool for conducting longitudinal analysis and constructing efficient longitudinal designs. LADES incorporates methods for creating cost-efficient longitudinal designs, unequal longitudinal designs, and simple longitudinal designs. In addition, LADES includes different methods for analyzing longitudinal data such as linear mixed models, generalized estimating equations, among others. A study of European eels is reanalyzed in order to show LADES capabilities. Three treatments contained in three aquariums with five eels each were analyzed. Data were collected from 0 up to the 12th week post treatment for all the eels (complete design). The response under evaluation is sperm volume. A linear mixed model was fitted to the results using LADES. The complete design had a power of 88.7% using 15 eels. With LADES we propose the use of an unequal design with only 14 eels and 89.5% efficiency. LADES was developed as a powerful and simple tool to promote the use of statistical methods for analyzing and creating longitudinal experiments in biomedical research.

  17. Development of a novel mixed hemimicelles dispersive micro solid phase extraction using 1-hexadecyl-3-methylimidazolium bromide coated magnetic graphene for the separation and preconcentration of fluoxetine in different matrices before its determination by fiber optic linear array spectrophotometry and mode-mismatched thermal lens spectroscopy.

    PubMed

    Kazemi, Elahe; Haji Shabani, Ali Mohammad; Dadfarnia, Shayessteh; Abbasi, Amir; Rashidian Vaziri, Mohammad Reza; Behjat, Abbas

    2016-01-28

    This study aims at developing a novel, sensitive, fast, simple and convenient method for separation and preconcentration of trace amounts of fluoxetine before its spectrophotometric determination. The method is based on combination of magnetic mixed hemimicelles solid phase extraction and dispersive micro solid phase extraction using 1-hexadecyl-3-methylimidazolium bromide coated magnetic graphene as a sorbent. The magnetic graphene was synthesized by a simple coprecipitation method and characterized by X-ray diffraction (XRD), Fourier transform infrared (FT-IR) spectroscopy and scanning electron microscopy (SEM). The retained analyte was eluted using a 100 μL mixture of methanol/acetic acid (9:1) and converted into fluoxetine-β-cyclodextrin inclusion complex. The analyte was then quantified by fiber optic linear array spectrophotometry as well as mode-mismatched thermal lens spectroscopy (TLS). The factors affecting the separation, preconcentration and determination of fluoxetine were investigated and optimized. With a 50 mL sample and under optimized conditions using the spectrophotometry technique, the method exhibited a linear dynamic range of 0.4-60.0 μg L(-1), a detection limit of 0.21 μg L(-1), an enrichment factor of 167, and a relative standard deviation of 2.1% and 3.8% (n = 6) at 60 μg L(-1) level of fluoxetine for intra- and inter-day analyses, respectively. However, with thermal lens spectrometry and a sample volume of 10 mL, the method exhibited a linear dynamic range of 0.05-300 μg L(-1), a detection limit of 0.016 μg L(-1) and a relative standard deviation of 3.8% and 5.6% (n = 6) at 60 μg L(-1) level of fluoxetine for intra- and inter-day analyses, respectively. The method was successfully applied to determine fluoxetine in pharmaceutical formulation, human urine and environmental water samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Quantitative genetic properties of four measures of deformity in yellowtail kingfish Seriola lalandi Valenciennes, 1833.

    PubMed

    Nguyen, N H; Whatmore, P; Miller, A; Knibb, W

    2016-02-01

    The main aim of this study was to estimate the heritability of four measures of deformity and their genetic associations with growth (body weight and length), carcass (fillet weight and yield) and flesh-quality (fillet fat content) traits in yellowtail kingfish Seriola lalandi. The observed major deformities, recorded on 480 individuals from 22 families at Clean Seas Tuna Ltd, included lower jaw deformity, nasal erosion, deformed operculum and skinny fish. They were recorded as binary traits (presence or absence) and were analysed separately by both threshold generalized models and standard animal mixed models. Consistency of the models was evaluated by calculating simple Pearson correlations of breeding values of full-sib families for jaw deformity. Genetic and phenotypic correlations among traits were estimated using a multitrait linear mixed model in ASReml. Both threshold and linear mixed model analyses showed that there is additive genetic variation in the four measures of deformity, with the estimates of heritability obtained from the former (threshold) models on the liability scale ranging from 0.14 to 0.66 (SE 0.32-0.56), and from the latter (linear animal and sire) models on the original (observed) scale, from 0.01 to 0.23 (SE 0.03-0.16). When the estimates on the underlying liability were transformed to the observed scale (0, 1), they were generally consistent between threshold and linear mixed models. Phenotypic correlations among deformity traits were weak (close to zero), and the genetic correlations among deformity traits were not significantly different from zero. Body weight and fillet weight showed significant positive genetic correlations with jaw deformity (0.75 and 0.95, respectively). The genetic correlation between body weight and operculum deformity was negative (-0.51, P < 0.05). The estimated genetic correlations of body and carcass traits with the other deformity measures were not significant due to their relatively high standard errors. 
Our results showed that there are prospects for genetic selection to reduce deformity in yellowtail kingfish, and that measures of deformity should be included in the recording scheme, breeding objectives and selection index of practical selective breeding programmes due to the antagonistic genetic correlations of deformed jaws with body and carcass performance. © 2015 John Wiley & Sons Ltd.
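The liability-to-observed-scale transformation mentioned above is commonly done with the Dempster-Lerner formula; a minimal sketch follows, assuming a standard normal liability with a single threshold. The heritability and incidence values used are invented for illustration, not the paper's estimates.

```python
import math
from statistics import NormalDist

# Dempster-Lerner transformation sketch: convert heritability on the
# underlying liability scale to the observed (0/1) scale for a binary
# trait with incidence p, assuming a standard normal liability.
def liability_to_observed(h2_liability, p):
    x = NormalDist().inv_cdf(1.0 - p)                      # liability threshold
    z = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)  # normal density at x
    return h2_liability * z * z / (p * (1.0 - p))

h2_obs = liability_to_observed(0.40, 0.15)  # hypothetical: h2=0.40, 15% incidence
```

For rare binary traits the observed-scale estimate is substantially smaller than the liability-scale one, which is consistent with the gap between the two ranges of heritability estimates reported in the abstract.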

  19. Stochastic Mixing Model with Power Law Decay of Variance

    NASA Technical Reports Server (NTRS)

    Fedotov, S.; Ihme, M.; Pitsch, H.

    2003-01-01

    Here we present a simple stochastic mixing model based on the law of large numbers (LLN). The reason why the LLN is involved in our formulation of the mixing problem is that the random conserved scalar c = c(t,x(t)) appears to behave as a sample mean. It converges to the mean value mu, while the variance sigma(sup 2)(sub c)(t) decays approximately as t(exp -1). Since the variance of the scalar decays faster than that of a sample mean (the decay exponent is typically greater than unity), we introduce some non-linear modifications into the corresponding pdf-equation. The main idea is to develop a robust model which is independent of restrictive assumptions about the shape of the pdf. The remainder of this paper is organized as follows. In Section 2 we derive the integral equation from a stochastic difference equation describing the evolution of the pdf of a passive scalar in time. The stochastic difference equation introduces an exchange rate gamma(sub n), which we model in a first step as a deterministic function. In a second step, we generalize gamma(sub n) as a stochastic variable taking fluctuations in the inhomogeneous environment into account. In Section 3 we solve the non-linear integral equation numerically and analyze the influence of the different parameters on the decay rate. The paper finishes with a conclusion.
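The sample-mean analogy at the heart of this abstract — the scalar variance decaying like that of a sample mean, roughly as t(exp -1) — is easy to check numerically: the variance of a mean of n independent draws scales as 1/n. This is only the analogy, not the authors' pdf-equation, and all parameters below are arbitrary.

```python
import random

# Numerical check of the sample-mean analogy: Var(mean of n draws) ~ sigma^2/n,
# i.e. the 1/n (t^-1-like) decay the abstract refers to. Parameters arbitrary.
def variance_of_mean(n, trials=2000, seed=1):
    rng = random.Random(seed)
    means = [sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return sum((m - mu) ** 2 for m in means) / trials

v10, v100 = variance_of_mean(10), variance_of_mean(100)  # roughly 0.1 and 0.01
```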

  20. Phase modulation in dipolar-coupled A2 spin systems: effect of maximum state mixing in 1H NMR in vivo

    NASA Astrophysics Data System (ADS)

    Schröder, Leif; Schmitz, Christian; Bachert, Peter

    2004-12-01

    Coupling constants of nuclear spin systems can be determined from phase modulation of multiplet resonances. Strongly coupled systems such as citrate in prostatic tissue exhibit a more complex modulation than AX connectivities, because of substantial mixing of quantum states. An extreme limit is the coupling of n isochronous spins (An system). It is observable only for directly connected spins like the methylene protons of creatine and phosphocreatine, which experience residual dipolar coupling in intact muscle tissue in vivo. We will demonstrate that phase modulation of this "pseudo-strong" system is quite simple compared to those of AB systems. Theory predicts that the spin-echo experiment yields conditions as in the case of weak interactions; in particular, the phase modulation depends linearly on the line splitting and the echo time.

  1. An overview of longitudinal data analysis methods for neurological research.

    PubMed

    Locascio, Joseph J; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
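Method (2) above — reducing each subject's repeated measures to a single summary number — can be sketched as a per-subject least-squares slope, later analyzed as one value per subject. The subject data below are invented for illustration.

```python
# Per-subject summary sketch: collapse each subject's longitudinal record
# to one least-squares slope, then analyze the slopes. Data are invented.
def ls_slope(times, values):
    n = len(times)
    mt, mv = sum(times) / n, sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

subjects = {
    "s1": ([0, 1, 2, 3], [10.0, 9.1, 8.0, 7.2]),
    "s2": ([0, 1, 2, 3], [12.0, 11.6, 10.9, 10.5]),
}
slopes = {sid: ls_slope(t, v) for sid, (t, v) in subjects.items()}
```

A mixed-effects model would instead treat subject-level intercepts and slopes as random effects, borrowing strength across subjects, which is why the authors advocate it for most longitudinal analyses.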

  2. Trajectories of eGFR decline over a four year period in an Indigenous Australian population at high risk of CKD-the eGFR follow up study.

    PubMed

    Barzi, Federica; Jones, Graham R D; Hughes, Jaquelyne T; Lawton, Paul D; Hoy, Wendy; O'Dea, Kerin; Jerums, George; MacIsaac, Richard J; Cass, Alan; Maple-Brown, Louise J

    2018-03-01

    Being able to estimate kidney decline accurately is particularly important in Indigenous Australians, a population at increased risk of developing chronic kidney disease and end stage kidney disease. The aim of this analysis was to explore the trend of decline in estimated glomerular filtration rate (eGFR) over a four year period using multiple local creatinine measures, compared with estimates derived using centrally-measured enzymatic creatinine and with estimates derived using only two local measures. The eGFR study comprised a cohort of over 600 Aboriginal Australian participants recruited from over twenty sites in urban, regional and remote Australia across five strata of health, diabetes and kidney function. Trajectories of eGFR were explored in 385 participants with at least three local creatinine records using graphical methods that compared the linear trends fitted using linear mixed models with non-linear trends fitted using fractional polynomial equations. Temporal changes of local creatinine were also characterized using group-based modelling. Analyses were stratified by eGFR (<60; 60-89; 90-119 and ≥120 ml/min/1.73m2) and albuminuria categories (<3 mg/mmol; 3-30 mg/mmol; >30 mg/mmol). Mean age of the participants was 48 years, 64% were female and the median follow-up was 3 years. Decline of eGFR was accurately estimated using simple linear regression models, and locally measured creatinine was as good as centrally measured creatinine at predicting kidney decline in people with an eGFR <60 and an eGFR 60-90 ml/min/1.73m2 with albuminuria. Analyses showed that one baseline and one follow-up locally measured creatinine may be sufficient to estimate short term (up to four years) kidney function decline. The greatest yearly decline was estimated in those with eGFR 60-90 and macro-albuminuria: -6.21 (-8.20, -4.23) ml/min/1.73m2. 
Short term estimates of kidney function decline can be reliably derived using an easy to implement and simple to interpret linear mixed effect model. Locally measured creatinine did not differ from centrally measured creatinine, and is thus an accurate, cost-efficient and timely means of monitoring kidney function progression. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  3. Estimation of the linear mixed integrated Ornstein–Uhlenbeck model

    PubMed Central

    Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate

    2017-01-01

    ABSTRACT The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and using simulations we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536

  4. Spatial Characteristics of Small Green Spaces' Mitigating Effects on Microscopic Urban Heat Islands

    NASA Astrophysics Data System (ADS)

    Park, J.; Lee, D. K.; Jeong, W.; Kim, J. H.; Huh, K. Y.

    2015-12-01

    The purpose of the study is to determine the disposition, types and sizes of small green spaces that reduce air temperature effectively in urban blocks. The research sites were six highly developed blocks in Seoul, Korea. Air temperature was measured with mobile loggers at screen level in clear daytime during summer, from August to September. The measurements were repeated more than three times a day over three days by walking around the experimental blocks and the control blocks at the same time. By analyzing spatial characteristics, the averaged air temperatures were classified into three space types (sunny spaces, building-shaded spaces and small green spaces) using the Kruskal-Wallis test, and the small green spaces in the six blocks were classified by their outward forms: polygonal or linear, and single or mixed vegetation. The polygonal and mixed types of small green spaces mitigated the averaged air temperature of the block to which they belonged, following a simple linear regression model with adjusted R2 = 0.90**. As the area and volume of these types increased, the air temperature reduction (ΔT; the air temperature difference between a sunny space and a green space in a block) also increased in a linear relationship. The experimental range of this research is 100m2-2,000m2 of area and 1,000m3-10,000m3 of volume of small green space. As a result, more than 300m2 and 2,300m3 of polygonal green space with mixed vegetation is required for a 1°C air temperature reduction in an urban block; 650m2 and 5,000m3 for 2°C; and about 2,000m2 and 10,000m3 for 4°C.

  5. Evolutionary dynamics of general group interactions in structured populations

    NASA Astrophysics Data System (ADS)

    Li, Aming; Broom, Mark; Du, Jinming; Wang, Long

    2016-02-01

    The evolution of populations is influenced by many factors, and the simple classical models have been developed in a number of important ways. Both population structure and multiplayer interactions have been shown to significantly affect the evolution of important properties, such as the level of cooperation or of aggressive behavior. Here we combine these two key factors and develop the evolutionary dynamics of general group interactions in structured populations represented by regular graphs. The traditional linear and threshold public goods games are adopted as models to address the dynamics. We show that for linear group interactions, population structure can favor the evolution of cooperation compared to the well-mixed case, and we see that the more neighbors there are, the harder it is for cooperators to persist in structured populations. We further show that threshold group interactions could lead to the emergence of cooperation even in well-mixed populations. Here population structure sometimes inhibits cooperation for the threshold public goods game, where depending on the benefit to cost ratio, the outcomes are bistability or a monomorphic population of defectors or cooperators. Our results suggest, counterintuitively, that structured populations are not always beneficial for the evolution of cooperation for nonlinear group interactions.

  6. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  7. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
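The method of least squares described above has a closed form in the simple (one-predictor) case; a minimal sketch follows, on a toy dataset generated from y = 2x + 1 (numbers invented for illustration).

```python
# Least-squares simple linear regression sketch: closed-form slope and
# intercept for a single predictor. The toy data satisfy y = 2x + 1 exactly.
def least_squares_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

b1, b0 = least_squares_fit([1, 2, 3, 4], [3, 5, 7, 9])  # recovers slope 2, intercept 1
```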

  8. Does linear separability really matter? Complex visual search is explained by simple search

    PubMed Central

    Vighneshvel, T.; Arun, S. P.

    2013-01-01

    Visual search in real life involves complex displays with a target among multiple types of distracters, but in the laboratory, it is often tested using simple displays with identical distracters. Can complex search be understood in terms of simple searches? This link may not be straightforward if complex search has emergent properties. One such property is linear separability, whereby search is hard when a target cannot be separated from its distracters using a single linear boundary. However, evidence in favor of linear separability is based on testing stimulus configurations in an external parametric space that need not be related to their true perceptual representation. We therefore set out to assess whether linear separability influences complex search at all. Our null hypothesis was that complex search performance depends only on classical factors such as target-distracter similarity and distracter homogeneity, which we measured using simple searches. Across three experiments involving a variety of artificial and natural objects, differences between linearly separable and nonseparable searches were explained using target-distracter similarity and distracter heterogeneity. Further, simple searches accurately predicted complex search regardless of linear separability (r = 0.91). Our results show that complex search is explained by simple search, refuting the widely held belief that linear separability influences visual search. PMID:24029822

  9. Force-induced desorption of 3-star polymers: a self-avoiding walk model

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Whittington, S. G.

    2018-05-01

    We consider a simple cubic lattice self-avoiding walk model of 3-star polymers adsorbed at a surface and then desorbed by pulling with an externally applied force. We determine rigorously the free energy of the model in terms of properties of a self-avoiding walk, and show that the phase diagram includes four phases, namely a ballistic phase where the extension normal to the surface is linear in the length, an adsorbed phase and a mixed phase, in addition to the free phase where the model is neither adsorbed nor ballistic. In the adsorbed phase all three branches or arms of the star are adsorbed at the surface. In the ballistic phase two arms of the star are pulled into a ballistic phase, while the remaining arm is in a free phase. In the mixed phase two arms in the star are adsorbed while the third arm is ballistic. The phase boundaries separating the ballistic and mixed phases, and the adsorbed and mixed phases, are both first order phase transitions. The presence of the mixed phase is interesting because it does not occur for pulled, adsorbed self-avoiding walks. In an atomic force microscopy experiment it would appear as an additional phase transition as a function of force.

  10. A simple model to predict the biodiesel blend density as simultaneous function of blend percent and temperature.

    PubMed

    Gaonkar, Narayan; Vaidya, R G

    2016-05-01

    A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values of the components of the biodiesel blend at any two different temperatures. We note that the density of a blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15%) obtained using the proposed model indicate its predictive capability. The predicted values showed good agreement with recently available experimental data.
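A sketch of the approach under its stated assumptions: each component's density is linear in temperature, calibrated from densities measured at just two temperatures, and the blend density then follows Kay's mixing rule (a volume-fraction-weighted sum). All density values below are invented for illustration, not the paper's data.

```python
# Kay's mixing rule sketch, assuming each component density is linear in
# temperature: rho_i(T) = rho1 + slope*(T - T1), calibrated from two points,
# and rho_blend = sum_i v_i * rho_i(T). All density values are invented.
def linear_density(T, T1, rho1, T2, rho2):
    return rho1 + (rho2 - rho1) / (T2 - T1) * (T - T1)

def blend_density(T, volume_fractions, calibrations):
    # calibrations: one (T1, rho1, T2, rho2) tuple per component
    return sum(v * linear_density(T, *cal)
               for v, cal in zip(volume_fractions, calibrations))

# Hypothetical B20 blend (densities in g/cm^3): 20% biodiesel, 80% diesel.
rho_b20 = blend_density(40.0, [0.2, 0.8],
                        [(15.0, 0.880, 60.0, 0.848),
                         (15.0, 0.835, 60.0, 0.803)])
```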

  11. An Overview of Longitudinal Data Analysis Methods for Neurological Research

    PubMed Central

    Locascio, Joseph J.; Atri, Alireza

    2011-01-01

    The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825

  12. Exploring compositional variations on the surface of Mars applying mixing modeling to a telescopic spectral image

    NASA Technical Reports Server (NTRS)

    Merenyi, E.; Miller, J. S.; Singer, R. B.

    1992-01-01

    The linear mixing model approach was successfully applied to data sets of various natures. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
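The linear mixing model the authors apply can be illustrated with a toy two-endmember unmixing: with a sum-to-one constraint, the abundance fraction has a closed-form least-squares solution. The three-band spectra below are invented examples, not Mars data, and the endmember names are hypothetical.

```python
# Toy linear mixing-model unmixing for two endmembers with sum-to-one:
# r = f*e1 + (1-f)*e2, so the f minimizing ||r - (f*e1 + (1-f)*e2)||^2 is
# f = <r - e2, e1 - e2> / <e1 - e2, e1 - e2>. Spectra are invented.
def unmix_two(r, e1, e2):
    d = [a - b for a, b in zip(e1, e2)]
    num = sum((ri - bi) * di for ri, bi, di in zip(r, e2, d))
    den = sum(di * di for di in d)
    return num / den

e_bright = [0.90, 0.80, 0.70]   # hypothetical "bright soil" endmember
e_dark = [0.10, 0.20, 0.30]     # hypothetical "dark rock" endmember
mixed = [0.9 * b + 0.1 * d for b, d in zip(e_bright, e_dark)]
f_bright = unmix_two(mixed, e_bright, e_dark)  # recovers 0.9
```

Applied per pixel of a spectral image, such fractions yield abundance maps of the endmember materials, which is the essence of the mixing-modeling analysis described above.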

  13. Application of mixed cloud point extraction for the analysis of six flavonoids in Apocynum venetum leaf samples by high performance liquid chromatography.

    PubMed

    Zhou, Jun; Sun, Jiang Bing; Xu, Xin Yu; Cheng, Zhao Hui; Zeng, Ping; Wang, Feng Qiao; Zhang, Qiong

    2015-03-25

    A simple, inexpensive and efficient method based on the mixed cloud point extraction (MCPE) combined with high performance liquid chromatography was developed for the simultaneous separation and determination of six flavonoids (rutin, hyperoside, quercetin-3-O-sophoroside, isoquercitrin, astragalin and quercetin) in Apocynum venetum leaf samples. The non-ionic surfactant Genapol X-080 and cetyl-trimethyl ammonium bromide (CTAB) was chosen as the mixed extracting solvent. Parameters that affect the MCPE processes, such as the content of Genapol X-080 and CTAB, pH, salt content, extraction temperature and time were investigated and optimized. Under the optimized conditions, the calibration curve for six flavonoids were all linear with the correlation coefficients greater than 0.9994. The intra-day and inter-day precision (RSD) were below 8.1% and the limits of detection (LOD) for the six flavonoids were 1.2-5.0 ng mL(-1) (S/N=3). The proposed method was successfully used to separate and determine the six flavonoids in A. venetum leaf samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. PDF approach for turbulent scalar field: Some recent developments

    NASA Technical Reports Server (NTRS)

    Gao, Feng

    1993-01-01

    The probability density function (PDF) method has been proven a very useful approach in turbulence research. It has been particularly effective in simulating turbulent reacting flows and in studying some detailed statistical properties generated by a turbulent field. There are, however, some important questions that have yet to be answered in PDF studies. Our efforts in the past year have been focused on two areas. First, a simple mixing model suitable for Monte Carlo simulations has been developed based on the mapping closure. Secondly, the mechanism of turbulent transport has been analyzed in order to understand the recently observed abnormal PDFs of turbulent temperature fields generated by linear heat sources.

  15. Estimation of Thalamocortical and Intracortical Network Models from Joint Thalamic Single-Electrode and Cortical Laminar-Electrode Recordings in the Rat Barrel System

    PubMed Central

    Blomquist, Patrick; Devor, Anna; Indahl, Ulf G.; Ulbert, Istvan; Einevoll, Gaute T.; Dale, Anders M.

    2009-01-01

    A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation function are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different. 
While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population. PMID:19325875

  16. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    NASA Astrophysics Data System (ADS)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limited spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels and ignore the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. First, the fraction value of each class is obtained by spectral unmixing. Second, linear subpixel features are detected based on the hyperspectral characteristics, and the remaining mixed pixels are detected by maximum linearization index analysis. The classes of linear subpixels are determined using a template matching method. Finally, the whole subpixel mapping result is iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated in experiments on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  17. Analysis system for characterisation of simple, low-cost microfluidic components

    NASA Astrophysics Data System (ADS)

    Smith, Suzanne; Naidoo, Thegaran; Nxumalo, Zandile; Land, Kevin; Davies, Emlyn; Fourie, Louis; Marais, Philip; Roux, Pieter

    2014-06-01

    There is an inherent trade-off between cost and operational integrity of microfluidic components, especially when intended for use in point-of-care devices. We present an analysis system developed to characterise microfluidic components for performing blood cell counting, enabling the balance between function and cost to be established quantitatively. Microfluidic components for sample and reagent introduction, mixing and dispensing of fluids were investigated. A simple inlet port plugging mechanism is used to introduce and dispense a sample of blood, while a reagent is released into the microfluidic system through compression and bursting of a blister pack. Mixing and dispensing of the sample and reagent are facilitated via air actuation. For these microfluidic components to be implemented successfully, a number of aspects need to be characterised for development of an integrated point-of-care device design. The functional components were measured using a microfluidic component analysis system established in-house. Experiments were carried out to determine: 1. the force and speed requirements for sample inlet port plugging and blister pack compression and release using two linear actuators and load cells for plugging the inlet port, compressing the blister pack, and subsequently measuring the resulting forces exerted, 2. the accuracy and repeatability of total volumes of sample and reagent dispensed, and 3. the degree of mixing and dispensing uniformity of the sample and reagent for cell counting analysis. A programmable syringe pump was used for air actuation to facilitate mixing and dispensing of the sample and reagent. Two high speed cameras formed part of the analysis system and allowed for visualisation of the fluidic operations within the microfluidic device. 
Additional quantitative measures such as microscopy were also used to assess mixing and dilution accuracy, as well as uniformity of fluid dispensing - all of which are important requirements towards the successful implementation of a blood cell counting system.

  18. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    ERIC Educational Resources Information Center

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data and compares the data analytic results from three regression…
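    The clustering problem that motivates HLMs/LMEs can be shown with a minimal simulation (my illustration, not from the paper): a one-way ANOVA (method-of-moments) estimate of the intraclass correlation for students nested in schools.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 30 schools (clusters), 20 students each, with a school-level
# random intercept: y_ij = u_j + e_ij.
n_groups, n_per = 30, 20
u = rng.normal(0.0, 1.0, n_groups)                        # between-school sd = 1
y = u[:, None] + rng.normal(0.0, 1.0, (n_groups, n_per))  # within-school sd = 1

# One-way ANOVA (method-of-moments) estimate of the intraclass correlation:
# ICC = var_between / (var_between + var_within).
group_means = y.mean(axis=1)
msb = n_per * group_means.var(ddof=1)                      # between-group mean square
msw = ((y - group_means[:, None]) ** 2).sum() / (n_groups * (n_per - 1))
var_between = (msb - msw) / n_per
icc = var_between / (var_between + msw)

print(round(icc, 2))  # true ICC for this data-generating process is 0.5
```

    A nonzero ICC is exactly what ordinary regression ignores: it treats all 600 observations as independent, understating standard errors.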

  19. A novel, simplified ex vivo method for measuring water exchange performance of heat and moisture exchangers for tracheostomy application.

    PubMed

    van den Boer, Cindy; Muller, Sara H; Vincent, Andrew D; Züchner, Klaus; van den Brekel, Michiel W M; Hilgers, Frans J M

    2013-09-01

    Breathing through a tracheostomy results in insufficient warming and humidification of inspired air. This loss of air-conditioning can be partially compensated for with the application of a heat and moisture exchanger (HME) over the tracheostomy. In vitro (International Organization for Standardization [ISO] standard 9360-2:2001) and in vivo measurements of the effects of an HME are complex and technically challenging. The aim of this study was to develop a simple method to measure ex vivo HME performance comparable with previous in vitro and in vivo results. HMEs were weighed at the end of inspiration and at the end of expiration at different breathing volumes. Four HMEs (Atos Medical, Hörby, Sweden) with known in vivo humidity and in vitro water loss values were tested. The associations between weight change, volume, and absolute humidity were determined using both linear and non-linear mixed effects models. The ranking of the 4 HMEs obtained by weighing correlated with previous intra-tracheal measurements (R² = 0.98) and with the ISO standard (R² = 0.77). Assessment of the weight change between end of inhalation and end of exhalation is a valid and simple method of measuring the water exchange performance of an HME.

  20. An R2 statistic for fixed effects in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.
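    As a numerical sketch of the one-to-one link between F and R² that the abstract describes (using the standard algebraic relation R² = qF/(qF + ν) for q numerator and ν denominator degrees of freedom; consult the paper for its exact definition):

```python
# Monotone one-to-one link between an F statistic (q numerator, nu
# denominator degrees of freedom) and an R^2-type measure.
def r2_from_f(f_stat: float, q: int, nu: float) -> float:
    return q * f_stat / (q * f_stat + nu)

def f_from_r2(r2: float, q: int, nu: float) -> float:
    return (r2 / (1.0 - r2)) * (nu / q)

# A huge F (tiny p-value) can still map to a small R^2 when nu is large,
# echoing the blood-pressure example in the abstract.
r2 = r2_from_f(30.0, 1, 5000.0)   # strong evidence, weak association
print(round(r2, 4))
```

    The round trip back through `f_from_r2` recovers the original F, which is what "1-1 function" buys: significance and association strength are distinct but interconvertible summaries.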

  1. Convex set and linear mixing model

    NASA Technical Reports Server (NTRS)

    Xu, P.; Greeley, R.

    1993-01-01

    A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
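    The convex-set view can be made concrete with a small sketch (hypothetical endmember spectra of my own; the weighted sum-to-one row is a common device, not necessarily the authors' method). A mixed pixel inside the convex hull of the endmembers is recovered as a convex combination:

```python
import numpy as np

# Hypothetical endmember spectra (5 bands x 3 endmembers); any pixel in the
# convex hull of these columns is a linear mixture with nonnegative
# fractions summing to 1.
E = np.array([[0.9, 0.1, 0.3],
              [0.8, 0.2, 0.4],
              [0.2, 0.9, 0.5],
              [0.1, 0.8, 0.6],
              [0.3, 0.3, 0.9]])
f_true = np.array([0.2, 0.5, 0.3])       # mixing fractions (sum to 1)
x = E @ f_true                           # observed mixed-pixel spectrum

# Sum-to-one constrained least squares via an appended, heavily weighted
# row of ones (nonnegativity is not enforced in this sketch).
w = 1e4
A = np.vstack([E, w * np.ones((1, 3))])
b = np.append(x, w)
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(f_hat, 3))
```

    With noise-free data and full-rank endmembers the fractions are recovered exactly; real pixels additionally require nonnegativity, which is where the extreme-point geometry of the convex set matters.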

  2. Numerical Study of Buoyancy and Different Diffusion Effects on the Structure and Dynamics of Triple Flames

    NASA Technical Reports Server (NTRS)

    Chen, Jyh-Yuan; Echekki, Tarek

    2001-01-01

    Numerical simulations of 2-D triple flames under gravity force have been implemented to identify the effects of gravity on triple flame structure and propagation properties and to understand the mechanisms of instabilities resulting from both heat release and buoyancy effects. A wide range of gravity conditions, heat release, and mixing widths for a scalar mixing layer are computed for downward-propagating (in the same direction as the gravity vector) and upward-propagating (in the opposite direction of the gravity vector) triple flames. Results of the numerical simulations show that gravity strongly affects the triple flame speed through its contribution to the overall flow field. A simple analytical model for the triple flame speed, which accounts for both buoyancy and heat release, is developed. Comparisons of the proposed model with the numerical results over a wide range of gravity, heat release, and mixing width conditions yield very good agreement. The analysis shows that under neutral diffusion, downward propagation reduces the triple flame speed, while upward propagation enhances it. For the former condition, a critical Froude number may be evaluated that corresponds to a vanishing triple flame speed. Downward-propagating triple flames with relatively strong gravity effects have exhibited instabilities. These instabilities are generated without any artificial forcing of the flow; instead, disturbances are initiated by minute round-off errors in the numerical simulations and subsequently amplified by instabilities. A linear stability analysis on mean profiles of stable triple flame configurations has been performed to identify the most amplified frequency in spatially developed flows. The eigenfunction equations obtained from the linearized disturbance equations are solved using the shooting method. The linear stability analysis yields reasonably good agreement with the observed frequencies of the unstable triple flames. The frequencies and amplitudes of disturbances increase with the magnitude of the gravity vector. Moreover, disturbances appear to be most amplified just downstream of the premixed branches. The effects of mixing width and differential diffusion are investigated and their roles in flame stability are studied.

  3. Mixed Beam Murine Harderian Gland Tumorigenesis: Predicted Dose-Effect Relationships if neither Synergism nor Antagonism Occurs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranart, Nopphon; Blakely, Eleanor A.; Cheng, Alden

    Complex mixed radiation fields exist in interplanetary space, and not much is known about their latent effects on space travelers. In silico synergy analysis default predictions are useful when planning relevant mixed-ion-beam experiments and interpreting their results. These predictions are based on individual dose-effect relationships (IDER) for each component of the mixed-ion beam, assuming no synergy or antagonism. For example, a default hypothesis of simple effect additivity has often been used throughout the study of biology. However, for more than a century pharmacologists interested in mixtures of therapeutic drugs have analyzed conceptual, mathematical and practical questions similar to those that arise when analyzing mixed radiation fields, and have shown that simple effect additivity often gives unreasonable predictions when the IDER are curvilinear. Various alternatives to simple effect additivity proposed in radiobiology, pharmacometrics, toxicology and other fields are also known to have important limitations. In this work, we analyze upcoming murine Harderian gland (HG) tumor prevalence mixed-beam experiments, using customized open-source software and published IDER from past single-ion experiments. The upcoming experiments will use acute irradiation and the mixed beam will include components of high atomic number and energy (HZE). We introduce a new alternative to simple effect additivity, "incremental effect additivity", which is more suitable for the HG analysis and perhaps for other end points. We use incremental effect additivity to calculate default predictions for mixture dose-effect relationships, including 95% confidence intervals. We have drawn three main conclusions from this work. 1. It is important to supplement mixed-beam experiments with single-ion experiments, with matching end point(s), shielding and dose timing. 2. For HG tumorigenesis due to a mixed beam, simple effect additivity and incremental effect additivity sometimes give default predictions that are numerically close. However, if nontargeted effects are important and the mixed beam includes a number of different HZE components, simple effect additivity becomes unusable and another method is needed, such as incremental effect additivity. 3. Eventually, synergy analysis default predictions of the effects of mixed radiation fields will be replaced by more mechanistic, biophysically based predictions. However, optimizing synergy analyses is an important first step. If mixed-beam experiments indicate little synergy or antagonism, plans by NASA for further experiments and possible missions beyond low earth orbit will be substantially simplified.
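    The contrast between the two additivity notions can be sketched mathematically (a hedged summary in my own notation; the paper's exact formulation may differ). With component IDER $E_j(d)$ and mixture dose fractions $r_j$:

```latex
% Simple effect additivity: sum the individual effects at the component doses.
E_{\mathrm{simple}}(d) = \sum_j E_j(r_j d)

% Incremental effect additivity: each component contributes its own
% incremental slope at the current mixture effect level I(d):
\frac{dI}{dd} = \sum_j r_j \, E_j'\!\left(E_j^{-1}\bigl(I(d)\bigr)\right),
\qquad I(0) = 0
```

    The second form stays well behaved for curvilinear IDER precisely because each component's slope is evaluated at the dose that would, for that component alone, produce the current joint effect.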

  4. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
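    The bias the authors correct can be reproduced in a few lines (an illustrative simulation, not the paper's method or data; all names are my own): selecting only the two tails of the trait distribution inflates the ordinary least-squares slope.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a quantitative trait: y = beta*x + e, true beta = 0.5.
n, beta = 100_000, 0.5
x = rng.normal(size=n)
y = beta * x + rng.normal(size=n)

# Two-tail extreme selection: genotype only the bottom and top 10% of y.
lo, hi = np.quantile(y, [0.10, 0.90])
sel = (y < lo) | (y > hi)

slope_full = np.polyfit(x, y, 1)[0]       # near the true 0.5
slope_sel = np.polyfit(x[sel], y[sel], 1)[0]  # inflated by the selection
print(round(slope_full, 2), round(slope_sel, 2))
```

    Because selection acts on y, the selected sample's covariance between x and y grows faster than the variance of x, so the naive slope overstates the genetic effect; that is the bias the proposed correction removes.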

  5. Practical Session: Simple Linear Regression

    NASA Astrophysics Data System (ADS)

    Clausel, M.; Grégoire, G.

    2014-12-01

    Two exercises are proposed to illustrate simple linear regression. The first one is based on Galton's famous data set on heredity. We use the lm R command and get coefficient estimates, the residual standard error, R², residuals… In the second example, devoted to data on the vapor tension of mercury, we fit a simple linear regression, predict values, and anticipate multiple linear regression. This practical session is an excerpt from practical exercises proposed by A. Dalalyan at ENPC (see Exercises 1 and 2 of http://certis.enpc.fr/~dalalyan/Download/TP_ENPC_4.pdf).
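    As a rough companion to the first exercise (hypothetical heights-style data, not Galton's actual set), the closed-form least-squares estimates that lm reports can be computed directly:

```python
import numpy as np

# Hypothetical parent/child heights (inches), not Galton's actual data.
rng = np.random.default_rng(42)
parent = rng.normal(68.0, 1.8, 200)
child = 24.0 + 0.65 * parent + rng.normal(0.0, 1.5, 200)

# Closed-form least-squares estimates for y = a + b*x:
# b = cov(x, y) / var(x),  a = mean(y) - b * mean(x).
b = np.cov(parent, child, ddof=1)[0, 1] / np.var(parent, ddof=1)
a = child.mean() - b * parent.mean()

# np.polyfit solves the same least-squares problem and agrees.
b_ref, a_ref = np.polyfit(parent, child, 1)
print(round(b, 3), round(a, 3))
```

    The covariance/variance form makes the regression-to-the-mean reading of Galton's analysis explicit: the slope is the correlation scaled by the ratio of standard deviations.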

  6. Negligible influence of spatial autocorrelation in the assessment of fire effects in a mixed conifer forest

    USGS Publications Warehouse

    van Mantgem, P.J.; Schwilk, D.W.

    2009-01-01

    Fire is an important feature of many forest ecosystems, although the quantification of its effects is compromised by the large scale at which fire occurs and its inherent unpredictability. A recurring problem is the use of subsamples collected within individual burns, potentially resulting in spatially autocorrelated data. Using subsamples from six different fires (and three unburned control areas) we show little evidence for strong spatial autocorrelation either before or after burning for eight measures of forest conditions (both fuels and vegetation). Additionally, including a term for spatially autocorrelated errors provided little improvement for simple linear models contrasting the effects of early versus late season burning. While the effects of spatial autocorrelation should always be examined, it may not always greatly influence assessments of fire effects. If high patch scale variability is common in Sierra Nevada mixed conifer forests, even following more than a century of fire exclusion, treatments designed to encourage further heterogeneity in forest conditions prior to the reintroduction of fire will likely be unnecessary.

  7. B-meson anomalies and Higgs physics in flavored U(1)' model

    NASA Astrophysics Data System (ADS)

    Bian, Ligong; Lee, Hyun Min; Park, Chan Beom

    2018-04-01

    We consider a simple extension of the Standard Model with flavor-dependent U(1)', that has been proposed to explain some of B-meson anomalies recently reported at LHCb. The U(1)' charge is chosen as a linear combination of anomaly-free B_3-L_3 and L_μ -L_τ . In this model, the flavor structure in the SM is restricted due to flavor-dependent U(1)' charges, in particular, quark mixings are induced by a small vacuum expectation value of the extra Higgs doublet. As a result, it is natural to get sizable flavor-violating Yukawa couplings of heavy Higgs bosons involving the bottom quark. In this article, we focus on the phenomenology of the Higgs sector of the model including extra Higgs doublet and singlet scalars. We impose various bounds on the extended Higgs sector from Higgs and electroweak precision data, B-meson mixings and decays as well as unitarity and stability bounds, then discuss the productions and decays of heavy Higgs bosons at the LHC.

  8. Unlocking Chain Exchange in Highly Amphiphilic Block Polymer Micellar Systems: Influence of Agitation.

    PubMed

    Murphy, Ryan P; Kelley, Elizabeth G; Rogers, Simon A; Sullivan, Millicent O; Epps, Thomas H

    2014-11-18

    Chain exchange between block polymer micelles in highly selective solvents, such as water, is well known to be arrested under quiescent conditions, yet this work demonstrates that simple agitation methods can induce rapid chain exchange in these solvents. Aqueous solutions containing either pure poly(butadiene-b-ethylene oxide) or pure poly(butadiene-b-ethylene oxide-d4) micelles were combined and then subjected to agitation by vortex mixing, concentric-cylinder Couette flow, or nitrogen gas sparging. Subsequently, the extent of chain exchange between micelles was quantified using small-angle neutron scattering. Rapid vortex mixing induced chain exchange within minutes, as evidenced by a monotonic decrease in scattered intensity, whereas Couette flow and sparging did not lead to measurable chain exchange over the examined time scale of hours. The linear kinetics with respect to agitation time suggested a surface-limited exchange process at the air-water interface. These findings demonstrate the strong influence of processing conditions on block polymer solution assemblies.

  9. Fisher information and asymptotic normality in system identification for quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin

    2011-06-15

    This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ = 0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.

  10. The Behavioral Economics of Choice and Interval Timing

    PubMed Central

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985

  11. Matrix Fatigue Cracking Mechanisms of Alpha(2) TMC for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Gayda, John

    1994-01-01

    The objective of this work was to understand matrix cracking mechanisms in a unidirectional α₂ TMC in possible hypersonic applications. A (0)₈ SCS-6/Ti-24Al-11Nb (at.%) TMC was first subjected to a variety of simple isothermal and nonisothermal fatigue cycles to evaluate the damage mechanisms under simple conditions. A modified ascent mission cycle test was then performed to evaluate the combined effects of loading modes. This cycle mixes mechanical cycling at 150 and 483 °C, sustained loads, and a slow thermal cycle to 815 °C. At the low cyclic stresses and strains more common in hypersonic applications, environment-assisted surface cracking limited fatigue resistance. This damage mechanism was most acute for out-of-phase nonisothermal cycles having extended cycle periods and for the ascent mission cycle. A simple linear fraction damage model was employed to help understand this damage mechanism. Time-dependent environmental damage was found to strongly influence out-of-phase and mission life, with mechanical cycling damage due to the combination of external loading and CTE mismatch stresses playing a smaller role. The mechanical cycling and sustained loads in the mission cycle also had a smaller role.

  12. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  13. Exploiting symmetries in the modeling and analysis of tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Andersen, Carl M.; Tanner, John A.

    1987-01-01

    A simple and efficient computational strategy for reducing both the size of a tire model and the cost of the analysis of tires in the presence of symmetry-breaking conditions (unsymmetry in the tire material, geometry, or loading) is presented. The strategy is based on approximating the unsymmetric response of the tire with a linear combination of symmetric and antisymmetric global approximation vectors (or modes). Details are presented for the three main elements of the computational strategy: use of special three-field mixed finite-element models, use of operator splitting, and substantial reduction in the number of degrees of freedom. The proposed computational strategy is applied to three quasi-symmetric tire problems: linear analysis of anisotropic tires through use of semianalytic finite elements, nonlinear analysis of anisotropic tires through use of two-dimensional shell finite elements, and nonlinear analysis of orthotropic tires subjected to unsymmetric loading. Three basic types of symmetry (and their combinations) exhibited by the tire response are identified.

  14. Estimation of evaporation from equilibrium diurnal boundary layer humidity

    NASA Astrophysics Data System (ADS)

    Salvucci, G.; Rigden, A. J.; Li, D.; Gentine, P.

    2017-12-01

    Simplified conceptual models of the convective boundary layer as a well-mixed profile of potential temperature (theta) and specific humidity (q) impinging on an initially stably stratified, linear potential temperature profile have a long history in atmospheric sciences. These one-dimensional representations of complex mixing are useful for gaining insight into land-atmosphere interactions and for prediction when state-of-the-art LES approaches are infeasible. As previously shown (e.g., by Betts), if one neglects the role of q in buoyancy, the framework yields a unique relation between mixed-layer theta, mixed-layer height (h), and cumulative sensible heat flux (SH) throughout the day. Similarly, assuming an initially linear q profile yields a simple relation between q, h, and cumulative latent heat flux (LH). The diurnal dynamics of theta and q are strongly dependent on SH and the initial lapse rates of theta (gamma_thet) and q (gamma_q). In the estimation method proposed here, we further constrain these relations with two more assumptions: 1) the specific humidity is the same at the start of the period of boundary layer growth and at its collapse; and 2) once the mixed layer reaches the LCL, further drying occurs proportionally to the Deardorff convective velocity scale (omega) multiplied by q. Assumption (1) is based on the idea that below the cloud layer there are no sinks of moisture within the mixed layer (neglecting lateral humidity divergence); thus the net mixing of dry air aloft with evaporation from the surface must balance. Inclusion of this simple model of moisture loss above the LCL in the bulk-CBL model allows definition of an equilibrium humidity condition at which the diurnal cycle of q repeats (i.e., additions of q from the surface balance entrainment of dry air from above). Surprisingly, this framework allows estimation of LH from q, theta, and estimated net radiation by solving for the value of the evaporative fraction (EF) for which the diurnal cycle of q repeats. Three parameters need specification: cloud area fraction, entrainment factor, and morning lapse rate. A single set of values for these parameters is adequate to estimate EF at over 70 tested Ameriflux sites to within about 20%, though improvements are gained using a single regression model for gamma_thet fitted to radiosonde data.

  15. Design of a new static micromixer having simple structure and excellent mixing performance.

    PubMed

    Kamio, Eiji; Ono, Tsutomu; Yoshizawa, Hidekazu

    2009-06-21

    A novel micromixer with simple construction and excellent mixing performance is developed. The micromixer is composed of two stainless steel tubes with different diameters: an outer tube and an inner tube that fits inside the outer tube. In this micromixer, one reactant fluid flows into the mixing zone from the inner tube and the other flows from the outer tube. The excellent mixing performance is confirmed by comparing the results of a Villermaux/Dushman reaction with those for other micromixers. The developed micromixer has a mixing cascade with multiple mixing mechanisms and an asymmetric structure to achieve effective mixing. Its excellent mixing performance suggests that serially combining multiple mixing phenomena yields efficient micromixing.

  16. Multilevel modeling and panel data analysis in educational research (Case study: National examination data senior high school in West Java)

    NASA Astrophysics Data System (ADS)

    Zulvia, Pepi; Kurnia, Anang; Soleh, Agus M.

    2017-03-01

    Individuals and environments form a hierarchical structure consisting of units grouped at different levels, with the lowest level nested in the highest level; hierarchical data structures are analyzed across these levels. This modeling is commonly called multilevel modeling. Multilevel modeling is widely used in educational research, for example on the average score of the National Examination (UN). In Indonesia, the UN for high school students is divided into natural science and social science tracks. The purpose of this research is to develop multilevel and panel data modeling of educational data using linear mixed models. The first step is data exploration and identification of relationships between the independent and dependent variables by checking correlation coefficients and variance inflation factors (VIF). Furthermore, we use a simple model in which the highest level of the hierarchy (level 2) is the regency/city and the lowest (level 1) is the school. The best model was determined by comparing goodness-of-fit and checking assumptions from residual plots and predictions for each model. We find that for both natural science and social science, the regression with random effects of regency/city and fixed effects of time, i.e., the multilevel model, explains the variability of the dependent variable (the average UN score) better than the linear mixed model.
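    The VIF screening step in the first stage can be sketched as follows (a minimal illustration on synthetic data, not the study's variables; VIF_j = 1/(1 - R_j²) from regressing predictor j on the remaining predictors):

```python
import numpy as np

def vif(X: np.ndarray) -> np.ndarray:
    """Variance inflation factor for each column of X: 1 / (1 - R_j^2),
    where R_j^2 comes from regressing column j on the other columns
    (with an intercept)."""
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ coef
        r2 = 1.0 - resid.var() / y.var()
        out[j] = 1.0 / (1.0 - r2)
    return out

rng = np.random.default_rng(7)
x1 = rng.normal(size=500)
x2 = x1 + 0.1 * rng.normal(size=500)   # nearly collinear with x1
x3 = rng.normal(size=500)
v = vif(np.column_stack([x1, x2, x3]))
print(np.round(v, 1))  # x1 and x2 show large VIFs; x3 stays near 1
```

    Large VIFs flag predictors whose coefficients a linear (mixed) model cannot estimate stably, which is why the screening precedes the model comparison.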

  17. Passive micromixers and organic electrochemical transistors for biosensor applications

    NASA Astrophysics Data System (ADS)

    Kanakamedala, Senaka Krishna

    Fluid handling at the microscale has greatly affected fields such as biomedical, pharmaceutical, and biochemical engineering and environmental monitoring, owing to reduced reagent consumption, portability, high throughput, lower hardware cost, and shorter analysis time compared to large devices. The challenges associated with mixing fluids at the microscale led us to design, simulate, fabricate, and characterize various micromixers on silicon and flexible polyester substrates. The mixing efficiency was evaluated by injecting the fluids through the two inlets and collecting the sample at the outlet. Images collected from the microscope were analyzed, and the absorbance of the colored product at the outlet was measured to quantify the mixing efficacy. A mixing efficiency of 96% was achieved using a flexible disposable micromixer. The potential for low-cost processing, and the ability to tune device response via chemical doping or synthesis, have opened the way to using organic semiconductor devices as transducers in chemical and biological sensor applications. A simple, inexpensive organic electrochemical transistor (OECT) based on the conducting polymer poly(3,4-ethylenedioxythiophene) poly(styrene sulfonate) (PEDOT:PSS) was fabricated using a novel one-step fabrication method. The developed transistor was used as a biosensor to detect glucose and glutamate. The glucose sensor showed a linear response for glucose levels ranging from 1 μM to 10 mM and a decent response at levels similar to those found in human saliva. The glutamate sensor was used to detect glutamate released from stimulated astrocytes and glioma (brain tumor) cells, and the results were compared with those from a fluorescence spectrophotometer. The developed sensors employ simple fabrication, operate at low potentials, utilize low enzyme concentrations, do not require enzyme immobilization, need only 5 μL each of enzyme and sample, and show a stable response over a wide pH range from 4 to 9.

  18. Simple taper: Taper equations for the field forester

    Treesearch

    David R. Larsen

    2017-01-01

    "Simple taper" is a set of linear equations based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...
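
A hedged sketch of the idea: two linear taper rates, each derived from a diameter difference between two stem measurement points. The heights, diameters, and join point below are invented for illustration, not Larsen's published parameters:

```python
def predict_diameter(dbh, r_lower, r_upper, h, h_bh=4.5, h_join=17.3):
    """Linear taper with two rates: r_lower (in/ft) below h_join and
    r_upper above it. Heights in feet, diameters outside bark in inches.
    The two-rate structure mirrors the abstract; all numbers are invented."""
    if h <= h_join:
        return dbh - r_lower * (h - h_bh)
    d_join = dbh - r_lower * (h_join - h_bh)
    return d_join - r_upper * (h - h_join)

# A taper rate is a diameter difference over a height difference:
# e.g. (20.0 - 16.8) inches over (17.3 - 4.5) feet -> 0.25 in/ft.
r_lower = (20.0 - 16.8) / (17.3 - 4.5)
print(round(predict_diameter(20.0, r_lower, 0.35, h=30.0), 3))  # -> 12.355
```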

  19. Using Simple Linear Regression to Assess the Success of the Montreal Protocol in Reducing Atmospheric Chlorofluorocarbons

    ERIC Educational Resources Information Center

    Nelson, Dean

    2009-01-01

    Following the Guidelines for Assessment and Instruction in Statistics Education (GAISE) recommendation to use real data, an example is presented in which simple linear regression is used to evaluate the effect of the Montreal Protocol on atmospheric concentration of chlorofluorocarbons. This simple set of data, obtained from a public archive, can…
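
The kind of analysis the article describes can be sketched with a closed-form least-squares fit; the concentration values below are invented stand-ins, not the archived measurements:

```python
def ols(x, y):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Illustrative (not measured) CFC-11-style concentrations, ppt, by year.
years = [1995, 1998, 2001, 2004, 2007]
ppt = [267, 263, 258, 253, 249]
slope, intercept = ols(years, ppt)
print(round(slope, 2))  # -> -1.53  (negative slope: declining concentration)
```

A negative, statistically significant slope over the post-Protocol period is the quantity such an exercise asks students to estimate and interpret.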

  20. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    PubMed Central

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-01-01

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated urban-rural differences in the social and behavioral factors influencing CRC screening. The objective of this study was to investigate the potential factors across urban-rural groups in the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. A weighted generalized linear mixed model (WGLIMM) was used to deal with the hierarchically structured data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalence in the four residence groups - urban, second city, suburban, and town/rural - was 45.8%, 46.9%, 53.7%, and 50.1%, respectively. The WGLIMM analysis showed a residence effect (p<0.0001), and residence groups had significant interactions with gender, age group, education level, and employment status (p<0.05). Multiple logistic regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p<0.05). Stratified by residence region, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second-city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful for dealing with clustered survey data. Social and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living area, such as urban versus rural regions. PMID:28952708

  2. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses, using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent the computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing interrelated outcomes with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage yields mean and covariance parameter estimates that differ from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results of these separate models increases as the magnitude of the association among the outcomes increases.
Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from an MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
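
A rough illustration of the second-stage idea, with invented data and per-subject OLS slopes standing in for true empirical Bayes predictors (which, as the abstract cautions for the separate-models route, are not equivalent):

```python
def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def pearson(u, v):
    """Pearson correlation of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u)
           * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

# Hypothetical repeated measures: two outcomes per subject over 4 visits.
visits = [0, 1, 2, 3]
outcome_a = {"s1": [1, 2, 3, 4], "s2": [2, 2.5, 3, 3.5], "s3": [0, 0.5, 1, 1.5]}
outcome_b = {"s1": [5, 7, 9, 11], "s2": [4, 5, 6, 7], "s3": [3, 3.5, 4, 4.5]}

# First stage: a per-subject slope for each outcome; second stage: associate them.
slopes_a = [slope(visits, outcome_a[s]) for s in ("s1", "s2", "s3")]
slopes_b = [slope(visits, outcome_b[s]) for s in ("s1", "s2", "s3")]
print(round(pearson(slopes_a, slopes_b), 3))  # -> 0.945
```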

  3. Investigation of the annealing temperature dependence of the spin pumping in Co20Fe60B20/Pt systems

    NASA Astrophysics Data System (ADS)

    Belmeguenai, M.; Aitoukaci, K.; Zighem, F.; Gabor, M. S.; Petrisor, T.; Mos, R. B.; Tiusan, C.

    2018-03-01

    Co20Fe60B20/Pt systems with variable thicknesses of Co20Fe60B20 and of Pt have been sputtered and then annealed at various temperatures (Ta) up to 300 °C. Microstrip line ferromagnetic resonance (MS-FMR) has been used to investigate the Co20Fe60B20 and Pt thickness dependencies of the magnetic damping enhancement due to spin pumping. Using diffusion and ballistic models for spin pumping, the spin mixing conductance and the spin diffusion length have been deduced from the Co20Fe60B20 and the Pt thickness dependencies of the Gilbert damping parameter α of the Co20Fe60B20/Pt heterostructures, respectively. Within the simple ballistic model, both the spin mixing conductance at the CoFeB/Pt interface and the spin-diffusion length of Pt increase with increasing annealing temperature and show a strong enhancement at a 300 °C annealing temperature. In contrast, the spin mixing conductance, which increases with Ta, shows a different trend from the spin diffusion length when using the diffusion model. Moreover, MS-FMR measurements revealed that the effective magnetization varies linearly with the Co20Fe60B20 inverse thickness due to the perpendicular interface anisotropy, which is found to decrease as the annealing temperature increases. They also revealed that the angular dependence of the resonance field is governed by a small uniaxial anisotropy, which is found to vary linearly with the Co20Fe60B20 inverse thickness of the annealed films, in contrast to that of the as-grown ones.

  4. Hot-spot mix in ignition-scale implosions on the NIF [Hot-spot mix in ignition-scale implosions on the National Ignition Facility (NIF)]

    DOE PAGES

    Regan, S. P.; Epstein, R.; Hammel, B. A.; ...

    2012-03-30

    Ignition of an inertial confinement fusion (ICF) target depends on the formation of a central hot spot with sufficient temperature and areal density. Radiative and conductive losses from the hot spot can be enhanced by hydrodynamic instabilities. The concentric spherical layers of current National Ignition Facility (NIF) ignition targets consist of a plastic ablator surrounding a thin shell of cryogenic thermonuclear fuel (i.e., hydrogen isotopes), with fuel vapor filling the interior volume. The Rev. 5 ablator is doped with Ge to minimize preheat of the ablator closest to the DT ice caused by Au M-band emission from the hohlraum x-ray drive. Richtmyer–Meshkov and Rayleigh–Taylor hydrodynamic instabilities seeded by high-mode (50 < ℓ < 200) ablator-surface perturbations can cause Ge-doped ablator to mix into the interior of the shell at the end of the acceleration phase. As the shell decelerates, it compresses the fuel vapor, forming a hot spot. K-shell line emission from the ionized Ge that has penetrated into the hot spot provides an experimental signature of hot-spot mix. The Ge emission from tritium–hydrogen–deuterium (THD) and DT cryogenic targets and gas-filled plastic shell capsules, which replace the THD layer with a mass-equivalent CH layer, was examined. The inferred amount of hot-spot mix mass, estimated from the Ge K-shell line brightness using a detailed atomic physics code, is typically below the 75 ng allowance for hot-spot mix. Furthermore, predictions of a simple mix model, based on linear growth of the measured surface-mass modulations, are consistent with the experimental results.

  6. Analytical Studies on the Synchronization of a Network of Linearly-Coupled Simple Chaotic Systems

    NASA Astrophysics Data System (ADS)

    Sivaganesh, G.; Arulgnanam, A.; Seethalakshmi, A. N.; Selvaraj, S.

    2018-05-01

    We present explicit generalized analytical solutions for a network of linearly-coupled simple chaotic systems. Analytical solutions are obtained for the normalized state equations of a network of linearly-coupled systems driven by a common chaotic drive system. Two-parameter bifurcation diagrams revealing the various hidden synchronization regions, such as complete, phase, and phase-lag synchronization, are identified using the analytical results. The synchronization dynamics and their stability are studied using phase portraits and the master stability function, respectively. Further, experimental results for linearly-coupled simple chaotic systems are presented to confirm the analytical results. The synchronization dynamics of a network of chaotic systems studied analytically is reported here for the first time.

  7. Sensitive SERS detection of lead ions via DNAzyme based quadratic signal amplification.

    PubMed

    Tian, Aihua; Liu, Yu; Gao, Jian

    2017-08-15

    Highly sensitive detection of Pb²⁺ is essential for water quality control, clinical toxicology, and industrial monitoring. In this work, a simple and novel DNAzyme-based SERS quadratic amplification method is developed for the detection of Pb²⁺. This strategy possesses some remarkable features compared to conventional DNAzyme-based SERS methods: (i) by coupling a DNAzyme-activated hybridization chain reaction (HCR) with bio-barcodes, a quadratic amplification scheme is designed using the unique catalytic selectivity of the DNAzyme, and the SERS signal is significantly amplified; the method is rapid, with a detection time of 2 h. (ii) The problem of high background induced by excess bio-barcodes is circumvented by using magnetic beads (MBs) as the carrier of signal-output products, and the sensing system is simple in design and can easily be carried out by simple mixing and incubation. Given these unique and attractive characteristics, a simple and universal strategy is designed to accomplish sensitive detection of Pb²⁺. The detection limit of Pb²⁺ via SERS detection is 70 fM, with a linear range from 1.0×10⁻¹³ M to 1.0×10⁻⁷ M. The method can be further extended to the quantitative detection of a variety of targets by replacing the lead-responsive DNAzyme with other functional DNA. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    NASA Astrophysics Data System (ADS)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.
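
A much-simplified sketch of the linear mixing model that underlies such unmixing: two hypothetical endmember spectra and a sum-to-one abundance constraint. The paper's NMF-based linear/linear-quadratic method is considerably more general:

```python
def unmix_two(pixel, e1, e2):
    """Sum-to-one constrained least-squares abundance of endmember e1 in a
    two-endmember linear mixing model x = a*e1 + (1-a)*e2."""
    d = [u - v for u, v in zip(e1, e2)]       # e1 - e2
    r = [u - v for u, v in zip(pixel, e2)]    # pixel - e2
    a = sum(p * q for p, q in zip(r, d)) / sum(p * p for p in d)
    return min(1.0, max(0.0, a))              # clip to the physical range [0, 1]

veg = [0.05, 0.08, 0.45, 0.50]    # made-up 4-band endmember spectra
soil = [0.25, 0.30, 0.35, 0.40]
mixed = [0.7 * v + 0.3 * s for v, s in zip(veg, soil)]
print(round(unmix_two(mixed, veg, soil), 3))  # -> 0.7 (abundance recovered)
```

Real scenes need many endmembers, nonnegativity on all abundances, and (as the paper argues) quadratic cross-terms where multiple scattering matters.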

  9. Criteria for equality in two entropic inequalities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirokov, M. E., E-mail: msh@mi.ras.ru

    2014-07-31

    We obtain a simple criterion for local equality between the constrained Holevo capacity and the quantum mutual information of a quantum channel. This shows that the set of all states for which this equality holds is determined by the kernel of the channel (as a linear map). Applications to Bosonic Gaussian channels are considered. It is shown that for a Gaussian channel having no completely depolarizing components the above characteristics may coincide only at non-Gaussian mixed states and a criterion for the existence of such states is given. All the obtained results may be reformulated as conditions for equality between the constrained Holevo capacity of a quantum channel and the input von Neumann entropy. Bibliography: 20 titles.

  10. Estimating linear temporal trends from aggregated environmental monitoring data

    USGS Publications Warehouse

    Erickson, Richard A.; Gray, Brian R.; Eager, Eric A.

    2017-01-01

    Trend estimates are often used as part of environmental monitoring programs. These trends inform managers (e.g., are desired species increasing or undesired species decreasing?). Data collected from environmental monitoring programs are often aggregated (i.e., averaged), which confounds sampling and process variation. State-space models allow sampling and process variation to be separated. We used simulated time series to compare linear trend estimates from three state-space models, a simple linear regression model, and an auto-regressive model. We also compared the performance of these five models in estimating trends from a long-term monitoring program, specifically for two species of fish and four species of aquatic vegetation from the Upper Mississippi River system. We found that simple linear regression performed best of the five models because it was best able to recover parameters and converged consistently; conversely, it did the worst job of estimating population size in a given year. The state-space models did not estimate trends well, but estimated population sizes best when they converged. Overall, simple linear regression performed better than the more complex autoregressive and state-space models when used to analyze aggregated environmental monitoring data.
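
A toy illustration of the aggregation point: with balanced, noise-free data, averaging across sites preserves the linear trend while hiding the site-to-site variation that state-space models are designed to separate (all numbers invented):

```python
def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

years = [0, 1, 2, 3]
# Hypothetical counts at three sites; the true trend is +2 per year.
sites = [[10, 12, 14, 16], [13, 15, 17, 19], [7, 9, 11, 13]]

# Pooled per-site observations vs. the aggregated (site-averaged) series.
pooled_x = [year for year in years for _ in sites]
pooled_y = [site[i] for i in range(len(years)) for site in sites]
mean_y = [sum(site[i] for site in sites) / len(sites) for i in range(len(years))]

print(slope(pooled_x, pooled_y), slope(years, mean_y))  # -> 2.0 2.0
```

Both fits recover the same slope, but the aggregated series no longer carries any information about between-site (sampling) spread, which is what the compared models treat differently.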

  11. A vine copula mixed effect model for trivariate meta-analysis of diagnostic test accuracy studies accounting for disease prevalence.

    PubMed

    Nikoloulopoulos, Aristidis K

    2017-10-01

    A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement on the trivariate generalized linear mixed model in fit to data, and makes the argument for moving to vine copula random effects models, especially because of their richness (including reflection-asymmetric tail dependence) and their computational feasibility despite their three-dimensionality.

  12. New Hampshire binder and mix review.

    DOT National Transportation Integrated Search

    2012-08-01

    This review was initiated to compare relative rut testing and simple performance tests (now known as Asphalt Mix Performance Tests) for the New Hampshire inch mix with 15% Recycled Asphalt Pavement (RAP). The tested mixes were made from ...

  13. Single molecule studies of flexible polymers under shear and mixed flows

    NASA Astrophysics Data System (ADS)

    Teixeira, Rodrigo Esquivel

    We combine manipulation and single molecule visualization of flexible DNA polymers with the generation of controlled simple shear and planar mixed flows for the investigation of polymer flow physics. With the ability to observe polymer conformation directly and follow its evolution in both dilute and entangled regimes, we provide a direct test for molecular models. The coil-stretch transition of polymer extension was investigated in planar mixed flows approaching simple shear. Visualization of individual molecules revealed a sharp coil-stretch transition in the steady-state length of the polymer with increasing strain rate in flows slightly more straining than rotational. In slightly more rotational flows significant transient polymer deformation was observed. Next, dilute polymers were visualized in the flow-gradient plane of a steady shear flow. By exploiting the linear proportionality between polymer mass and image intensity, the radius of gyration tensor elements (Gij) were measured over time. Then, the Giesekus stress tensor was used to obtain the bulk shear viscosity and first normal stress coefficient, thus performing rheology measurements from single molecule conformations. End-over-end tumbling was discovered for the first time, confirming a long-standing prediction and numerous single-chain computer simulation studies. The tumbling frequency followed Wi^0.62, and an equation derived from simple advection and diffusion arguments was able to reproduce these observations. Power spectral densities of chain orientation trajectories were found to be single-peaked around the tumbling frequency, thus suggesting a periodic character for polymer dynamics. Finally, we investigated well-entangled polymer solutions. Identical preparations were used in both rheological characterizations and single molecule observations under a variety of shear flow histories.
Polymer extension relaxations after the cessation of a fast shear flow revealed two intrinsic characteristic times. The fast one was insensitive to concentration and at least an order of magnitude larger than the Rouse time presupposed by theoretical treatments. The slow timescale grew steeply with concentration, in qualitative agreement with theory. Transient and steady shear flows showed vastly different conformations even among identical molecules subjected to identical flow histories. This "molecular individualism" of well-entangled solutions and its broad conformational distributions calls into question the validity of preaveraging approximations made in molecular-level theories.

  14. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  15. Mixed H2/H∞ output-feedback control of second-order neutral systems with time-varying state and input delays.

    PubMed

    Karimi, Hamid Reza; Gao, Huijun

    2008-07-01

    A mixed H2/H∞ output-feedback control design methodology is presented in this paper for second-order neutral linear systems with time-varying state and input delays. Delay-dependent sufficient conditions for the design of a desired control are given in terms of linear matrix inequalities (LMIs). A controller, which guarantees asymptotic stability and a mixed H2/H∞ performance for the closed-loop system of the second-order neutral linear system, is then developed directly instead of coupling the model to a first-order neutral system. A Lyapunov-Krasovskii method underlies the LMI-based mixed H2/H∞ output-feedback control design using some free weighting matrices. The simulation results illustrate the effectiveness of the proposed methodology.

  16. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded from a given pixel by a low-resolution hyperspectral remote sensor, even leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) has become a frontier research area in remote sensing. Unmixing algorithms based on geometry have become popular since hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most such algorithms rest on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, so the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated and real hyperspectral data, and the results indicate that the proposed method obtains the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.

  17. MULTIVARIATE LINEAR MIXED MODELS FOR MULTIPLE OUTCOMES. (R824757)

    EPA Science Inventory

    We propose a multivariate linear mixed model (MLMM) for the analysis of multiple outcomes, which generalizes the latent variable model of Sammel and Ryan. The proposed model assumes a flexible correlation structure among the multiple outcomes, and allows a global test of the impact of ...

  18. System and method for investigating sub-surface features of a rock formation with acoustic sources generating coded signals

    DOEpatents

    Vu, Cung Khac; Nihei, Kurt; Johnson, Paul A; Guyer, Robert; Ten Cate, James A; Le Bas, Pierre-Yves; Larmat, Carene S

    2014-12-30

    A system and a method for investigating rock formations includes generating, by a first acoustic source, a first acoustic signal comprising a first plurality of pulses, each pulse including a first modulated signal at a central frequency; and generating, by a second acoustic source, a second acoustic signal comprising a second plurality of pulses. A receiver arranged within the borehole receives a detected signal including a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within the intersection volume. The method also includes processing the received signal to extract the signal generated by the non-linear mixing process over noise or over signals generated by a linear interaction process, or both.
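
The sum- and difference-frequency signature that distinguishes non-linear mixing from linear superposition can be sketched numerically. The tone frequencies and the weak quadratic nonlinearity below are illustrative, not the patent's actual source coding or processing:

```python
import math

def amplitude(signal, freq, fs):
    """Single-bin DFT amplitude of `signal` at `freq` (Hz), sample rate fs."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * k / fs) for k, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

fs, n = 1000, 1000            # 1 kHz sampling, 1 s record
f1, f2 = 60.0, 25.0           # two source tones (arbitrary values)
t = [k / fs for k in range(n)]
linear = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x) for x in t]
nonlinear = [s + 0.1 * s * s for s in linear]   # weak quadratic mixing

# The f1+f2 tone appears only after the nonlinearity acts on the superposition.
print(round(amplitude(linear, f1 + f2, fs), 3),
      round(amplitude(nonlinear, f1 + f2, fs), 3))  # -> 0.0 0.1
```

Extracting energy at f1+f2 and f1-f2 (here 85 Hz and 35 Hz) while rejecting the linear components is, in spirit, what the coded-signal processing step does.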

  19. The effects of mixotrophy on the stability and dynamics of a simple planktonic food web

    USGS Publications Warehouse

    Jost, Christian; Lawrence, Cathryn A.; Campolongo, Francesca; van de Bund, Wouter; Hill, Sheryl; DeAngelis, Donald L.

    2004-01-01

    Recognition of the microbial loop as an important part of aquatic ecosystems disrupted the notion of simple linear food chains. However, current research suggests that even the microbial loop paradigm is a gross simplification of microbial interactions due to the presence of mixotrophs—organisms that both photosynthesize and graze. We present a simple food web model with four trophic species, three of them arranged in a food chain (nutrients–autotrophs–herbivores) and the fourth as a mixotroph with links to both the nutrients and the autotrophs. This model is used to study the general implications of inclusion of the mixotrophic link in microbial food webs and the specific predictions for a parameterization that describes open ocean mixed layer plankton dynamics. The analysis indicates that the system parameters reside in a region of the parameter space where the dynamics converge to a stable equilibrium rather than displaying periodic or chaotic solutions. However, convergence requires weeks to months, suggesting that the system would never reach equilibrium in the ocean due to alteration of the physical forcing regime. Most importantly, the mixotrophic grazing link seems to stabilize the system in this region of the parameter space, particularly when nutrient recycling feedback loops are included.
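
A minimal numerical sketch of a four-compartment web of this shape (nutrients, autotrophs, herbivores, and a mixotroph linked to both nutrients and autotrophs). All rate constants are invented for illustration and are not the parameterization used in the paper:

```python
def step(state, dt=0.01):
    """One Euler step of a toy nutrient-autotroph-herbivore-mixotroph web.
    The mixotroph m both takes up nutrient n (photosynthesis) and grazes a."""
    n, a, h, m = state
    dn = 0.1 - 0.5 * n * a - 0.2 * n * m + 0.05 * h   # input, uptake, recycling
    da = 0.5 * n * a - 0.3 * a * h - 0.1 * a * m      # growth minus two grazers
    dh = 0.2 * a * h - 0.1 * h                        # herbivore on autotrophs
    dm = 0.2 * n * m + 0.1 * a * m - 0.05 * m         # the dual mixotroph links
    return (n + dn * dt, a + da * dt, h + dh * dt, m + dm * dt)

state = (1.0, 0.5, 0.2, 0.1)
for _ in range(5000):          # integrate 50 time units
    state = step(state)
print([round(v, 3) for v in state])
```

Even this caricature shows the structural point: the mixotroph adds a second pathway from nutrients to grazers, which is the link whose stabilizing effect the paper analyzes.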

  20. Analysis of longitudinal diffusion-weighted images in healthy and pathological aging: An ADNI study.

    PubMed

    Kruggel, Frithjof; Masaki, Fumitaro; Solodkin, Ana

    2017-02-15

    The widely used framework of voxel-based morphometry for analyzing neuroimages is extended here to model longitudinal imaging data by exchanging the linear model with a linear mixed-effects model. The new approach is employed for analyzing a large longitudinal sample of 756 diffusion-weighted images acquired in 177 subjects of the Alzheimer's Disease Neuroimaging Initiative (ADNI). While sample- and group-level results from both approaches are equivalent, the mixed-effects model yields information at the single subject level. Interestingly, at the individual level the relevant parameter captures specific differences associated with aging. In addition, our approach highlights white matter areas that reliably discriminate between patients with Alzheimer's disease and healthy controls with a predictive power of 0.99; these include the hippocampal alveus, the para-hippocampal white matter, the white matter of the posterior cingulate, and the optic tracts. Notably, in this context the classifier assigns a sub-population of patients with minimal cognitive impairment to the pathological domain. Our classifier offers promising features for an accessible biomarker that predicts the risk of conversion to Alzheimer's disease. Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at: http://adni.loni.usc.edu/wp-content/uploads/how to apply/ADNI Acknowledgement List.pdf. Significance statement: This study assesses neuro-degenerative processes in the brain's white matter as revealed by diffusion-weighted imaging, in order to discriminate healthy from pathological aging in a large sample of elderly subjects. 
The analysis of time-series examinations in a linear mixed effects model allowed the discrimination of population-based aging processes from individual determinants. We demonstrate that a simple classifier based on white matter imaging data is able to predict the conversion to Alzheimer's disease with a high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Spatial and temporal variability of total organic carbon along 140°W in the equatorial Pacific Ocean in 1992

    NASA Astrophysics Data System (ADS)

    Peltzer, Edward T.; Hayward, Nancy A.

    Total organic carbon (TOC) was analyzed on four transects along 140°W in 1992 using a high temperature combustion/discrete injection (HTC/DI) analyzer. For two of the transects, the analyses were conducted on-board ship. Mixed-layer concentrations of organic carbon varied from about 80 μM C at either end of the transect (12°N and 12°S) to about 60 μM C at the equator. Total organic carbon concentrations decreased rapidly below the mixed-layer to about 38-40 μM C at 1000 m across the transect. Little variation was observed below this depth; deep water concentrations below 2000 m were virtually uniform at about 36 μM C. Repeat measurements made on subsequent cruises consistently found the same concentrations at 1000 m or deeper, but substantial variations were observed in the mixed-layer and the upper water column above 400 m depth. Linear mixing models of total organic carbon versus σθ exhibited zones of organic carbon formation and consumption. TOC was found to be inversely correlated with apparent oxygen utilization (AOU) in the region between the mixed-layer and the oxygen minimum. In the mixed-layer, TOC concentrations varied seasonally. Part of the variation in TOC at the equator was driven by changes in the upwelling rate in response to variations in physical forcing related to an El Niño and to the passage of tropical instability waves. TOC export fluxes, calculated from simple box models, averaged 8±4 mmol C m⁻² day⁻¹ at the equator and also varied seasonally. These export fluxes account for 50-75% of the total carbon deficit and are consistent with other estimates and model predictions.
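    The linear mixing analysis above can be illustrated with a toy two-endmember calculation: if TOC mixed conservatively, it would vary linearly with σθ between two water masses, and deviations from that mixing line flag net production (positive) or consumption (negative). The endmember values below are hypothetical, not the cruise data.

```python
# Two-endmember linear mixing sketch for TOC vs. sigma-theta.
# Hypothetical endmembers: (sigma_theta, TOC in uM C).
surface = (22.0, 80.0)   # light, TOC-rich surface water
deep = (27.0, 36.0)      # dense, TOC-poor deep water

def mixing_line(sigma):
    """TOC predicted by conservative mixing at density sigma."""
    (s1, t1), (s2, t2) = surface, deep
    frac_deep = (sigma - s1) / (s2 - s1)
    return t1 + frac_deep * (t2 - t1)

def toc_anomaly(sigma, toc_observed):
    """Positive -> net organic carbon production; negative -> consumption."""
    return toc_observed - mixing_line(sigma)

# A sample exactly on the mixing line has zero anomaly:
on_line = toc_anomaly(24.5, mixing_line(24.5))
# A sample with excess TOC at mid-density suggests local production:
excess = toc_anomaly(24.5, 65.0)
```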

  2. A Simple Experiment for Teaching Process Intensification by Static Mixing in Chemical Reaction Engineering

    ERIC Educational Resources Information Center

    Baz-Rodríguez, Sergio; Herrera-Soberanis, Natali; Rodríguez-Novelo, Miguel; Guillén-Francisc, Juana; Rocha-Uribe, José

    2016-01-01

    An experiment for teaching mixing intensification in reaction engineering is described. For this, a simple tubular reactor was constructed; helical static mixer elements were fabricated from stainless steel strips and inserted into the reactor. With and without the internals, the equipment operates as a static mixer reactor or a laminar flow…

  3. Analyzing Longitudinal Data with Multilevel Models: An Example with Individuals Living with Lower Extremity Intra-articular Fractures

    PubMed Central

    Kwok, Oi-Man; Underhill, Andrea T.; Berry, Jack W.; Luo, Wen; Elliott, Timothy R.; Yoon, Myeongsun

    2008-01-01

    The use and quality of longitudinal research designs has increased over the past two decades, and new approaches for analyzing longitudinal data, including multi-level modeling (MLM) and latent growth modeling (LGM), have been developed. The purpose of this paper is to demonstrate the use of MLM and its advantages in analyzing longitudinal data. Data from a sample of individuals with intra-articular fractures of the lower extremity from the University of Alabama at Birmingham’s Injury Control Research Center is analyzed using both SAS PROC MIXED and SPSS MIXED. We start our presentation with a discussion of data preparation for MLM analyses. We then provide example analyses of different growth models, including a simple linear growth model and a model with a time-invariant covariate, with interpretation for all the parameters in the models. More complicated growth models with different between- and within-individual covariance structures and nonlinear models are discussed. Finally, information related to MLM analysis such as online resources is provided at the end of the paper. PMID:19649151
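    As a minimal illustration of why clustering matters in such longitudinal data (hypothetical numbers, not the UAB data, and plain Python rather than SAS or SPSS): when observations within a subject are correlated, treating all measurements as independent underestimates the standard error of a group mean, which is the core problem MLM addresses.

```python
import random
import statistics as stats

random.seed(42)

# Simulate 20 subjects, 10 repeated measurements each, with a
# subject-level random intercept (sd = 2) plus residual noise (sd = 1),
# giving a high intra-class correlation.
subjects = []
for _ in range(20):
    intercept = random.gauss(0, 2)
    subjects.append([intercept + random.gauss(0, 1) for _ in range(10)])

all_obs = [y for subj in subjects for y in subj]

# Naive SE: pretend all 200 observations are independent.
naive_se = stats.stdev(all_obs) / len(all_obs) ** 0.5

# Cluster-aware SE: summarize each subject first, then treat the 20
# subject means as the effective sample (roughly what a random-intercept
# mixed model accounts for).
subj_means = [stats.mean(s) for s in subjects]
cluster_se = stats.stdev(subj_means) / len(subj_means) ** 0.5
```

    With these settings the naive standard error is markedly smaller than the cluster-aware one, which is exactly the downward bias that leads to spuriously small p-values in simple linear models applied to clustered data.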

  4. Generalized Multilevel Structural Equation Modeling

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew

    2004-01-01

    A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…

  5. Chemiluminescent Determination of Oxamyl in Drinking Water and Tomato Using Online Postcolumn UV Irradiation in a Chromatographic System.

    PubMed

    Murillo Pulgarín, José A; García Bermejo, Luisa F; Durán, Armando Carrasquero

    2018-03-07

    High-performance liquid chromatography (HPLC) was used to separate oxamyl from other pesticides in drinking water and tomato paste. The eluate emerging from the column tail was mixed with an alkaline solution of Co²⁺ in EDTA and irradiated with UV light to induce photolysis of the carbamate in order to obtain free radicals and other reactive species that oxidize luminol and produce chemiluminescence (CL) as a result. The intensity of the CL signal was monitored in the form of chromatographic peaks. Under the optimum operating conditions for the HPLC-UV-CL system, the analyte concentration was linearly related to peak area. The limit of detection as determined in accordance with the IUPAC criterion was 0.17 mg L⁻¹. Oxamyl was successfully extracted with recoveries of 88.7-103.1% from spiked tomato paste by using a simple QuEChERS (Quick, Easy, Cheap, Effective, Rugged, and Safe) sample preparation approach. Similar recoveries were obtained from drinking water samples spiked with oxamyl concentrations above the LOD. The proposed method is a simple, fast, accurate choice for quantifying this pesticide.
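    The calibration and detection-limit workflow reported above follows a standard pattern: fit a linear calibration of signal (here, peak area) against concentration, then take the detection limit as a multiple of the blank noise divided by the slope. Below is a generic least-squares sketch with made-up numbers, using k = 3 for an IUPAC-style criterion; it is not the paper's data or exact procedure.

```python
# Generic linear calibration + LOD sketch (hypothetical data, k = 3).
conc = [0.0, 0.5, 1.0, 2.0, 4.0]          # mg/L standards
area = [1.0, 6.1, 10.9, 21.2, 40.8]       # measured peak areas

n = len(conc)
mx = sum(conc) / n
my = sum(area) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(conc, area))
         / sum((x - mx) ** 2 for x in conc))
intercept = my - slope * mx

# Blank noise: standard deviation of replicate blank measurements.
blanks = [0.9, 1.1, 1.0, 0.8, 1.2]
mb = sum(blanks) / len(blanks)
s_blank = (sum((b - mb) ** 2 for b in blanks) / (len(blanks) - 1)) ** 0.5

lod = 3 * s_blank / slope  # IUPAC-style detection limit, k = 3
```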

  6. A simple and selective method for determination of phthalate biomarkers in vegetable samples by high pressure liquid chromatography-electrospray ionization-tandem mass spectrometry.

    PubMed

    Zhou, Xi; Cui, Kunyan; Zeng, Feng; Li, Shoucong; Zeng, Zunxiang

    2016-06-01

    In the present study, solid-phase extraction cartridges including silica reversed-phase Isolute C18, polymeric reversed-phase Oasis HLB and mixed-mode anion-exchange Oasis MAX, and liquid-liquid extractions with ethyl acetate, n-hexane, dichloromethane and their mixtures were compared for clean-up of phthalate monoesters from vegetable samples. Best recoveries and minimised matrix effects were achieved using ethyl acetate/n-hexane liquid-liquid extraction for these target compounds. A simple and selective method, based on sample preparation by ultrasonic extraction and liquid-liquid extraction clean-up, for the determination of phthalate monoesters in vegetable samples by liquid chromatography/electrospray ionisation-tandem mass spectrometry was developed. The method detection limits for phthalate monoesters ranged from 0.013 to 0.120 ng g⁻¹. Good linearity (r² > 0.991) between the MQLs and 1000× the MQLs was achieved. The intra- and inter-day relative standard deviation values were less than 11.8%. The method was successfully used to determine phthalate monoester metabolites in the vegetable samples. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. On the nature of fast sausage waves in coronal loops

    NASA Astrophysics Data System (ADS)

    Bahari, Karam

    2018-05-01

    The effect of the parameters of coronal loops on the nature of fast sausage waves is investigated. To do this, three models of the coronal loop are considered: a simple loop model, a current-carrying loop model, and a model with radially structured density called the "Inner μ" profile. For all the models, the magnetohydrodynamic (MHD) equations are solved analytically in the linear approximation and the restoring forces of the oscillations are obtained. The ratio of the magnetic tension force to the pressure gradient force is obtained as a function of the distance from the axis of the loop. In the simple loop model, for all values of the loop parameters, the fast sausage waves have a mixed nature of Alfvénic and fast MHD waves; in the current-carrying loop model with a thick annulus and low density contrast, the fast sausage waves can be considered purely Alfvénic waves in the core region of the loop; and in the "Inner μ" profile, for each set of loop parameters the wave can be considered a purely Alfvénic wave in some regions of the loop.

  8. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…

  9. Sustainability of a Compartmentalized Host-Parasite Replicator System under Periodic Washout-Mixing Cycles

    PubMed Central

    Furubayashi, Taro

    2018-01-01

    The emergence and dominance of parasitic replicators are among the major hurdles for the proliferation of primitive replicators. Compartmentalization of replicators is proposed to relieve the parasite dominance; however, it remains unclear under what conditions simple compartmentalization uncoupled with internal reaction secures the long-term survival of a population of primitive replicators against incessant parasite emergence. Here, we investigate the sustainability of a compartmentalized host-parasite replicator (CHPR) system undergoing periodic washout-mixing cycles, by constructing a mathematical model and performing extensive simulations. We describe sustainable landscapes of the CHPR system in the parameter space and elucidate the mechanism of phase transitions between sustainable and extinct regions. Our findings revealed that a large population size of compartments, a high mixing intensity, and a modest amount of nutrients are important factors for the robust survival of replicators. We also found two distinctive sustainable phases with different mixing intensities. These results suggest that a population of simple host–parasite replicators assumed before the origin of life can be sustained by a simple compartmentalization with periodic washout-mixing processes. PMID:29373536

  10. Differential solvation of intrinsically disordered linkers drives the formation of spatially organized droplets in ternary systems of linear multivalent proteins

    NASA Astrophysics Data System (ADS)

    Harmon, Tyler S.; Holehouse, Alex S.; Pappu, Rohit V.

    2018-04-01

    Intracellular biomolecular condensates are membraneless organelles that encompass large numbers of multivalent protein and nucleic acid molecules. The bodies assemble via a combination of liquid–liquid phase separation and gelation. A majority of condensates include multiple components and show multilayered organization as opposed to being well-mixed unitary liquids. Here, we put forward a simple thermodynamic framework to describe the emergence of spatially organized droplets in multicomponent systems comprising linear multivalent polymers, also known as associative polymers. These polymers, which mimic proteins and/or RNA, have the architecture of domains or motifs known as stickers that are interspersed by flexible spacers known as linkers. Using a minimalist numerical model for a four-component system, we have identified features of linear multivalent molecules that are necessary and sufficient for generating spatially organized droplets. We show that differences in sequence-specific effective solvation volumes of disordered linkers between interaction domains enable the formation of spatially organized droplets. Molecules with linkers that are preferentially solvated are driven to the interface with the bulk solvent, whereas molecules that have linkers with negligible effective solvation volumes form cores in the core–shell architectures that emerge in the minimalist four-component systems. Our modeling has relevance for understanding the physical determinants of spatially organized membraneless organelles.

  11. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-06-01

    The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  12. Global Estimates of Errors in Quantum Computation by the Feynman-Vernon Formalism

    NASA Astrophysics Data System (ADS)

    Aurell, Erik

    2018-04-01

    The operation of a quantum computer is considered as a general quantum operation on a mixed state on many qubits followed by a measurement. The general quantum operation is further represented as a Feynman-Vernon double path integral over the histories of the qubits and of an environment, and afterward tracing out the environment. The qubit histories are taken to be paths on the two-sphere S^2 as in Klauder's coherent-state path integral of spin, and the environment is assumed to consist of harmonic oscillators initially in thermal equilibrium, and linearly coupled to qubit operators \hat{S}_z. The environment can then be integrated out to give a Feynman-Vernon influence action coupling the forward and backward histories of the qubits. This representation allows one to derive, in a simple way, estimates showing that the total error of operation of a quantum computer without error correction scales linearly with the number of qubits and the time of operation. It also allows one to discuss Kitaev's toric code interacting with an environment in the same manner.

  13. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
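    The core of a Monte Carlo randomization test is easy to sketch: recompute the test statistic under many re-randomizations drawn from the actual randomization procedure. The toy below uses complete randomization with equal group sizes and a difference in means; permuted blocks or biased coin designs, as discussed above, would substitute a different sequence generator. The data are hypothetical.

```python
import random

random.seed(7)

def mc_randomization_test(outcomes, assignment, n_mc=2000):
    """Two-sided Monte Carlo randomization test for a difference in means
    under complete randomization with fixed group sizes."""
    def diff_means(assign):
        t = [y for y, a in zip(outcomes, assign) if a == 1]
        c = [y for y, a in zip(outcomes, assign) if a == 0]
        return sum(t) / len(t) - sum(c) / len(c)

    observed = abs(diff_means(assignment))
    hits = 0
    for _ in range(n_mc):
        perm = assignment[:]       # re-randomize: shuffle the labels,
        random.shuffle(perm)       # preserving the group sizes
        if abs(diff_means(perm)) >= observed:
            hits += 1
    return (hits + 1) / (n_mc + 1)  # add-one Monte Carlo p-value

# Hypothetical trial: a clear treatment effect should give a small p-value.
y = [5.1, 4.8, 5.5, 5.0, 4.9, 5.2, 7.9, 8.3, 8.1, 7.6, 8.0, 8.4]
a = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
p = mc_randomization_test(y, a)
```

    For a regression-based primary outcome, as in the paper, the shuffled statistic would be computed on model residuals rather than raw means, but the re-randomization loop is unchanged.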

  14. Time and frequency domain analysis of sampled data controllers via mixed operation equations

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1981-01-01

    Specification of the mathematical equations required to define the dynamic response of a linear continuous plant, subject to sampled data control, is complicated by the fact that the digital components of the control system cannot be modeled via linear ordinary differential equations. This complication can be overcome by introducing two new mathematical operations; namely, the operations of zero-order hold and digital delay. It is shown that by direct utilization of these operations, a set of linear mixed operation equations can be written and used to define the dynamic response characteristics of the controlled system. It is also shown how these linear mixed operation equations lead, in an automatable manner, directly to a set of finite difference equations which are in a format compatible with follow-on time and frequency domain analysis methods.
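    The zero-order-hold operation at the heart of this approach has a simple closed form for a scalar linear plant: with x' = a*x + b*u and the input held constant over each sample period T, the exact discrete update is x[k+1] = e^(aT) x[k] + (b/a)(e^(aT) - 1) u[k]. A minimal sketch with hypothetical plant values, cross-checked against fine-grained integration:

```python
import math

def zoh_step(x, u, a, b, T):
    """Exact one-step update of x' = a*x + b*u with u held constant over T."""
    phi = math.exp(a * T)
    gamma = (b / a) * (phi - 1.0)   # assumes a != 0
    return phi * x + gamma * u

def euler_reference(x, u, a, b, T, n=100000):
    """Fine-grained Euler integration of the same held-input system."""
    dt = T / n
    for _ in range(n):
        x += dt * (a * x + b * u)
    return x

a, b, T = -2.0, 1.0, 0.1            # hypothetical stable plant, 0.1 s sample
x_zoh = zoh_step(1.0, 0.5, a, b, T)
x_ref = euler_reference(1.0, 0.5, a, b, T)
```

    This is the discretization that turns the continuous plant plus hold into the finite difference form referenced above; the digital delay simply shifts the held input by one sample.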

  15. Final Report---Optimization Under Nonconvexity and Uncertainty: Algorithms and Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeff Linderoth

    2011-11-06

    The goal of this work was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problem classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state of the art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems. The focus of the work done in the continuation was on Mixed Integer Nonlinear Programs (MINLPs) and Mixed Integer Linear Programs (MILPs), especially those containing a great deal of symmetry.

  16. A multiphase non-linear mixed effects model: An application to spirometry after lung transplantation.

    PubMed

    Rajeswaran, Jeevanantham; Blackstone, Eugene H

    2017-02-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time-varying coefficients.

  17. The fidelity of paleomagnetic records carried by magnetosome chains

    NASA Astrophysics Data System (ADS)

    Paterson, Greig; Wang, Yinzhao; Pan, Yongxin

    2013-04-01

    Magnetotactic bacteria (MTB) and their fossilized magnetosomes are being increasingly identified in geological records from a broad range of environments and are believed to be a dominant carrier of magnetic remanence in sediments. Despite their prevalence, little is known about how well chains of biomineralized magnetic particles record the geomagnetic field. Using cultured Magnetospirillum magneticum strain AMB-1, we have conducted simple 2D (i.e., zero inclination) experiments to simulate NRM acquisition in order to assess the efficiency with which magnetosome chains align along magnetic field lines and the implications that this has for paleomagnetic records. Our results indicate that the NRM acquired by deposited MTB is near linear with the applied field, but that deviations from linearity up to 10% are discernible at high fields (120 μT). This slight non-linearity is propagated through into the calculation of both ARM and IRM normalized relative paleointensity (RPI) variations. RPI records, carried by magnetofossils, which vary by more than a factor of 5-6, are likely to misestimate the extreme values by ~10-15 % due to non-linear effects. This degree of non-linearity, however, is comparable or smaller than measured from redeposition experiments using detrital material, which suggests that over the range of typical geomagnetic field strengths explored here, MTB appear to be good recorders of the paleomagnetic field. The RPI discrepancies between nearby geological records, which have been inferred to be the result of abundant biogenic magnetic minerals, are likely to be related to the mixing of biogenic and detrital magnetic components, or through chemical processes that may subsequently affect the NRM carried by fossil magnetosomes.

  18. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  19. Model Selection with the Linear Mixed Model for Longitudinal Data

    ERIC Educational Resources Information Center

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  20. Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth

    ERIC Educational Resources Information Center

    Jeon, Minjeong

    2012-01-01

    Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…

  1. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Treesearch

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  2. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    ERIC Educational Resources Information Center

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  3. Mixed H∞ and passive control for linear switched systems via hybrid control approach

    NASA Astrophysics Data System (ADS)

    Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin

    2018-03-01

    This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as a mixed weighted H∞ and passivity performance. Then, hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instants. The design of the state updating controllers depends not only on the pre-switching and post-switching subsystems, but also on the measurable output signal. The hybrid controllers proposed in this paper include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.

  4. Efficacy of a Simple Formulation Composed of Nematode-Trapping Fungi and Bidens pilosa var. radiata Scherff Aqueous Extracts (BPE) for Controlling the Southern Root-Knot Nematode

    PubMed Central

    Ajitomi, Atsushi; Taba, Satoshi; Ajitomi, Yoshino; Kinjo, Misa; Sekine, Ken-taro

    2018-01-01

    We tested a formulation composed of a mixture of Bidens pilosa var. radiata extract (BPE) and nematode-trapping fungi for its effects on Meloidogyne incognita. In earlier evaluations of the effects of plant extracts on the hyphal growth of 5 species of nematode-trapping fungi with different capture organs (traps), the growth of all species was slightly inhibited. However, an investigation on the number of capture organs and nematode-trapping rates revealed that Arthrobotrys dactyloides formed significantly more rings and nematode traps than those of the control. An evaluation of simple mixed formulations prepared using sodium alginate showed that nematodes were captured with all formulations tested. The simple mixed formulation showed a particularly high capture rate. Furthermore, in a pot test, although the effects of a single formulation made from the fungus or plant extract were acceptable, the efficacy of the simple mixed formulation against M. incognita root-knot formation was particularly high. PMID:29311429

  5. A simple smoothness indicator for the WENO scheme with adaptive order

    NASA Astrophysics Data System (ADS)

    Huang, Cong; Chen, Li Li

    2018-01-01

    The fifth order WENO scheme with adaptive order is well suited to solving hyperbolic conservation laws; its reconstruction is a convex combination of a fifth order linear reconstruction and three third order linear reconstructions. On a uniform mesh, however, the computational cost of the smoothness indicator for the fifth order linear reconstruction is comparable to the combined cost of the indicators for the three third order linear reconstructions, which is prohibitively heavy; on a non-uniform mesh, the explicit form of the smoothness indicator for the fifth order linear reconstruction is difficult to obtain, and its computational cost is heavier still. To overcome these problems, a simple smoothness indicator for the fifth order linear reconstruction is proposed in this paper.
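    For context, the classical Jiang-Shu smoothness indicators for the three third order substencils of fifth order WENO (uniform mesh) are cheap quadratic forms of the cell averages; it is the analogous indicator for the full fifth order reconstruction whose cost the paper targets. The sketch below shows the standard third order indicators only, not the paper's proposed indicator.

```python
# Classical Jiang-Shu smoothness indicators for the three 3-cell
# substencils used in fifth-order WENO (uniform mesh, cell averages v).
def beta_js(v0, v1, v2, v3, v4):
    b0 = 13/12 * (v0 - 2*v1 + v2) ** 2 + 1/4 * (v0 - 4*v1 + 3*v2) ** 2
    b1 = 13/12 * (v1 - 2*v2 + v3) ** 2 + 1/4 * (v1 - v3) ** 2
    b2 = 13/12 * (v2 - 2*v3 + v4) ** 2 + 1/4 * (3*v2 - 4*v3 + v4) ** 2
    return b0, b1, b2

# Smooth (linear) data: all three indicators agree, so the nonlinear
# weights revert to the optimal linear weights.
smooth = beta_js(1.0, 2.0, 3.0, 4.0, 5.0)

# A jump in the right part of the stencil inflates b1 and b2 only,
# steering the reconstruction toward the smooth left substencil.
jumpy = beta_js(1.0, 1.0, 1.0, 10.0, 10.0)
```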

  6. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    PubMed

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  7. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  8. Elastic properties and optical absorption studies of mixed alkali borogermanate glasses

    NASA Astrophysics Data System (ADS)

    Taqiullah, S. M.; Ahmmad, Shaik Kareem; Samee, M. A.; Rahman, Syed

    2018-05-01

    For the first time, the mixed alkali effect (MAE) has been investigated in the glass system xNa2O-(30-x)Li2O-40B2O3-30GeO2 (0≤x≤30 mol%) through density and optical absorption studies. The glasses were prepared by the melt-quenching technique. The density of the glasses varies non-linearly with composition, exhibiting the mixed alkali effect. Using the density data, the elastic moduli, namely Young's modulus and the bulk and shear moduli, show a strong linear dependence on the compositional parameter. From absorption-edge studies, the optical band gap energies for all transitions were evaluated, and it was established that the electronic transition in this glass system is indirect allowed. The indirect optical band gap exhibits non-linear behavior with the compositional parameter, again showing the mixed alkali effect.

  9. Turbulence-assisted shear exfoliation of graphene using household detergent and a kitchen blender

    NASA Astrophysics Data System (ADS)

    Varrla, Eswaraiah; Paton, Keith R.; Backes, Claudia; Harvey, Andrew; Smith, Ronan J.; McCauley, Joe; Coleman, Jonathan N.

    2014-09-01

    To facilitate progression from the lab to commercial applications, it will be necessary to develop simple, scalable methods to produce high quality graphene. Here we demonstrate the production of large quantities of defect-free graphene using a kitchen blender and household detergent. We have characterised the scaling of both graphene concentration and production rate with the mixing parameters: mixing time, initial graphite concentration, rotor speed and liquid volume. We find the production rate to be invariant with mixing time and to increase strongly with mixing volume, results which are important for scale-up. Even in this simple system, concentrations of up to 1 mg ml-1 and graphene masses of >500 mg can be achieved after a few hours mixing. The maximum production rate was ~0.15 g h-1, much higher than for standard sonication-based exfoliation methods. We demonstrate that graphene production occurs because the mean turbulent shear rate in the blender exceeds the critical shear rate for exfoliation. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr03560g
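    The exfoliation criterion above can be sketched numerically. A minimal estimate, assuming the mean turbulent shear rate scales as sqrt(ε/ν) with dissipation rate ε = P/(ρV), and taking hypothetical blender figures (power, volume) together with the ~10^4 s^-1 critical shear rate reported in this line of work for graphite exfoliation:

```python
import math

def mean_turbulent_shear_rate(power_W, volume_m3, rho=1000.0, nu=1e-6):
    """Mean turbulent shear rate ~ sqrt(epsilon/nu), where
    epsilon = P / (rho * V) is the dissipation rate per unit mass."""
    epsilon = power_W / (rho * volume_m3)
    return math.sqrt(epsilon / nu)

# Hypothetical blender figures: 400 W delivered to 1 L of water-like liquid.
shear = mean_turbulent_shear_rate(400.0, 1e-3)
critical_shear = 1e4  # ~10^4 s^-1 exfoliation threshold reported for graphite
print(shear, shear > critical_shear)
```

Under these assumed figures the mean shear rate (2 x 10^4 s^-1) exceeds the threshold, consistent with the mechanism proposed in the abstract.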

  10. Optimal clinical trial design based on a dichotomous Markov-chain mixed-effect sleep model.

    PubMed

    Steven Ernest, C; Nyberg, Joakim; Karlsson, Mats O; Hooker, Andrew C

    2014-12-01

    D-optimal designs for discrete-type responses have been derived using generalized linear mixed models, simulation-based methods and analytical approximations for computing the Fisher information matrix (FIM) of non-linear mixed effect models with homogeneous probabilities over time. In this work, D-optimal designs using an analytical approximation of the FIM for a dichotomous, non-homogeneous, Markov-chain phase-advanced sleep non-linear mixed effect model were investigated. The non-linear mixed effect model consisted of transition probabilities of dichotomous sleep data estimated as logistic functions using piecewise linear functions. Theoretical linear and nonlinear dose effects were added to the transition probabilities to modify the probability of being in either sleep stage. D-optimal designs were computed by determining an analytical approximation of the FIM for each Markov component (one where the previous state was awake and another where the previous state was asleep). Each Markov component FIM was weighted either equally or by the average probability of the response being awake or asleep over the night, and the components were summed to derive the total FIM (FIM(total)). The reference designs were placebo and 0.1-, 1-, 6-, 10- and 20-mg dosing for a 2- to 6-way crossover study in six dosing groups. Optimized design variables were dose and number of subjects in each dose group. The designs were validated using stochastic simulation/re-estimation (SSE). Contrary to expectations, the predicted parameter uncertainty obtained via FIM(total) was larger than the uncertainty in parameter estimates computed by SSE. Nevertheless, the D-optimal designs decreased the uncertainty of parameter estimates relative to the reference designs. Additionally, the improvement for the D-optimal designs was more pronounced using SSE than predicted via FIM(total). Through the use of an approximate analytic solution and weighting schemes, the FIM(total) for a non-homogeneous, dichotomous Markov-chain phase-advanced sleep model was computed and provided more efficient trial designs and increased nonlinear mixed-effects modeling parameter precision.
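    As an illustration of the D-optimality idea in this record (not the sleep model itself), the following sketch, with assumed parameter values, builds the FIM for a plain logistic dose-response model and searches two-point designs for the maximal determinant:

```python
import numpy as np

def fim_logistic(doses, beta):
    """Sum of per-observation Fisher information matrices for a
    logistic model P(y=1) = expit(b0 + b1*dose): F = sum p(1-p) x x^T."""
    F = np.zeros((2, 2))
    for d in doses:
        x = np.array([1.0, d])
        p = 1.0 / (1.0 + np.exp(-x @ beta))
        F += p * (1 - p) * np.outer(x, x)
    return F

beta = np.array([-2.0, 0.4])           # assumed "true" parameter values
candidates = np.linspace(0.0, 20.0, 41)
# exhaustive search over two-point designs for maximal det(FIM) (D-optimality)
best = max(((a, b) for a in candidates for b in candidates if a < b),
           key=lambda ab: np.linalg.det(fim_logistic(ab, beta)))
print(best)
```

The D-optimal pair clusters around the doses where the linear predictor is near +/-1.5, and it beats naive extreme-dose designs such as (0, 20) on the determinant criterion.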

  11. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
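    The two correlation coefficients and the regression fit described above can be reproduced on synthetic data. A minimal sketch using scipy.stats (assumed data, not the CT-guided data set from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 50)   # roughly linear relationship

r, p_r = stats.pearsonr(x, y)       # linear association
rho, p_rho = stats.spearmanr(x, y)  # monotonic (rank-based) association
fit = stats.linregress(x, y)        # simple linear regression
print(round(r, 3), round(rho, 3), round(fit.slope, 2), round(fit.intercept, 2))
```

For a clean linear relationship both coefficients are close to 1; Spearman's rho is the one to prefer when the relationship is monotonic but not linear.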

  12. Teaching the Concept of Breakdown Point in Simple Linear Regression.

    ERIC Educational Resources Information Center

    Chan, Wai-Sum

    2001-01-01

    Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
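    The influence of extreme points on ordinary least squares is easy to demonstrate numerically: a single corrupted observation can move the fitted slope arbitrarily far, which is what a breakdown point of 1/n means. A minimal sketch:

```python
import numpy as np

x = np.arange(10, dtype=float)
y = 3.0 * x + 2.0                     # perfectly linear data, slope 3
slope_clean = np.polyfit(x, y, 1)[0]

y_bad = y.copy()
y_bad[-1] = 1000.0                    # a single corrupted observation
slope_bad = np.polyfit(x, y_bad, 1)[0]
print(round(slope_clean, 2), round(slope_bad, 2))
```

One outlier among ten points drags the OLS slope from 3 to roughly 56; robust estimators with higher breakdown points (e.g. least median of squares) are designed to resist exactly this.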

  13. Transit-time and age distributions for nonlinear time-dependent compartmental systems.

    PubMed

    Metzler, Holger; Müller, Markus; Sierra, Carlos A

    2018-02-06

    Many processes in nature are modeled using compartmental systems (reservoir/pool/box systems). Usually, they are expressed as a set of first-order differential equations describing the transfer of matter across a network of compartments. The concepts of age of matter in compartments and the time required for particles to transit the system are important diagnostics of these models with applications to a wide range of scientific questions. Until now, explicit formulas for transit-time and age distributions of nonlinear time-dependent compartmental systems were not available. We compute densities for these types of systems under the assumption of well-mixed compartments. Assuming that a solution of the nonlinear system is available at least numerically, we show how to construct a linear time-dependent system with the same solution trajectory. We demonstrate how to exploit this solution to compute transit-time and age distributions in dependence on given start values and initial age distributions. Furthermore, we derive equations for the time evolution of quantiles and moments of the age distributions. Our results generalize available density formulas for the linear time-independent case and mean-age formulas for the linear time-dependent case. As an example, we apply our formulas to a nonlinear and a linear version of a simple global carbon cycle model driven by a time-dependent input signal which represents fossil fuel additions. We derive time-dependent age distributions for all compartments and calculate the time it takes to remove fossil carbon in a business-as-usual scenario.
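    For the linear time-independent special case mentioned above, the mean transit time at steady state reduces to total stock divided by total input flux. A minimal sketch with a hypothetical two-pool serial system (dx/dt = Ax + u, rate constants assumed):

```python
import numpy as np

# Hypothetical linear two-pool serial system: pool 1 receives the external
# input and transfers part of its outflow to pool 2, which loses to outside.
A = np.array([[-0.5, 0.0],
              [0.25, -0.1]])   # 1/yr; 0.25/yr of pool-1 outflow enters pool 2
u = np.array([1.0, 0.0])       # external input flux (mass / yr)

x_star = -np.linalg.solve(A, u)             # steady-state stocks: A x* = -u
mean_transit_time = x_star.sum() / u.sum()  # stock / throughput at steady state
print(np.round(x_star, 3), round(mean_transit_time, 2))
```

Here the stocks are (2, 5) and the mean transit time is 7 years; the nonlinear time-dependent distributions derived in the paper generalize this scalar diagnostic.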

  14. Linear analysis of auto-organization in Hebbian neural networks.

    PubMed

    Carlos Letelier, J; Mpodozis, J

    1995-01-01

    The self-organization of neurotopies where neural connections follow Hebbian dynamics is framed in terms of linear operator theory. A general and exact equation describing the time evolution of the overall synaptic strength connecting two neural laminae is derived. This linear matricial equation, which is similar to the equations used to describe oscillating systems in physics, is modified by the introduction of non-linear terms, in order to capture self-organizing (or auto-organizing) processes. The behavior of a simple and small system, that contains a non-linearity that mimics a metabolic constraint, is analyzed by computer simulations. The emergence of a simple "order" (or degree of organization) in this low-dimensionality model system is discussed.
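    The paper's model is not reproduced here, but Oja's rule is a standard example of a Hebbian update with a built-in normalization constraint (playing the role of the metabolic non-linearity): the weight vector self-organizes toward the leading eigenvector of the input covariance. A hedged sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# zero-mean inputs whose covariance has a dominant direction along (1, 1)
C = np.array([[2.0, 1.5], [1.5, 2.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term minus decay term

# w converges (up to sign) to the unit leading eigenvector of C, (1,1)/sqrt(2)
print(np.round(w / np.linalg.norm(w), 2))
```

The decay term -eta*y^2*w keeps the synaptic norm bounded, so a simple "order" (alignment with the principal input direction) emerges without any external normalization step.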

  15. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    PubMed Central

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
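    The discrepancy described above can be made concrete with a toy contingency table: two samples with identical average site index (so a test based on mean 3'-UTR length sees nothing) but clearly different site-usage distributions (so an independence test fires). Hypothetical counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# read counts over three ordered APA sites (proximal -> distal), hypothetical
normal = np.array([100, 0, 100])   # uses the two extreme sites
cancer = np.array([0, 200, 0])     # uses only the middle site

sites = np.array([0, 1, 2])
mean_normal = (sites * normal).sum() / normal.sum()   # average site index
mean_cancer = (sites * cancer).sum() / cancer.sum()

chi2, p, dof, _ = chi2_contingency(np.vstack([normal, cancer]))
print(mean_normal, mean_cancer, p < 0.05)
```

Both means equal 1.0, so no average-length change exists, yet the independence test rejects decisively: a "complex" switching pattern in the paper's terminology.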

  16. Alfvén wave interactions in the solar wind

    NASA Astrophysics Data System (ADS)

    Webb, G. M.; McKenzie, J. F.; Hu, Q.; le Roux, J. A.; Zank, G. P.

    2012-11-01

    Alfvén wave mixing (interaction) equations used in locally incompressible turbulence transport equations in the solar wind are analyzed from the perspective of linear wave theory. The connection between the wave mixing equations and non-WKB Alfvén-wave-driven wind theories is delineated. We discuss the physical wave energy equation and the canonical wave energy equation for non-WKB Alfvén waves and the WKB limit. Variational principles and conservation laws for the linear wave mixing equations for the Heinemann and Olbert non-WKB wind model are obtained. The connection with wave mixing equations used in locally incompressible turbulence transport in the solar wind is discussed.

  17. Surgery for left ventricular aneurysm: early and late survival after simple linear repair and endoventricular patch plasty.

    PubMed

    Lundblad, Runar; Abdelnoor, Michel; Svennevig, Jan Ludvig

    2004-09-01

    Simple linear resection and endoventricular patch plasty are alternative techniques to repair postinfarction left ventricular aneurysm. The aim of the study was to compare these 2 methods with regard to early mortality and long-term survival. We retrospectively reviewed 159 patients undergoing operations between 1989 and 2003. The epidemiologic design was of an exposed (simple linear repair, n = 74) versus nonexposed (endoventricular patch plasty, n = 85) cohort with 2 endpoints: early mortality and long-term survival. The crude effect of aneurysm repair technique versus endpoint was estimated by odds ratio, rate ratio, or relative risk and their 95% confidence intervals. Stratification analysis by using the Mantel-Haenszel method was done to quantify confounders and pinpoint effect modifiers. Adjustment for multiconfounders was performed by using logistic regression and Cox regression analysis. Survival curves were analyzed with the Breslow test and the log-rank test. Early mortality was 8.2% for all patients, 13.5% after linear repair and 3.5% after endoventricular patch plasty. When adjusted for multiconfounders, the risk of early mortality was significantly higher after simple linear repair than after endoventricular patch plasty (odds ratio, 4.4; 95% confidence interval, 1.1-17.8). Mean follow-up was 5.8 +/- 3.8 years (range, 0-14.0 years). Overall 5-year cumulative survival was 78%, 70.1% after linear repair and 91.4% after endoventricular patch plasty. The risk of total mortality was significantly higher after linear repair than after endoventricular patch plasty when controlled for multiconfounders (relative risk, 4.5; 95% confidence interval, 2.0-9.7). Linear repair dominated early in the series and patch plasty dominated later, giving a possible learning-curve bias in favor of patch plasty that could not be adjusted for in the regression analysis. Postinfarction left ventricular aneurysm can be repaired with satisfactory early and late results. 
Surgical risk was lower and long-term survival was higher after endoventricular patch plasty than simple linear repair. Differences in outcome should be interpreted with care because of the retrospective study design and the chronology of the 2 repair methods.

  18. Phase mixing versus nonlinear advection in drift-kinetic plasma turbulence

    NASA Astrophysics Data System (ADS)

    Schekochihin, A. A.; Parker, J. T.; Highcock, E. G.; Dellar, P. J.; Dorland, W.; Hammett, G. W.

    2016-04-01

    A scaling theory of long-wavelength electrostatic turbulence in a magnetised, weakly collisional plasma (e.g. drift-wave turbulence driven by ion temperature gradients) is proposed, with account taken both of the nonlinear advection of the perturbed particle distribution by fluctuating flows and of its phase mixing, which is caused by the streaming of the particles along the mean magnetic field and, in a linear problem, would lead to Landau damping. It is found that it is possible to construct a consistent theory in which very little free energy leaks into high velocity moments of the distribution function, rendering the turbulent cascade in the energetically relevant part of the wavenumber space essentially fluid-like. The velocity-space spectra of free energy expressed in terms of Hermite-moment orders are steep power laws and so the free-energy content of the phase space does not diverge at infinitesimal collisionality (while it does for a linear problem); collisional heating due to long-wavelength perturbations vanishes in this limit (also in contrast with the linear problem, in which it occurs at the finite rate equal to the Landau damping rate). The ability of the free energy to stay in the low velocity moments of the distribution function is facilitated by the 'anti-phase-mixing' effect, whose presence in the nonlinear system is due to the stochastic version of the plasma echo (the advecting velocity couples the phase-mixing and anti-phase-mixing perturbations). The partitioning of the wavenumber space between the (energetically dominant) region where this is the case and the region where linear phase mixing wins its competition with nonlinear advection is governed by the 'critical balance' between linear and nonlinear time scales (which for high Hermite moments splits into two thresholds, one demarcating the wavenumber region where phase mixing predominates, the other where plasma echo does).

  19. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  20. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    USDA-ARS?s Scientific Manuscript database

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  1. A Re-appraisal of Olivine Sorting and Accumulation in Hawaiian Magmas.

    NASA Astrophysics Data System (ADS)

    Rhodes, J. M.

    2002-12-01

    Bowen never used the m-words (magma mixing) in his highly influential book "The Origin of the Igneous Rocks". Yet, in the past 20-30 years, magma mixing has been proposed as an important, almost ubiquitous, process at volcanoes in all tectonic environments ranging from oceanic basalts to large silicic magma bodies, and as the possible trigger of eruptions. Bowen regarded Hawaiian olivine basalts and picrites as the result of olivine accumulation in a lower MgO magma that was crystallizing and fractionating olivine. This, with variants, has been the party line ever since, the only debate being over the MgO content of the proposed parental magmas. Although magma mixing has been recognized as an important process in differentiated, low-MgO (below 7 percent), Hawaiian magmas, the wide range in MgO (7-30 percent) in Hawaiian olivine tholeiites and picrites is invariably attributed to olivine crystallization, fractionation and accumulation. In this paper I will re-evaluate this hypothesis using well-documented examples from Kilauea, Mauna Kea and Mauna Loa that exhibit well-defined, coherent linear trends of major oxides and trace elements with MgO . If olivine control is the only factor responsible for these trends, then the intersection of the regression lines for each trend should intersect olivine compositions at a common forsterite composition, corresponding to the average accumulated olivine in each of the magmas. In some cases (the ongoing Puu Oo eruption) this simple test holds and olivine fractionation and accumulation can clearly be shown to be the dominant process. In other examples from Mauna Kea and Mauna Loa (1852, 1868, 1950 eruptions, and Mauna Loa in general) the test does not hold, and a more complicated process is required. Additionally, for those magmas that fail the test, CaO/Al2O3 invariably decreases with decreasing MgO content. This should not happen if only olivine fractionation and accumulation are involved. 
    The explanation for these linear trends that approach, but fail to intersect, appropriate olivine compositions is a combination of magma mixing accompanied by olivine crystallization and accumulation. One of the mixing components is a high-MgO (about 13-15 percent) magma laden with olivine phenocrysts and xenocrysts; the other is a consanguineous low-MgO (about 7 percent) quasi "steady-state" magma with a prior history of clinopyroxene and plagioclase fractionation.

  2. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
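    A minimal sketch of the parametric-LP idea, with hypothetical project NPVs and costs: solve the LP relaxation of the capital-budgeting problem repeatedly while sweeping the budget (the parameter), and watch which projects enter the optimal solution:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical forestry projects: net present values and capital costs
npv  = np.array([40.0, 60.0, 25.0, 80.0])
cost = np.array([10.0, 25.0, 5.0, 50.0])

# Maximize total NPV subject to a budget; linprog minimizes, so negate npv.
# Fractional adoption (0..1) gives the LP relaxation of the knapsack problem.
for budget in (10, 30, 60, 90):
    res = linprog(-npv, A_ub=cost[None, :], b_ub=[budget],
                  bounds=[(0, 1)] * 4, method="highs")
    print(budget, np.round(res.x, 2), round(-res.fun, 1))
```

As the budget grows, projects enter in order of their NPV-to-cost ratio, which is exactly the ranking the parametric sweep produces.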

  3. Weak limits of powers, simple spectrum of symmetric products, and rank-one mixing constructions

    NASA Astrophysics Data System (ADS)

    Ryzhikov, V. V.

    2007-06-01

    A class of automorphisms of the Lebesgue space such that their symmetric powers have simple spectrum is considered. In the framework of rank-one constructions, mixing automorphisms with this property are constructed. The paper also contains results on weak limits, the local rank, and the spectral multiplicity of powers of automorphisms. Spectral properties of the stochastic Chacon automorphism are discussed. Bibliography: 23 titles.

  4. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    PubMed Central

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830

  5. Crystal chemistry and thermal behavior of La doped (U, Th)O2

    NASA Astrophysics Data System (ADS)

    Keskar, Meera; Shelke, Geeta P.; Shafeeq, Muhammed; Krishnan, K.; Sali, S. K.; Kannan, S.

    2017-12-01

    X-ray diffraction, chemical and thermal studies of [(U0.2Th0.8)1-yLay]O2+x (LUTL) and [(U0.3Th0.7)1-yLay]O2+x (UTL) compounds (where y ≤ 0.4) were carried out. These compounds were synthesized by the gel combustion method followed by heating in reduced atmospheres at 1673 K. To correlate lattice parameters with metal and oxygen concentrations, the reduced oxides were heated in Ar, CO2 and air atmospheres. Retention of the FCC phase was confirmed in all mixed oxides with y ≤ 0.4. The cubic lattice parameters could be expressed as linear equations in x and y: a (Å) = 5.5709 - 0.187 x + 0.032 y [x < 0 and 0 ≤ y ≤ 0.40] for LUTL and a (Å) = 5.5580 - 0.26 x + 0.015 y [x < 0 and 0 ≤ y ≤ 0.36] for UTL. Oxidation studies and simple ionic model calculations suggested that uranium is predominantly present as a mixture of +5 and +6 states when the La/U ratio is ∼2. Oxidation kinetics of the mixed oxides was studied by a non-isothermal method using thermogravimetry and was found to be a diffusion-controlled reaction. High temperature X-ray diffraction studies of the LUTL and UTL mixed oxides showed positive thermal expansion in the temperature range 298-1273 K, and the % expansion increases with La concentration.
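    The fitted lattice-parameter relation for LUTL can be evaluated directly; for example, with hypothetical composition values inside the stated validity range:

```python
def lattice_parameter_lutl(x, y):
    """Cubic lattice parameter (Angstrom) for [(U0.2Th0.8)1-yLay]O2+x,
    from the fitted relation a = 5.5709 - 0.187x + 0.032y
    (stated as valid for x < 0 and 0 <= y <= 0.40)."""
    return 5.5709 - 0.187 * x + 0.032 * y

# e.g. a hypostoichiometric composition with x = -0.10 and y = 0.20
print(round(lattice_parameter_lutl(-0.10, 0.20), 4))
```

The negative coefficient on x means the lattice expands with hypostoichiometry (x < 0), while La substitution (y) expands it only slightly.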

  6. Simultaneous injection effective mixing flow analysis of urinary albumin using dye-binding reaction.

    PubMed

    Ratanawimarnwong, Nuanlaor; Ponhong, Kraingkrai; Teshima, Norio; Nacapricha, Duangjai; Grudpan, Kate; Sakai, Tadao; Motomizu, Shoji

    2012-07-15

    A new four-channel simultaneous injection effective mixing flow analysis (SIEMA) system has been assembled for the determination of urinary albumin. The SIEMA system consisted of a syringe pump, two 5-way cross connectors, four holding coils, five 3-way solenoid valves, a 50-cm long mixing coil and a spectrophotometer. Tetrabromophenol blue anion (TBPB) in Triton X-100 micelles reacted with albumin at pH 3.2 to form a blue ion complex with a λmax of 625 nm. TBPB, Triton X-100, acetate buffer and albumin standard solutions were aspirated into four individual holding coils by a syringe pump, and the aspirated zones were then simultaneously pushed in the reverse direction to the detector flow cell. Baseline drift, due to adsorption of the TBPB-albumin complex on the wall of the hydrophobic PTFE tubing, was minimized by aspiration of Triton X-100 and acetate buffer solutions between samples. The calibration graph was linear in the range of 10-50 μg/mL and the detection limit for albumin (3σ) was 0.53 μg/mL. The RSD (n=11) at 30 μg/mL was 1.35%. The sample throughput was 37/h. With a 10-fold dilution, interference from the urine matrix was removed. The proposed method has advantages in terms of simple automated operation and short analysis time. Copyright © 2012 Elsevier B.V. All rights reserved.
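    The calibration and 3σ detection-limit arithmetic used above follows the standard pattern: fit the linear calibration, then divide three blank standard deviations by the slope. A sketch with hypothetical calibration readings (not the paper's data):

```python
import numpy as np
from scipy import stats

# Hypothetical calibration: absorbance vs albumin concentration (ug/mL)
conc = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
absorbance = np.array([0.12, 0.23, 0.35, 0.46, 0.58])

fit = stats.linregress(conc, absorbance)
# 3-sigma detection limit: 3 * (sd of blank signal) / calibration slope
sigma_blank = 0.002   # assumed standard deviation of blank measurements
lod = 3 * sigma_blank / fit.slope
print(round(fit.rvalue**2, 4), round(lod, 2))
```

With these assumed numbers the calibration is highly linear (r² > 0.999) and the detection limit comes out near 0.5 μg/mL, the same order as reported in the abstract.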

  7. Thermodynamic properties of model CdTe/CdSe mixtures

    DOE PAGES

    van Swol, Frank; Zhou, Xiaowang W.; Challa, Sivakumar R.; ...

    2015-02-20

    We report on the thermodynamic properties of binary compound mixtures of model group II-VI semiconductors. We use the recently introduced Stillinger-Weber Hamiltonian to model binary mixtures of CdTe and CdSe, and molecular dynamics simulations to calculate the volume and enthalpy of mixing as a function of mole fraction. The lattice parameter of the mixture closely follows Vegard's law: a linear relation. This implies that the excess volume is a cubic function of mole fraction. A connection is made with hard sphere models of mixed fcc and zincblende structures. We found that the potential energy exhibits a positive deviation from ideal solution behaviour; the excess enthalpy is nearly independent of the temperatures studied (300 and 533 K) and is well described by a simple cubic function of the mole fraction. Using a regular solution approach (combining non-ideal behaviour for the enthalpy with ideal solution behaviour for the entropy of mixing), we arrive at the Gibbs free energy of the mixture. The Gibbs free energy results indicate that the CdTe and CdSe mixtures exhibit phase separation, with an upper consolute temperature of 335 K. Finally, we provide the surface energy as a function of composition; it roughly follows ideal solution theory, but with a negative deviation (negative excess surface energy). This indicates that alloying increases the stability, even for nano-particles.
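    Vegard's law, as invoked above, is plain linear interpolation of the lattice parameter between the end members. A sketch using approximate zincblende lattice constants for CdTe and CdSe (assumed round-number values, not the paper's simulation results):

```python
import numpy as np

# Approximate zincblende lattice constants (Angstrom); assumed values
a_CdTe, a_CdSe = 6.48, 6.05

def vegard(x_CdSe):
    """Vegard's law: linear interpolation of the lattice parameter
    between the pure end members."""
    return (1 - x_CdSe) * a_CdTe + x_CdSe * a_CdSe

x = np.linspace(0.0, 1.0, 5)
print(np.round(vegard(x), 3))
```

Because the lattice parameter is linear in composition, the cell volume a³ (and hence the excess volume relative to a linear volume mix) picks up cubic terms in mole fraction, which is the point the abstract makes.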

  8. Mixed hemimicelles solid-phase extraction based on sodium dodecyl sulfate (SDS)-coated nano-magnets for the spectrophotometric determination of Fingolimod in biological fluids

    NASA Astrophysics Data System (ADS)

    Azari, Zhila; Pourbasheer, Eslam; Beheshti, Abolghasem

    2016-01-01

    In this study, mixed hemimicelles solid-phase extraction (SPE) based on sodium dodecyl sulfate (SDS)-coated Fe3O4 nano-magnets was investigated as a novel method for the separation and determination of Fingolimod (FLM) in water, urine and plasma samples prior to spectrophotometric determination. Due to the high surface area of these new sorbents and their excellent adsorption capacity after surface modification by SDS, satisfactory extraction recoveries can be obtained. The main factors affecting the adsolubilization of analytes, such as pH, surfactant and adsorbent amounts, ionic strength, extraction time and desorption conditions, were studied and optimized. Under the selected conditions, FLM was quantitatively extracted. The accuracy of the method was evaluated by recovery measurements on spiked samples, and good recoveries of 96%, 95% and 88% were observed for water, urine and plasma, respectively. Linear behavior over the investigated concentration ranges of 2-26, 2-17 and 2-13 mg/L, with good coefficients of determination (0.998, 0.997 and 0.995), was achieved for water, urine and plasma samples, respectively. To the best of our knowledge, this is the first time that a mixed hemimicelles SPE method based on magnetic separation and nanoparticles has been used as a simple and sensitive method for monitoring FLM in water and biological samples.

  9. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    PubMed

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages-SAS GLIMMIX Laplace and SuperMix Gaussian quadrature-perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.
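    Of the three estimation methods named above, Gauss-Hermite quadrature is the most direct to sketch: the marginal likelihood of one cluster integrates the random intercept against its normal density. A minimal sketch with hypothetical responses and fixed-effect linear predictors (a single random intercept; the multi-random-effect case the article studies needs multidimensional quadrature):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_marginal_loglik(y, eta_fixed, sigma, n_nodes=20):
    """Marginal log-likelihood of one cluster's binary responses under a
    random-intercept logistic model, b ~ N(0, sigma^2), using Gauss-Hermite
    quadrature: E[f(b)] ~ sum_i (w_i/sqrt(pi)) * f(sqrt(2)*sigma*t_i)."""
    t, w = hermgauss(n_nodes)
    total = 0.0
    for ti, wi in zip(t, w):
        b = np.sqrt(2.0) * sigma * ti
        p = 1.0 / (1.0 + np.exp(-(eta_fixed + b)))
        lik = np.prod(np.where(y == 1, p, 1 - p))
        total += wi / np.sqrt(np.pi) * lik
    return np.log(total)

# One hypothetical cluster: 4 binary responses with fixed-effect predictors
y = np.array([1, 1, 0, 1])
eta = np.array([0.2, 0.5, -0.3, 0.1])
print(round(cluster_marginal_loglik(y, eta, sigma=1.0), 4))
```

A useful sanity check is the sigma -> 0 limit, where the quadrature must reduce to the ordinary logistic likelihood with no random effect.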

  10. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    PubMed Central

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  11. Is missing geographic positioning system data in accelerometry studies a problem, and is imputation the solution?

    PubMed Central

    Meseck, Kristin; Jankowska, Marta M.; Schipperijn, Jasper; Natarajan, Loki; Godbole, Suneeta; Carlson, Jordan; Takemoto, Michelle; Crist, Katie; Kerr, Jacqueline

    2016-01-01

    The main purpose of the present study was to assess the impact of global positioning system (GPS) signal lapse on physical activity analyses, to discover any existing associations between missing GPS data and environmental and demographic attributes, and to determine whether imputation is an accurate and viable method for correcting GPS data loss. Accelerometer and GPS data of 782 participants from 8 studies were pooled to represent a range of lifestyles and interactions with the built environment. Periods of GPS signal lapse were identified and extracted. Generalised linear mixed models were run with the number of lapses and the length of lapses as outcomes. The signal lapses were imputed using a simple ruleset, and imputation was validated against person-worn camera imagery. A final generalised linear mixed model was used to identify the difference between the amount of GPS minutes pre- and post-imputation for the activity categories of sedentary, light, and moderate-to-vigorous physical activity. GPS data lapses made up over 17% of the dataset. No strong associations were found between increasing lapse length and number of lapses and the demographic and built environment variables. A significant difference was found between the pre- and post-imputation minutes for each activity category. No demographic or environmental bias was found for length or number of lapses, but imputation of GPS data may make a significant difference for inclusion of physical activity data that occurred during a lapse. Imputing GPS data lapses is a viable technique for returning spatial context to accelerometer data and improving the completeness of the dataset. PMID:27245796
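The abstract does not reproduce the paper's imputation ruleset; the function below is a minimal, hypothetical version of the general idea of filling short GPS lapses by carrying the last valid fix forward. The `max_gap` threshold and the carry-forward rule are assumptions for illustration only.

```python
def impute_gps(track, max_gap=10):
    """Fill GPS signal lapses of at most max_gap epochs with the last valid fix.

    track: list with one entry per epoch, either a (lat, lon) tuple or None.
    Longer lapses, and lapses before the first fix, are left as None.
    """
    out = list(track)
    last = None
    i = 0
    while i < len(track):
        if track[i] is not None:
            last = track[i]
            i += 1
            continue
        j = i
        while j < len(track) and track[j] is None:   # measure the whole lapse
            j += 1
        if last is not None and j - i <= max_gap:
            for k in range(i, j):
                out[k] = last                        # carry the last fix forward
        i = j
    return out
```

A richer ruleset could interpolate between the fixes bracketing a lapse instead of repeating the last one; the validation against camera imagery described above is what justifies whichever rule is chosen.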

  12. A comparison of bilingual education and generalist teachers' approaches to scientific biliteracy

    NASA Astrophysics Data System (ADS)

    Garza, Esther

    The purpose of this study was to determine if educators were capitalizing on bilingual learners' use of their biliterate abilities to acquire scientific meaning and discourse that would formulate a scientific biliterate identity. Mixed methods were used to explore teachers' use of biliteracy and Funds of Knowledge (Moll, L., Amanti, C., Neff, D., & Gonzalez, N., 1992; Gonzales, Moll, & Amanti, 2005) from the students' Latino heritage while conducting science inquiry. The research study explored four constructs that conceptualized scientific biliteracy. The four constructs include science literacy, science biliteracy, reading comprehension strategies and students' cultural backgrounds. A total of 156 4th-5th grade bilingual and general education teachers in South Texas were surveyed using the Teacher Scientific Biliteracy Inventory (TSBI), and five teachers' science lessons were observed. Qualitative findings revealed that a variety of scientific biliteracy instructional strategies were frequently used in both bilingual and general education classrooms. The language used to deliver this instruction varied. A General Linear Model revealed that classroom assignment, bilingual or general education, had a significant effect on a teacher's instructional approach to employ scientific biliteracy. A simple linear regression found that the TSBI accounted for 17% of the variance on 4th grade reading benchmarks. Mixed methods results indicated that teachers were utilizing scientific biliteracy strategies in English, Spanish and/or both languages. Household items and science experimentation at home were encouraged by teachers to incorporate the students' cultural backgrounds. Finally, science inquiry was conducted through a universal approach to science learning versus a multicultural approach to science learning.

  13. Role of diversity in ICA and IVA: theory and applications

    NASA Astrophysics Data System (ADS)

    Adalı, Tülay

    2016-05-01

    Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly-mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity—statistical property—and can be posed to simultaneously account for multiple types of diversity such as higher-order-statistics, sample dependence, non-circularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), generalizes ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask the question whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the "Unsupervised Learning and ICA Pioneer Award" at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
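The role of whitening in the ICA pipeline sketched above can be shown in a few lines: after whitening, the mixing that remains to be undone is only an orthogonal rotation, which is then resolved using nongaussianity. The sources, mixing matrix, and sample size below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Instantaneous linear mixing model x = A s with nongaussian (uniform) sources.
s = rng.uniform(-1.0, 1.0, size=(2, 5000))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])        # unknown full-column-rank mixing matrix
x = A @ s

# Whitening: z = V x has identity covariance, so only an orthogonal
# rotation separates the whitened data from the (scaled) sources.
d, E = np.linalg.eigh(np.cov(x))
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x
```

The scaling and permutation ambiguity the abstract mentions is visible here: any rotation of `z` that maximizes nongaussianity recovers the sources only up to order and scale.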

  14. A Two-Step Approach for Analysis of Nonignorable Missing Outcomes in Longitudinal Regression: an Application to Upstate KIDS Study.

    PubMed

    Liu, Danping; Yeung, Edwina H; McLain, Alexander C; Xie, Yunlong; Buck Louis, Germaine M; Sundaram, Rajeshwari

    2017-09-01

    Imperfect follow-up in longitudinal studies commonly leads to missing outcome data that can potentially bias the inference when the missingness is nonignorable; that is, the propensity of missingness depends on missing values in the data. In the Upstate KIDS Study, we seek to determine if the missingness of child development outcomes is nonignorable, and how a simple model assuming ignorable missingness would compare with more complicated models for a nonignorable mechanism. To correct for nonignorable missingness, the shared random effects model (SREM) jointly models the outcome and the missing mechanism. However, the computational complexity and lack of software packages have limited its practical applications. This paper proposes a novel two-step approach to handle nonignorable missing outcomes in generalized linear mixed models. We first analyse the missing mechanism with a generalized linear mixed model and predict values of the random effects; then, the outcome model is fitted adjusting for the predicted random effects to account for heterogeneity in the missingness propensity. Extensive simulation studies suggest that the proposed method is a reliable approximation to SREM, with a much faster computation. The nonignorability of missing data in the Upstate KIDS Study is estimated to be mild to moderate, and the analyses using the two-step approach or SREM are similar to the model assuming ignorable missingness. The two-step approach is a computationally straightforward method that can be conducted as sensitivity analyses in longitudinal studies to examine violations of the ignorable missingness assumption and the implications relative to health outcomes. © 2017 John Wiley & Sons Ltd.

  15. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    PubMed

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding in this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
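The "exploding" step described above can be sketched generically: each subject contributes one Poisson pseudo-observation per baseline-hazard piece in which it is at risk, with the log of the time at risk in that piece as an offset. This is a plain piecewise-exponential data layout, not the %PCFrailty macro itself.

```python
import math

def explode(time, event, cuts):
    """Expand one survival record into piecewise-exponential Poisson rows.

    time:  observed follow-up time
    event: 1 if the event occurred at `time`, 0 if censored
    cuts:  sorted interior cut points defining the hazard pieces

    Returns one row per piece the subject is at risk in, with the event
    indicator y and the log-exposure offset used in the Poisson GLMM.
    """
    rows = []
    lower = [0.0] + cuts
    upper = cuts + [float("inf")]
    for k, (lo, hi) in enumerate(zip(lower, upper)):
        if time <= lo:
            break                                 # no longer at risk
        exposure = min(time, hi) - lo             # time at risk in this piece
        y = 1 if (event and time <= hi) else 0    # event falls in this piece
        rows.append({"piece": k, "y": y, "offset": math.log(exposure)})
    return rows
```

Fitting then amounts to a Poisson regression of `y` on piece indicators (plus covariates and a log-normal random intercept for the cluster) with `offset` included, which any GLMM routine can handle.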

  16. Fast-SNP: a fast matrix pre-processing algorithm for efficient loopless flux optimization of metabolic models

    PubMed Central

    Saa, Pedro A.; Nielsen, Lars K.

    2016-01-01

    Motivation: Computation of steady-state flux solutions in large metabolic models is routinely performed using flux balance analysis based on a simple LP (Linear Programming) formulation. A minimal requirement for thermodynamic feasibility of the flux solution is the absence of internal loops, which are enforced using ‘loopless constraints’. The resulting loopless flux problem is a substantially harder MILP (Mixed Integer Linear Programming) problem, which is computationally expensive for large metabolic models. Results: We developed a pre-processing algorithm that significantly reduces the size of the original loopless problem into an easier and equivalent MILP problem. The pre-processing step employs a fast matrix sparsification algorithm—sparse null-space pursuit (SNP)—inspired by recent results on SNP. By finding a reduced feasible ‘loop-law’ matrix subject to known directionalities, Fast-SNP considerably improves the computational efficiency in several metabolic models running different loopless optimization problems. Furthermore, analysis of the topology encoded in the reduced loop matrix enabled identification of key directional constraints for the potential permanent elimination of infeasible loops in the underlying model. Overall, Fast-SNP is an effective and simple algorithm for efficient formulation of loop-law constraints, making loopless flux optimization feasible and numerically tractable at large scale. Availability and Implementation: Source code for MATLAB including examples is freely available for download at http://www.aibn.uq.edu.au/cssb-resources under Software. Optimization uses Gurobi, CPLEX or GLPK (the latter is included with the algorithm). Contact: lars.nielsen@uq.edu.au Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27559155
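The plain (not loopless) flux balance problem mentioned above is an ordinary LP, which a toy three-reaction network makes concrete. The network, objective, and bounds below are invented for illustration; the loopless MILP constraints that Fast-SNP pre-processes are not included.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake ->A (v0), conversion A->B (v1), secretion B-> (v2).
# Rows of the stoichiometric matrix S are metabolites A and B; the
# steady-state condition S v = 0 is the flux balance constraint.
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

# linprog minimizes c @ v, so maximizing the secretion flux v2 uses c2 = -1.
c = np.array([0.0, 0.0, -1.0])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=[(0.0, 10.0)] * 3)
```

At the optimum all three fluxes hit the upper bound of 10, since mass balance forces them to be equal. The loopless variant adds binary direction variables and loop-law constraints per internal loop, which is what turns this LP into the hard MILP the abstract discusses.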

  17. An Exploratory Study of the Possible Impact of Cerebral Hemisphericity on the Performance of Select Linear, Non-Linear, and Spatial Computer Tasks.

    ERIC Educational Resources Information Center

    McCluskey, James J.

    1997-01-01

    A study of 160 undergraduate journalism students trained to design projects (stacks) using HyperCard on Macintosh computers determined that right-brain dominant subjects outperformed left-brain and mixed-brain dominant subjects, whereas left-brain dominant subjects outperformed mixed-brain dominant subjects in several areas. Recommends future…

  18. Organic geochemistry of sediments from the continental margin off southern New England, U.S.A.--Part I. Amino acids, carbohydrates and lignin

    NASA Technical Reports Server (NTRS)

    Steinberg, S. M.; Venkatesan, M. I.; Kaplan, I. R.

    1987-01-01

    Total organic carbon (TOC), lignin, amino acids, sugars and amino sugars were measured in recent sediments from the continental margin off southern New England. The various organic carbon fractions decreased in concentration with increasing distance from shore. The fraction of the TOC that was accounted for by these major components also decreased with increasing distance from shore. The concentration of lignin indicated that only about 3-5% of the organic carbon in the nearshore sediment was of terrestrial origin. The various fractions were highly correlated, which was consistent with a simple linear mixing model of shelf organic matter with material from the slope and rise and indicated a significant transport of sediment from the continental shelf to the continental slope and rise.
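A two-end-member version of the simple linear mixing model invoked above reduces to one line of algebra: if a conservative tracer has known concentrations in the terrestrial and marine end members, the terrestrial fraction follows from mass balance. The numbers in the test are illustrative, not the paper's measurements.

```python
def mixing_fraction(c_sample, c_terrestrial, c_marine):
    """Terrestrial fraction f in a two-end-member linear mixing model.

    Mass balance: c_sample = f * c_terrestrial + (1 - f) * c_marine,
    solved for f. Assumes the tracer behaves conservatively.
    """
    return (c_sample - c_marine) / (c_terrestrial - c_marine)
```

With a lignin-like tracer that is absent from the marine end member, a sample carrying 4% of the terrestrial concentration gives f = 0.04, the same order as the 3-5% terrestrial contribution reported above.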

  19. Linear mixed-effects modeling approach to FMRI group analysis

    PubMed Central

    Chen, Gang; Saad, Ziad S.; Britton, Jennifer C.; Pine, Daniel S.; Cox, Robert W.

    2013-01-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance–covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance–covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
    The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. PMID:23376789
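The ICC mentioned above has a simple method-of-moments analogue in a balanced one-way layout, which the sketch below estimates from simulated clustered data. The simulation values are arbitrary; the LME-based estimator in the paper additionally handles crossed random effects and confounding fixed effects, which this sketch does not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated clustered data: subject effect variance 1, residual variance 3,
# so the true ICC is 1 / (1 + 3) = 0.25.
n_subj, n_rep = 100, 8
y = (rng.normal(0.0, 1.0, (n_subj, 1))            # between-subject effects
     + rng.normal(0.0, np.sqrt(3.0), (n_subj, n_rep)))  # within-subject noise

# One-way random-effects ANOVA estimator of ICC = s_b^2 / (s_b^2 + s_w^2).
msb = n_rep * y.mean(axis=1).var(ddof=1)          # between-subject mean square
msw = y.var(axis=1, ddof=1).mean()                # within-subject mean square
icc = (msb - msw) / (msb + (n_rep - 1) * msw)
```

Since E[MSB] = n_rep * s_b^2 + s_w^2 and E[MSW] = s_w^2, the ratio above is consistent for the ICC; an LME fit recovers the same quantity from estimated variance components.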

  20. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. 
We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress drop.
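The mixed linear/nonlinear structure exploited above can be illustrated with a one-dimensional toy problem: for each candidate value of the nonlinear parameter, the linear parameter has a closed-form least-squares solution, so the Monte Carlo or grid search only has to cover the nonlinear part. The exponential model, parameter values, and grid below are illustrative stand-ins for fault-geometry sampling, not the authors' inversion.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "mixed" inverse problem: y = m * exp(-theta * t) + noise, where m
# enters linearly (like slip) and theta nonlinearly (like fault geometry).
t = np.linspace(0.0, 5.0, 50)
m_true, theta_true = 2.0, 0.7
y = m_true * np.exp(-theta_true * t) + rng.normal(0.0, 0.05, t.size)

best = None
for theta in np.linspace(0.1, 2.0, 200):   # search over the nonlinear parameter
    g = np.exp(-theta * t)
    m_hat = (g @ y) / (g @ g)              # analytic least squares for the linear part
    rss = np.sum((y - m_hat * g) ** 2)
    if best is None or rss < best[0]:
        best = (rss, theta, m_hat)
rss, theta_hat, m_hat = best
```

Replacing the grid with Metropolis sampling of theta, and the point estimate of m with its conditional posterior, gives the combined analytic/Monte Carlo scheme the abstract describes, including full uncertainty in both parameter types.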

  1. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; OBrien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations will be necessary for a study of many load combinations. The results were compared with mixed mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving for the three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the expression derived. This superposition technique was only applicable if the structure exhibits a linear load/deflection behavior. Consequently, a second technique was derived which was applicable in the case of nonlinear load/deformation behavior. The technique involved calculating six unknown parameters from a set of six simultaneous linear equations with data from six nonlinear analyses to determine the energy release rates. This procedure was not time efficient, and hence, less appealing. A third procedure was developed to calculate mixed mode energy release rates as a function of delamination lengths. This procedure required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was extended in three separate linear models of the local area in the vicinity of the delamination subjected to unit loads to obtain the distribution of G with delamination lengths. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.

  2. Linear mixed-effects modeling approach to FMRI group analysis.

    PubMed

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even be unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or the mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. 
    The simulations of one prototypical scenario indicate that the LME modeling keeps a balance between the control for false positives and the sensitivity for activation detection. The importance of hypothesis formulation is also illustrated in the simulations. Comparisons with alternative group analysis approaches and the limitations of LME are discussed in detail. Published by Elsevier Inc.

  3. The maximum specific hydrogen-producing activity of anaerobic mixed cultures: definition and determination

    PubMed Central

    Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing

    2014-01-01

    Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. Methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, i.e., the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and consequently a reliable and simple method, named SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the SHAm value determination of anaerobic mixed cultures were evaluated. Additionally, this SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay was a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Thus, application of this approach is beneficial to establishing a stable anaerobic hydrogen-producing system. PMID:24912488
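The abstract does not give the SHAm formula. As a hedged reading, a maximum specific activity of this kind is the steepest slope of the cumulative hydrogen curve normalized by biomass, which a short function can compute; units, replication, and any smoothing the actual protocol prescribes are omitted here.

```python
def sham(times, cumulative_h2, biomass):
    """Illustrative maximum specific hydrogen-producing activity.

    times:         sampling times (e.g. hours)
    cumulative_h2: cumulative hydrogen produced at each time (e.g. mL)
    biomass:       inoculum biomass used for normalization (e.g. g VSS)

    Returns the maximum finite-difference production rate per unit biomass.
    """
    rates = [
        (cumulative_h2[i + 1] - cumulative_h2[i]) / (times[i + 1] - times[i])
        for i in range(len(times) - 1)
    ]
    return max(rates) / biomass
```

In practice the maximum-rate segment would be taken from the exponential phase of a batch test, with noisy increments smoothed (for example by fitting a Gompertz-type curve) before the slope is read off.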

  4. The maximum specific hydrogen-producing activity of anaerobic mixed cultures: definition and determination

    NASA Astrophysics Data System (ADS)

    Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing

    2014-06-01

    Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. Methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, i.e., the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and consequently a reliable and simple method, named SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the SHAm value determination of anaerobic mixed cultures were evaluated. Additionally, this SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay was a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Thus, application of this approach is beneficial to establishing a stable anaerobic hydrogen-producing system.

  5. CONVERTING ISOTOPE RATIOS TO DIET COMPOSITION - THE USE OF MIXING MODELS

    EPA Science Inventory

    Investigations of wildlife foraging ecology with stable isotope analysis are increasing. Converting isotope values to proportions of different foods in a consumer's diet requires the use of mixing models. Simple mixing models based on mass balance equations have been used for d...

  6. Effect of Stability on Mixing in Open Canopies. Chapter 4

    NASA Technical Reports Server (NTRS)

    Lee, Young-Hee; Mahrt, L.

    2005-01-01

    In open canopies, the within-canopy flux from the ground surface and understory can account for a significant fraction of the total flux above the canopy. This study incorporates the important influence of within-canopy stability on turbulent mixing and subcanopy fluxes into a first-order closure scheme. Toward this goal, we analyze within-canopy eddy-correlation data from the old aspen site in the Boreal Ecosystem - Atmosphere Study (BOREAS) and a mature ponderosa pine site in Central Oregon, USA. A formulation of within-canopy transport is framed in terms of a stability-dependent mixing length, which approaches Monin-Obukhov similarity theory above the canopy roughness sublayer. The new simple formulation is an improvement upon the usual neglect of the influence of within-canopy stability in simple models. However, frequent well-defined cold air drainage within the pine subcanopy inversion reduces the utility of simple models for nocturnal transport. Other shortcomings of the formulation are discussed.

  7. Action Centered Contextual Bandits.

    PubMed

    Greenewald, Kristjan; Tewari, Ambuj; Klasnja, Predrag; Murphy, Susan

    2017-12-01

    Contextual bandits have become popular as they offer a middle ground between very simple approaches based on multi-armed bandits and very complex approaches using the full power of reinforcement learning. They have demonstrated success in web applications and have a rich body of associated theoretical guarantees. Linear models are well understood theoretically and preferred by practitioners because they are not only easily interpretable but also simple to implement and debug. Furthermore, if the linear model is true, we get very strong performance guarantees. Unfortunately, in emerging applications in mobile health, the time-invariant linear model assumption is untenable. We provide an extension of the linear model for contextual bandits that has two parts: baseline reward and treatment effect. We allow the former to be complex but keep the latter simple. We argue that this model is plausible for mobile health applications. At the same time, it leads to algorithms with strong performance guarantees as in the linear model setting, while still allowing for complex nonlinear baseline modeling. Our theory is supported by experiments on data gathered in a recently concluded mobile health study.
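The baseline-plus-treatment-effect decomposition described above can be sketched with a randomized-action simulation: because the binary action is centered by its known randomization probability, the complex baseline reward drops out of the regression and only the simple linear treatment effect has to be estimated. All data-generating choices below are assumptions for illustration; this is not the paper's bandit algorithm, only the estimation idea behind it.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: reward = complex baseline(context) + action * theta^T context.
n, d = 5000, 3
X = rng.normal(size=(n, d))
theta = np.array([0.5, -0.3, 0.2])             # simple linear treatment effect
baseline = np.sin(X[:, 0]) + X[:, 1] ** 2      # complex baseline, never modeled
p = 0.5                                        # known randomization probability
a = rng.binomial(1, p, size=n)
r = baseline + a * (X @ theta) + rng.normal(0.0, 0.1, size=n)

# Action centering: since E[a - p | x] = 0 and a is independent of x,
# regressing r on (a - p) * x recovers theta without fitting the baseline.
Z = (a - p)[:, None] * X
theta_hat, *_ = np.linalg.lstsq(Z, r, rcond=None)
```

The same centering trick is what lets the full algorithm keep strong regret guarantees for the treatment effect while the baseline stays arbitrarily nonlinear and time-varying.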

  8. The impact of case mix on timely access to appointments in a primary care group practice.

    PubMed

    Ozen, Asli; Balasubramanian, Hari

    2013-06-01

    At the heart of the practice of primary care is the concept of a physician panel. A panel refers to the set of patients for whose long term, holistic care the physician is responsible. A physician's appointment burden is determined by the size and composition of the panel. Size refers to the number of patients in the panel while composition refers to the case-mix, or the type of patients (older versus younger, healthy versus chronic patients), in the panel. In this paper, we quantify the impact of the size and case-mix on the ability of a multi-provider practice to provide adequate access to its empanelled patients. We use overflow frequency, or the probability that the demand exceeds the capacity, as a measure of access. We formulate problem of minimizing the maximum overflow for a multi-physician practice as a non-linear integer programming problem and establish structural insights that enable us to create simple yet near optimal heuristic strategies to change panels. This optimization framework helps a practice: (1) quantify the imbalances across physicians due to the variation in case mix and panel size, and the resulting effect on access; and (2) determine how panels can be altered in the least disruptive way to improve access. We illustrate our methodology using four test practices created using patient level data from the primary care practice at Mayo Clinic, Rochester, Minnesota. An important advantage of our approach is that it can be implemented in an Excel Spreadsheet and used for aggregate level planning and panel management decisions.
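Under a simplifying Poisson assumption on daily panel demand (not necessarily the distribution the paper uses), the overflow frequency that serves as the access measure above is one minus a Poisson CDF evaluated at the capacity:

```python
import math

def overflow_frequency(mean_demand, capacity):
    """P(daily demand > capacity) assuming Poisson-distributed demand.

    mean_demand: expected appointment requests per day for the panel
    capacity:    integer number of slots the physician can serve per day
    """
    cdf = sum(
        math.exp(-mean_demand) * mean_demand ** k / math.factorial(k)
        for k in range(capacity + 1)
    )
    return 1.0 - cdf
```

Changing panel size or case mix shifts `mean_demand`, so the minimax objective in the paper amounts to reassigning patients until the largest of these per-physician overflow probabilities is as small as possible.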

  9. Blind separation of positive sources by globally convergent gradient search.

    PubMed

    Oja, Erkki; Plumbley, Mark

    2004-09-01

    The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perhaps simplified solution methods may yet become possible. In this letter, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case, the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus, this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation, which illustrates the algorithm.
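In two dimensions, the orthogonal rotation the letter describes can be found by brute force rather than by the gradient flow on the Stiefel manifold, which makes the result easy to check. The uniform sources and mixing matrix below are arbitrary illustrative choices; note the whitening matrix is built from the covariance but applied to the uncentered data so that nonnegativity of the outputs remains meaningful.

```python
import numpy as np

rng = np.random.default_rng(5)

# Nonnegative, well-grounded sources (uniform on [0,1]) and a linear mix x = A s.
s = rng.uniform(0.0, 1.0, size=(2, 4000))
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])
x = A @ s

# Whiten with the covariance of x, keeping the mean in the whitened data.
d, E = np.linalg.eigh(np.cov(x))
V = E @ np.diag(d ** -0.5) @ E.T
z = V @ x

def negativity(angle):
    """Cost: squared magnitude of the negative parts of the rotated outputs."""
    c, sn = np.cos(angle), np.sin(angle)
    out = np.array([[c, -sn], [sn, c]]) @ z
    return np.sum(np.minimum(out, 0.0) ** 2)

# Brute-force the rotation angle; the paper minimizes this kind of cost by a
# globally convergent gradient flow over orthogonal matrices instead.
angles = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
best = min(angles, key=negativity)
c, sn = np.cos(best), np.sin(best)
recovered = np.array([[c, -sn], [sn, c]]) @ z
```

At the minimizing angle the outputs are a positive, scaled permutation of the original sources, exactly the guarantee the letter proves for the gradient algorithm.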

  10. A finite element based method for solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.

    1989-01-01

    A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Another noteworthy characteristic of the finite element formulation is that the costates appear only linearly in the algebraic equations that contain them. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.

  11. Automated Processing of Plasma Samples for Lipoprotein Separation by Rate-Zonal Ultracentrifugation.

    PubMed

    Peters, Carl N; Evans, Iain E J

    2016-12-01

Plasma lipoproteins are the primary means of lipid transport among tissues. Defining alterations in lipid metabolism is critical to our understanding of disease processes. However, lipoprotein measurement is limited to specialized centers. Preparation for ultracentrifugation involves the formation of complex density gradients, a process that is both laborious and subject to handling errors. We created a fully automated device capable of forming the required gradient. The design has been made freely available for download by the authors. It is inexpensive relative to commercial density gradient formers, which generally create linear gradients unsuitable for rate-zonal ultracentrifugation. The design can easily be modified to suit user requirements and any potential future improvements. Evaluation of the device showed reliable peristaltic pump accuracy and precision for fluid delivery. We also demonstrate accurate fluid layering with reduced mixing at the gradient layers when compared to usual practice by experienced laboratory personnel. Reduction in layer mixing is of critical importance, as it is crucial for reliable lipoprotein separation. The automated device significantly reduces laboratory staff input and reduces the likelihood of error. Overall, this device creates a simple and effective solution to the formation of complex density gradients. © 2015 Society for Laboratory Automation and Screening.

  12. Knowledge evolution in physics research: An analysis of bibliographic coupling networks.

    PubMed

    Liu, Wenyuan; Nanetti, Andrea; Cheong, Siew Ann

    2017-01-01

Even as we advance the frontiers of physics knowledge, our understanding of how this knowledge evolves remains at the descriptive levels of Popper and Kuhn. Using the American Physical Society (APS) publications data sets, we ask in this paper how new knowledge is built upon old knowledge. We do so by constructing year-to-year bibliographic coupling networks, and identify in them validated communities that represent different research fields. We then visualize their evolutionary relationships in the form of alluvial diagrams, and show how they remain intact through APS journal splits. Quantitatively, we see that most fields undergo weak Popperian mixing, and it is rare for a field to remain isolated or to undergo strong mixing. The sizes of fields obey a simple linear growth with recombination. We can also reliably predict the merging between two fields, but not the considerably more complex splitting. Finally, we report a case study of two fields that underwent repeated merging and splitting around 1995, and how these Kuhnian events are correlated with breakthroughs on Bose-Einstein condensation (BEC), quantum teleportation, and slow light. This impact showed up quantitatively in the citations of the BEC field as a larger proportion of references dating from during and shortly after these events.

  13. Kinetics of DSB rejoining and formation of simple chromosome exchange aberrations

    NASA Technical Reports Server (NTRS)

    Cucinotta, F. A.; Nikjoo, H.; O'Neill, P.; Goodhead, D. T.

    2000-01-01

    PURPOSE: To investigate the role of kinetics in the processing of DNA double strand breaks (DSB), and the formation of simple chromosome exchange aberrations following X-ray exposures to mammalian cells based on an enzymatic approach. METHODS: Using computer simulations based on a biochemical approach, rate-equations that describe the processing of DSB through the formation of a DNA-enzyme complex were formulated. A second model that allows for competition between two processing pathways was also formulated. The formation of simple exchange aberrations was modelled as misrepair during the recombination of single DSB with undamaged DNA. Non-linear coupled differential equations corresponding to biochemical pathways were solved numerically by fitting to experimental data. RESULTS: When mediated by a DSB repair enzyme complex, the processing of single DSB showed a complex behaviour that gives the appearance of fast and slow components of rejoining. This is due to the time-delay caused by the action time of enzymes in biomolecular reactions. It is shown that the kinetic- and dose-responses of simple chromosome exchange aberrations are well described by a recombination model of DSB interacting with undamaged DNA when aberration formation increases with linear dose-dependence. Competition between two or more recombination processes is shown to lead to the formation of simple exchange aberrations with a dose-dependence similar to that of a linear quadratic model. CONCLUSIONS: Using a minimal number of assumptions, the kinetics and dose response observed experimentally for DSB rejoining and the formation of simple chromosome exchange aberrations are shown to be consistent with kinetic models based on enzymatic reaction approaches. A non-linear dose response for simple exchange aberrations is possible in a model of recombination of DNA containing a DSB with undamaged DNA when two or more pathways compete for DSB repair.
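The flavor of such enzyme-mediated rate equations can be illustrated with a toy mass-action system (not the authors' fitted model; all rate constants below are invented): double strand breaks N bind a repair enzyme E to form a complex C that resolves and recycles the enzyme. The finite residence time in the complex is the kind of delay that makes simple enzymatic processing look like fast and slow rejoining components.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mass-action kinetics (illustrative rate constants, not fitted values):
#   N + E -> C   (rate k1)   DSB captured into a repair-enzyme complex
#   C -> E + R   (rate k2)   complex resolves; enzyme is recycled
k1, k2 = 0.05, 0.2

def rhs(t, y):
    N, E, C = y
    bind = k1 * N * E
    resolve = k2 * C
    return [-bind, -bind + resolve, bind - resolve]

# Start with 40 unrejoined DSB and 5 free enzyme units (arbitrary units).
sol = solve_ivp(rhs, (0.0, 200.0), [40.0, 5.0, 0.0])
N = sol.y[0]
print(N[0], N[-1])   # unrejoined DSB decay toward zero over time
```

Because the limited enzyme pool saturates early, the decay of N is initially roughly linear and only later exponential, mimicking a biphasic rejoining curve without invoking two distinct break classes.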

  14. Continuous relaxation and retardation spectrum method for viscoelastic characterization of asphalt concrete

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudip; Swamy, Aravind Krishna; Daniel, Jo S.

    2012-08-01

This paper presents a simple and practical approach to obtain the continuous relaxation and retardation spectra of asphalt concrete directly from complex (dynamic) modulus test data. The spectra thus obtained are continuous functions of relaxation and retardation time. The major advantage of this method is that the continuous form is obtained directly from the master curves, which are readily available from standard characterization tests of the linearly viscoelastic behavior of asphalt concrete. The continuous spectrum method offers an efficient alternative to the numerical computation of discrete spectra and can easily be used for modeling viscoelastic behavior. In this research, asphalt concrete specimens have been tested for linearly viscoelastic characterization. The linearly viscoelastic test data have been used to develop storage modulus and storage compliance master curves. The continuous spectra are obtained from the fitted sigmoid function of the master curves via the inverse integral transform. The continuous spectra are shown to be the limiting case of the discrete distributions. The continuous spectra and the time-domain viscoelastic functions (relaxation modulus and creep compliance) computed from the spectra match very well with the approximate solutions. It is observed that the shape of the spectra depends on the master curve parameters. The continuous spectra thus obtained can easily be implemented in the material mix design process. Prony-series coefficients can be easily obtained from the continuous spectra and used in numerical analyses such as finite element analysis.
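Although the paper derives its spectra through an exact inverse integral transform of the fitted sigmoid, the idea of reading a continuous spectrum directly off a master curve can be illustrated with the standard first-order approximation H(τ) ≈ dG′/d ln ω evaluated at ω = 1/τ (a textbook approximation, not this paper's method). For a single Maxwell element the recovered spectrum peaks at that element's relaxation time:

```python
import numpy as np

# Single Maxwell element: G'(w) = G * (w*tau0)^2 / (1 + (w*tau0)^2)
G, tau0 = 1.0e3, 1.0
w = np.logspace(-4, 4, 2000)          # angular frequency grid (rad/s)
Gp = G * (w * tau0) ** 2 / (1.0 + (w * tau0) ** 2)

# First-order continuous-spectrum approximation:
#   H(tau) ~ dG'/d(ln w) evaluated at w = 1/tau
dGp_dlnw = np.gradient(Gp, np.log(w))
tau = 1.0 / w                          # evaluation point w = 1/tau
H = dGp_dlnw

tau_peak = tau[np.argmax(H)]
print(tau_peak)                        # close to tau0, as expected
```

The same numerical recipe applied to a fitted sigmoidal master curve yields a smooth spectrum whose shape, as the paper notes, is controlled entirely by the master curve parameters.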

  15. A case-mix classification system for explaining healthcare costs using administrative data in Italy.

    PubMed

    Corti, Maria Chiara; Avossa, Francesco; Schievano, Elena; Gallina, Pietro; Ferroni, Eliana; Alba, Natalia; Dotto, Matilde; Basso, Cristina; Netti, Silvia Tiozzo; Fedeli, Ugo; Mantoan, Domenico

    2018-03-04

    The Italian National Health Service (NHS) provides universal coverage to all citizens, granting primary and hospital care with a copayment system for outpatient and drug services. Financing of Local Health Trusts (LHTs) is based on a capitation system adjusted only for age, gender and area of residence. We applied a risk-adjustment system (Johns Hopkins Adjusted Clinical Groups System, ACG® System) in order to explain health care costs using routinely collected administrative data in the Veneto Region (North-eastern Italy). All residents in the Veneto Region were included in the study. The ACG system was applied to classify the regional population based on the following information sources for the year 2015: Hospital Discharges, Emergency Room visits, Chronic disease registry for copayment exemptions, ambulatory visits, medications, the Home care database, and drug prescriptions. Simple linear regressions were used to contrast an age-gender model to models incorporating more comprehensive risk measures aimed at predicting health care costs. A simple age-gender model explained only 8% of the variance of 2015 total costs. Adding diagnoses-related variables provided a 23% increase, while pharmacy based variables provided an additional 17% increase in explained variance. The adjusted R-squared of the comprehensive model was 6 times that of the simple age-gender model. ACG System provides substantial improvement in predicting health care costs when compared to simple age-gender adjustments. Aging itself is not the main determinant of the increase of health care costs, which is better explained by the accumulation of chronic conditions and the resulting multimorbidity. Copyright © 2018. Published by Elsevier B.V.
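The R-squared comparison reported here is easy to reproduce in spirit on synthetic data (all coefficients, effect sizes, and distributions below are invented for illustration): when costs are driven mainly by accumulated chronic conditions, an age-gender regression explains little of the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000

age = rng.integers(0, 95, n).astype(float)
male = rng.integers(0, 2, n).astype(float)
# Chronic-condition count rises with age but varies widely between people.
chronic = rng.poisson(0.03 * age)

# Synthetic annual cost: mostly driven by morbidity, only weakly by age itself.
cost = 200 + 5 * age + 900 * chronic + rng.normal(0, 1500, n)

def r2(X, y):
    """R-squared of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_agesex = r2(np.column_stack([age, male]), cost)
r2_full = r2(np.column_stack([age, male, chronic]), cost)
print(r2_agesex, r2_full)   # morbidity-adjusted model explains far more variance
```

As in the study, age and gender pick up only the age-correlated share of morbidity, while adding the morbidity measure itself multiplies the explained variance.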

  16. High-accuracy self-mixing interferometer based on multiple reflections using a simple external reflecting mirror

    NASA Astrophysics Data System (ADS)

    Wang, Xiu-lin; Wei, Zheng; Wang, Rui; Huang, Wen-cai

    2018-05-01

A self-mixing interferometer (SMI) with a resolution twenty times higher than that of a conventional interferometer is developed using multiple reflections. The multiple-pass optical configuration is constructed simply by employing an external reflecting mirror. The configuration is simple and makes it easy to re-inject the light back into the laser cavity. Theoretical analysis shows that the measurement resolution is scalable by adjusting the number of reflections. Experiments show that the proposed method achieves an optical resolution of approximately λ/40. The influence of the displacement sensitivity gain (G) is further analyzed and discussed in practical experiments.

  17. Extending the range of real time density matrix renormalization group simulations

    NASA Astrophysics Data System (ADS)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which make larger time scales accessible. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states | ψ 〉 and operators A in the evaluation of 〈A〉ψ(t) = 〈 ψ | A(t) | ψ 〉 . This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics 〈A〉ρ(t) = Tr [ ρA(t) ] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently-introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the Python programming language as an elegant option for beginners to set up a DMRG code.

  18. Validated HPLC-UV method for determination of naproxen in human plasma with proven selectivity against ibuprofen and paracetamol.

    PubMed

    Filist, Monika; Szlaska, Iwona; Kaza, Michał; Pawiński, Tomasz

    2016-06-01

Estimating the influence of interfering compounds present in the biological matrix on the determination of an analyte is one of the most important tasks during bioanalytical method development and validation. Interferences from endogenous components and, if necessary, from major metabolites as well as possible co-administered medications should be evaluated during a selectivity test. This paper describes a simple, rapid and cost-effective HPLC-UV method for the determination of naproxen in human plasma in the presence of two other analgesics, ibuprofen and paracetamol. Sample preparation is based on a simple liquid-liquid extraction procedure with a short, 5 s mixing time. Fenoprofen, which is characterized by a similar structure and properties to naproxen, was first used as the internal standard. The calibration curve is linear in the concentration range of 0.5-80.0 µg/mL, which is suitable for pharmacokinetic studies following a single 220 mg oral dose of naproxen sodium. The method was fully validated according to international guidelines and was successfully applied in a bioequivalence study in humans. Copyright © 2015 John Wiley & Sons, Ltd.

  19. Evaluating Treatment and Generalization Patterns of Two Theoretically Motivated Sentence Comprehension Therapies.

    PubMed

    Des Roches, Carrie A; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David; Kiran, Swathi

    2016-12-01

    The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type.

  20. Low frequency vibration induced streaming in a Hele-Shaw cell

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costalonga, M., E-mail: maxime.costalonga@univ-paris-diderot.fr; Laboratoire Matière et Systèmes Complexes, UMR CNRS 7057, Université Paris Diderot, 10 rue Alice Domon et Léonie Duquet, 75205 Paris cedex 13; Brunet, P.

When an acoustic wave propagates in a fluid, it can generate a second order flow whose characteristic time is much longer than the period of the wave. Within a range of frequency between ten and several hundred Hz, a relatively simple and versatile way to generate streaming flow is to put a vibrating object in the fluid. The flow develops vortices in the viscous boundary layer located in the vicinity of the source of vibrations, leading in turn to an outer irrotational streaming called Rayleigh streaming. Because the flow originates from non-linear time-irreversible terms of the Navier-Stokes equation, this phenomenon can be used to generate efficient mixing at low Reynolds number, for instance in confined geometries. Here, we report on an experimental study of such streaming flow induced by a vibrating beam in a Hele-Shaw cell of 2 mm span using long exposure flow visualization and particle-image velocimetry measurements. Our study focuses especially on the effects of forcing frequency and amplitude on flow dynamics. It is shown that some features of this flow can be predicted by simple scaling arguments and that this vibration-induced streaming facilitates the generation of vortices.

  1. Combinations of Aromatic and Aliphatic Radiolysis.

    PubMed

    LaVerne, Jay A; Dowling-Medley, Jennifer

    2015-10-08

The production of H(2) in the radiolysis of benzene, methylbenzene (toluene), ethylbenzene, butylbenzene, and hexylbenzene with γ-rays, 2-10 MeV protons, 5-20 MeV helium ions, and 10-30 MeV carbon ions is used as a probe of the overall radiation sensitivity and to determine the relative contributions of aromatic and aliphatic entities in mixed hydrocarbons. Adding an aliphatic side chain of progressively increasing length, from one to six carbons, to benzene increases the H(2) yield with γ-rays, but the yield seems to reach a plateau far below that found for a simple aliphatic such as cyclohexane. There is a large increase in H(2) with LET (linear energy transfer) for all of the substituted benzenes, which indicates that the main process for H(2) formation is a second-order process dominated by the aromatic entity. The addition of a small amount of benzene to cyclohexane can lower the H(2) yield below the value expected from a simple mixture law. A 50:50% volume mixture of benzene-cyclohexane has essentially the same H(2) yield as cyclohexylbenzene over a wide range of LET, suggesting that intermolecular energy transfer is as efficient as intramolecular energy transfer.

  2. REE in the Great Whale River estuary, northwest Quebec

    NASA Technical Reports Server (NTRS)

    Goldstein, Steven J.; Jacobsen, Stein B.

    1988-01-01

A report on REE concentrations within the estuary of the Great Whale River in northwest Quebec and in Hudson Bay is given, showing concentrations which are less than those predicted by conservative mixing of seawater and river water, indicating removal of REE from solution. REE removal is rapid, occurring primarily at salinities less than 2 percent, and its extent ranges from about 70 percent for light REE to no more than 40 percent for heavy REE. At low salinity, Fe removal is essentially complete. The shape of the Fe and REE vs. salinity profiles is not consistent with a simple model of destabilization and coagulation of Fe- and REE-bearing colloidal material. A linear relationship between the activity of the free ion REE(3+) and pH is consistent with a simple ion-exchange model for REE removal. Surface and subsurface samples of Hudson Bay seawater show high REE concentrations and La/Yb ratios relative to average seawater, with the subsurface sample having a Nd concentration of 100 pmol/kg and an epsilon(Nd) of -29.3, characteristics consistent with the river inputs to Hudson Bay. This indicates that rivers draining the Canadian Shield are a major source of nonradiogenic Nd and REE to the Atlantic Ocean.

  3. Hexa Helix: Modified Quad Helix Appliance to Correct Anterior and Posterior Crossbites in Mixed Dentition

    PubMed Central

    Yaseen, Syed Mohammed; Acharya, Ravindranath

    2012-01-01

The crossbite is among the commonly encountered dental irregularities that constitute developing malocclusion. Crossbites are seen very often during the primary and mixed dentition phases, and if left untreated a simple problem may be transformed into a more complex one. Different techniques have been used to correct anterior and posterior crossbites in mixed dentition. This case report describes the use of the hexa helix, a modified version of the quad helix, for the management of anterior crossbite and bilateral posterior crossbite in early mixed dentition. Correction was achieved within 15 weeks with no damage to the tooth or the marginal periodontal tissue. The procedure is a simple and effective method for treating anterior and bilateral posterior crossbites simultaneously. PMID:23119188

  4. Simple quasi-analytical holonomic homogenization model for the non-linear analysis of in-plane loaded masonry panels: Part 1, meso-scale

    NASA Astrophysics Data System (ADS)

    Milani, G.; Bertolesi, E.

    2017-07-01

    A simple quasi analytical holonomic homogenization approach for the non-linear analysis of masonry walls in-plane loaded is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.

  5. Study on the Spectral Mixing Model for Mineral Pigments Based on Derivative of Ratio Spectroscopy-Take Vermilion and Stone Yellow for Example

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Hao, Y.; Liu, X.; Hou, M.; Zhao, X.

    2018-04-01

Hyperspectral remote sensing is a completely non-invasive technology for the measurement of cultural relics, and has been successfully applied in the identification and analysis of pigments in Chinese historical paintings. Although mixing pigments is very common in Chinese historical paintings, the quantitative analysis of mixed pigments in ancient paintings remains unsolved. In this research, we took two typical mineral pigments, vermilion and stone yellow, as examples, made precisely mixed samples of the two pigments, and measured their spectra in the laboratory. For the mixed spectra, both the fully constrained least squares (FCLS) method and derivative of ratio spectroscopy (DRS) were performed. Experimental results showed that the mixed spectra of vermilion and stone yellow had strong nonlinear mixing characteristics, but at some bands linear unmixing could also achieve satisfactory results. DRS using strongly linear bands can reach much higher accuracy than FCLS using full bands.
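The core DRS manipulation for a two-component linear mixture can be sketched with synthetic spectra (Gaussian bands standing in for the real pigment reflectances; the paper finds real mixtures are only approximately linear in selected bands): dividing the mixture by one endmember and differentiating eliminates that endmember's contribution, leaving the other abundance as a regression slope.

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 601)          # wavelength grid, nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Synthetic endmember spectra (stand-ins for vermilion and stone yellow).
a = 0.2 + 0.7 * band(600.0, 40.0)
b = 0.3 + 0.5 * band(500.0, 60.0)

c1, c2 = 0.35, 0.65                          # true abundances
m = c1 * a + c2 * b                          # linear mixture

# Derivative of ratio spectroscopy: m/b = c1*(a/b) + c2, so
#   d(m/b)/dwl = c1 * d(a/b)/dwl   -- the constant term c2 vanishes.
dr = np.gradient(m / b, wl)
dab = np.gradient(a / b, wl)
c1_hat = np.sum(dr * dab) / np.sum(dab ** 2) # least-squares slope
print(c1_hat)                                # recovers c1
```

On real pigment spectra the ratio is only piecewise linear, which is why the paper restricts DRS to strongly linear bands rather than the full range.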

  6. Linear and Non-Linear Visual Feature Learning in Rat and Humans

    PubMed Central

    Bossens, Christophe; Op de Beeck, Hans P.

    2016-01-01

    The visual system processes visual input in a hierarchical manner in order to extract relevant features that can be used in tasks such as invariant object recognition. Although typically investigated in primates, recent work has shown that rats can be trained in a variety of visual object and shape recognition tasks. These studies did not pinpoint the complexity of the features used by these animals. Many tasks might be solved by using a combination of relatively simple features which tend to be correlated. Alternatively, rats might extract complex features or feature combinations which are nonlinear with respect to those simple features. In the present study, we address this question by starting from a small stimulus set for which one stimulus-response mapping involves a simple linear feature to solve the task while another mapping needs a well-defined nonlinear combination of simpler features related to shape symmetry. We verified computationally that the nonlinear task cannot be trivially solved by a simple V1-model. We show how rats are able to solve the linear feature task but are unable to acquire the nonlinear feature. In contrast, humans are able to use the nonlinear feature and are even faster in uncovering this solution as compared to the linear feature. The implications for the computational capabilities of the rat visual system are discussed. PMID:28066201

  7. Approximating a nonlinear advanced-delayed equation from acoustics

    NASA Astrophysics Data System (ADS)

    Teodoro, M. Filomena

    2016-10-01

We approximate the solution of a particular non-linear mixed-type functional differential equation from physiology, the mucosal wave model of the vocal oscillation during phonation. The mathematical equation models a superficial wave propagating through the tissues. The numerical scheme is adapted from the work presented in [1, 2, 3], using the homotopy analysis method (HAM) to solve the non-linear mixed-type equation under study.

  8. Incorporation of diet information derived from Bayesian stable isotope mixing models into mass-balanced marine ecosystem models: A case study from the Marennes-Oleron Estuary, France

    EPA Science Inventory

    We investigated the use of output from Bayesian stable isotope mixing models as constraints for a linear inverse food web model of a temperate intertidal seagrass system in the Marennes-Oléron Bay, France. Linear inverse modeling (LIM) is a technique that estimates a complete net...

  9. Improving the Power of GWAS and Avoiding Confounding from Population Stratification with PC-Select

    PubMed Central

    Tucker, George; Price, Alkes L.; Berger, Bonnie

    2014-01-01

    Using a reduced subset of SNPs in a linear mixed model can improve power for genome-wide association studies, yet this can result in insufficient correction for population stratification. We propose a hybrid approach using principal components that does not inflate statistics in the presence of population stratification and improves power over standard linear mixed models. PMID:24788602

  10. Determination of perfluorinated compounds in fish fillet homogenates: method validation and application to fillet homogenates from the Mississippi River.

    PubMed

    Malinsky, Michelle Duval; Jacoby, Cliffton B; Reagen, William K

    2011-01-10

    We report herein a simple protein precipitation extraction-liquid chromatography tandem mass spectrometry (LC/MS/MS) method, validation, and application for the analysis of perfluorinated carboxylic acids (C7-C12), perfluorinated sulfonic acids (C4, C6, and C8), and perfluorooctane sulfonamide (FOSA) in fish fillet tissue. The method combines a rapid homogenization and protein precipitation tissue extraction procedure using stable-isotope internal standard (IS) calibration. Method validation in bluegill (Lepomis macrochirus) fillet tissue evaluated the following: (1) method accuracy and precision in both extracted matrix-matched calibration and solvent (unextracted) calibration, (2) quantitation of mixed branched and linear isomers of perfluorooctanoate (PFOA) and perfluorooctanesulfonate (PFOS) with linear isomer calibration, (3) quantitation of low level (ppb) perfluorinated compounds (PFCs) in the presence of high level (ppm) PFOS, and (4) specificity from matrix interferences. Both calibration techniques produced method accuracy of at least 100±13% with a precision (%RSD) ≤18% for all target analytes. Method accuracy and precision results for fillet samples from nine different fish species taken from the Mississippi River in 2008 and 2009 are also presented. Copyright © 2010 Elsevier B.V. All rights reserved.

  11. High-harmonic generation by two-color mixing of circularly polarized laser fields

    NASA Astrophysics Data System (ADS)

    Milošević, D. B.; Becker, W.; Kopold, R.

    2000-06-01

    Dipole selection rules prevent harmonic generation by an atom in a circularly polarized laser field. However, this is not the case for a superposition of several circularly polarized fields, such as two circularly polarized fields with frequencies ω and 2ω that corotate or counter-rotate in the same plane. Harmonic generation in this environment has been observed and, in fact, found to be very intense in the counter-rotating case [1]. In a certain frequency region, the harmonics may be stronger than those radiated in a linearly polarized field of either frequency. The selection rules dictate that the harmonics are circularly polarized with a helicity that alternates from one harmonic to the next. Besides their practical interest, these harmonics are also intriguing from a fundamental point of view: the standard simple-man picture does not apply since orbits that start with zero velocity in this field almost never return to their point of departure. In terms of quantum trajectories, we discuss the mechanism that generates these harmonics. In several interesting ways, it is complementary to the case of linear polarization. [1] H. Eichmann et al., Phys. Rev. A 51, R3414 (1995)

  12. Multi-Mode Analysis of Dual Ridged Waveguide Systems for Material Characterization

    DTIC Science & Technology

    2015-09-17

characterization is the process of determining the dielectric, magnetic, and magnetoelectric properties of a material. For simple (i.e., linear ...field expressions in terms of elementary functions (sines, cosines, exponentials and Bessel functions) and corresponding propagation constants of the...with material parameters ε0 and µ0. • The MUT is simple (linear, isotropic, homogeneous), and the sample has a uniform thickness. • The waveguide

  13. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users.

    PubMed

    Scheperle, Rachel A; Abbas, Paul J

    2015-01-01

The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.

  14. On summary measure analysis of linear trend repeated measures data: performance comparison with two competing methods.

    PubMed

    Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh

    2012-03-22

The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear trend repeated measures data and then to compare the performance of the SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and the mean response over time for each subject were provided to test time, group, and interaction effects. Monte Carlo simulation studies illustrated the efficacy of the SMA versus the LMM and the traditional UMA under different types of covariance structures. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to implausible results when the covariance structure for the errors was misspecified. The results argue for discarding the UMA, which often yielded extremely conservative inferences for such data. Summary measures were shown to be a simple, safe and powerful approach whose loss of efficiency relative to the best-fitting LMM was generally negligible. The SMA is recommended as the first choice for reliably analyzing linear trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
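
    A minimal numerical sketch of the summary-measure idea, using synthetic data rather than the paper's examples: each subject's repeated measurements are reduced to a single least-squares slope, and the group-by-time interaction is then tested by comparing the two groups' mean slopes. The Welch t statistic used here is one reasonable choice; the paper's exact test procedure may differ.

    ```python
    import numpy as np

    def subject_slopes(times, Y):
        """Least-squares slope of response vs. time for each subject (rows of Y)."""
        X = np.vstack([times, np.ones_like(times)]).T
        coef, *_ = np.linalg.lstsq(X, Y.T, rcond=None)  # coef[0] holds the slopes
        return coef[0]

    def welch_t(a, b):
        """Welch two-sample t statistic comparing mean summary measures."""
        va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
        return (a.mean() - b.mean()) / np.sqrt(va + vb)

    rng = np.random.default_rng(0)
    t = np.arange(6.0)                        # 6 repeated measurements per subject
    g1 = 1.0 * t + rng.normal(0, 1, (20, 6))  # group 1: true slope 1
    g2 = 2.0 * t + rng.normal(0, 1, (20, 6))  # group 2: true slope 2
    s1, s2 = subject_slopes(t, g1), subject_slopes(t, g2)
    print(welch_t(s2, s1))                    # tests the group-by-time interaction
    ```

    Each subject contributes one number, so the between-subject test is an ordinary two-sample comparison, which is what makes the approach safe under arbitrary within-subject covariance.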

  15. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
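
    As one concrete example of the curve families compared, the Wood model y(t) = a·t^b·exp(-c·t) can be fitted to test-day records by non-linear least squares. The sketch below uses noise-free synthetic data, not the Iranian buffalo records, and fits fixed effects only (PROC NLMIXED would additionally estimate per-animal random effects):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def wood(t, a, b, c):
        """Wood incomplete-gamma lactation curve: y = a * t**b * exp(-c*t)."""
        return a * t**b * np.exp(-c * t)

    # hypothetical monthly test-day values generated from known parameters
    t = np.arange(1, 11, dtype=float)
    y = wood(t, 1.2, 0.15, 0.05)

    p, _ = curve_fit(wood, t, y, p0=[1.0, 0.1, 0.05])
    print(p)  # fitted (a, b, c), recovering the generating parameters
    ```

    Goodness-of-fit criteria such as AIC and BIC can then be computed from the residual likelihood of each candidate model to rank the seven curve families.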

  16. Ternary mixed crystal effects on interface optical phonon and electron-phonon coupling in zinc-blende GaN/AlxGa1-xN spherical quantum dots

    NASA Astrophysics Data System (ADS)

    Huang, Wen Deng; Chen, Guang De; Yuan, Zhao Lin; Yang, Chuang Hua; Ye, Hong Gang; Wu, Ye Long

    2016-02-01

Interface optical phonons, electron-phonon coupling, and ternary mixed-crystal effects in zinc-blende spherical quantum dots are investigated theoretically using the dielectric continuum model and the modified random-element isodisplacement model. The dispersion curves, electron-phonon coupling strengths, and ternary mixed-crystal effects for interface optical phonons in a single zinc-blende GaN/AlxGa1-xN spherical quantum dot are calculated and discussed in detail. The numerical results show three branches of interface optical phonons: one in the low-frequency region and two in the high-frequency region. Interface optical phonons with small quantum number l contribute most to the electron-phonon interaction. Ternary mixed-crystal effects are also found to strongly influence the interface optical phonon properties in a single zinc-blende GaN/AlxGa1-xN quantum dot. As the Al content increases, the interface optical phonon frequencies vary linearly while the electron-phonon coupling strengths vary non-linearly in the high-frequency region; in the low-frequency region, the frequencies vary non-linearly while the coupling strengths vary linearly.

  17. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    PubMed

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  18. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    PubMed

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz

    2015-04-01

    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
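
    The piecewise linear functional form favored by this study can be expressed with a truncated-basis design matrix: an intercept, a pre-knot slope, and a change in slope after the knot. A minimal fixed-effects-only sketch with a hypothetical knot at year 4 (the full analysis would add random effects per student):

    ```python
    import numpy as np

    def piecewise_design(t, knot):
        """Design matrix for a one-knot piecewise linear trajectory:
        intercept, slope before the knot, and change in slope after it."""
        return np.column_stack([np.ones_like(t), t, np.clip(t - knot, 0, None)])

    t = np.arange(0.0, 9.0)  # 9 yearly assessments (hypothetical)
    # generated growth: intercept 2, slope 3 before year 4, slope 1 after
    y = 2.0 + 3.0 * t - 2.0 * np.clip(t - 4.0, 0, None)

    beta, *_ = np.linalg.lstsq(piecewise_design(t, 4.0), y, rcond=None)
    print(beta)  # recovers [intercept, pre-knot slope, slope change]
    ```

    Fitting linear, quadratic, and piecewise variants of this design and comparing information criteria mirrors the model-selection step described in the abstract.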

  19. A simplified design of the staggered herringbone micromixer for practical applications

    PubMed Central

    Du, Yan; Zhang, Zhiyi; Yim, ChaeHo; Lin, Min; Cao, Xudong

    2010-01-01

We demonstrated a simple method for the design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration versus channel length profiles and used the mixing completion length Lm as the sole parameter for evaluating the effect of device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application. Good agreement was found between the optimization and the experimental data. Since Lm is a straightforward, easily defined and calculated parameter for characterizing mixing performance, this design method is simple and effective for practical applications. PMID:20697584

  20. A simplified design of the staggered herringbone micromixer for practical applications.

    PubMed

    Du, Yan; Zhang, Zhiyi; Yim, Chaeho; Lin, Min; Cao, Xudong

    2010-05-07

We demonstrated a simple method for the design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration versus channel length profiles and used the mixing completion length L(m) as the sole parameter for evaluating the effect of device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application. Good agreement was found between the optimization and the experimental data. Since L(m) is a straightforward, easily defined and calculated parameter for characterizing mixing performance, this design method is simple and effective for practical applications.

  1. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
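
    A compact sketch of the nested-loop idea described above, with randomization and the pruning rule: a point's k-NN distance can only shrink as more neighbours are examined, so the inner loop can stop as soon as that distance drops below the weakest score in the current top-n list. This is an illustrative toy (quadratic in the worst case, as the abstract notes), not the authors' optimized implementation:

    ```python
    import random

    def knn_outliers(points, k, n_out):
        """Distance-based outliers via a randomized nested loop with pruning:
        score each point by its distance to its k-th nearest neighbour and
        keep the n_out highest-scoring points."""
        data = list(points)
        random.seed(0)
        random.shuffle(data)       # random order is what makes pruning effective
        top, cutoff = [], 0.0      # top: (score, point) pairs, ascending by score
        for x in data:
            nearest = []           # k smallest distances to x seen so far
            pruned = False
            for y in data:
                if y is x:
                    continue
                d = sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
                nearest.append(d)
                nearest.sort()
                del nearest[k:]
                # the running k-NN distance only shrinks, so once it is below
                # the cutoff, x can never enter the top list: stop early
                if len(nearest) == k and nearest[-1] < cutoff:
                    pruned = True
                    break
            if not pruned:
                top.append((nearest[-1], x))
                top.sort(key=lambda sx: sx[0])
                top = top[-n_out:]
                if len(top) == n_out:
                    cutoff = top[0][0]
        return [p for _, p in reversed(top)]  # strongest outlier first

    cluster = [(0.1 * i, 0.0) for i in range(30)]
    print(knn_outliers(cluster + [(50.0, 50.0)], k=3, n_out=1))
    ```

    Because most points are non-outliers, their inner loops terminate after a handful of comparisons once the cutoff rises, which is the intuition behind the near-linear average-case behavior reported in the paper.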

  2. High quality diabetes care: testing the effectiveness of strategies of regional implementation teams.

    PubMed

    Drach-Zahavy, Anat; Shadmi, Efrat; Freund, Anat; Goldfracht, Margalit

    2009-01-01

The purpose of this article is to identify and test the effectiveness of work strategies employed by regional implementation teams to attain high quality care for diabetes patients. The study was conducted in a major health maintenance organization (HMO) that provides care for 70 per cent of Israel's diabetes patients. A sequential mixed model design, combining qualitative and quantitative methods, was employed. In-depth interviews were conducted with members of six regional implementation teams, each responsible for the care of 25,000-34,000 diabetic patients. Content analysis of the interviews revealed that teams employed four key strategies: task-interdependence, goal-interdependence, reliance on top-down standardised processes and team-learning. These strategies were used to predict the mean percentage performance of eight evidence-based indicators of diabetes care: percentage of patients with HbA1c < 7 per cent, blood pressure < or = 130/80 and cholesterol < or = 100; and performance of: HbA1c tests, LDL cholesterol tests, blood pressure measurements, urine protein tests, and ophthalmic examinations. Teams were found to vary in their use of the four strategies. Mixed linear model analysis indicated that type of indicator (simple process, compound process, and outcome) and goal interdependence were significantly linked to team effectiveness. For simple-process indicators, reliance on top-down standardised processes led to team effectiveness, but for outcome measures this strategy was ineffective, and even counter-effective. For outcome measures, team-learning was more beneficial. The findings have implications for the management of chronic diseases, attesting to the advantage of allowing team members flexibility in choosing the work strategy best suited to attaining high quality diabetes care.

  3. A simple and fast method based on mixed hemimicelles coated magnetite nanoparticles for simultaneous extraction of acidic and basic pollutants.

    PubMed

    Asgharinezhad, Ali Akbar; Ebrahimzadeh, Homeira

    2016-01-01

One of the considerable and disputable areas in analytical chemistry is single-step simultaneous extraction of acidic and basic pollutants. In this research, a simple and fast coextraction of acidic and basic pollutants (with different polarities) with the aid of magnetic dispersive micro-solid phase extraction based on mixed hemimicelles assembly was introduced for the first time. Cetyltrimethylammonium bromide (CTAB)-coated Fe3O4 nanoparticles were successfully applied as an efficient sorbent to adsorb 4-nitrophenol and 4-chlorophenol as acidic model compounds and chlorinated aromatic amines as basic model compounds. Using a central composite design methodology combined with a desirability function approach, the optimal experimental conditions were evaluated. The optimized conditions were pH = 10; concentration of CTAB = 0.86 mmol L(-1); sorbent amount = 55.5 mg; sorption time = 11.0 min; no salt addition to the sample; type and volume of the eluent = 120 μL methanol containing 5% acetic acid and 0.01 mol L(-1) HCl; and elution time = 1.0 min. Under the optimum conditions, detection limits and linear dynamic ranges were in the ranges of 0.05-0.1 and 0.25-500 μg L(-1), respectively. The extraction recoveries and relative standard deviations (n = 5) were in the ranges of 71.4-98.0% and 4.5-6.5%, respectively. The performance of the optimized method was certified by coextraction of other acidic and basic compounds. Ultimately, the applicability of the method was successfully confirmed by the extraction and determination of the target analytes in various water samples, and satisfactory results were obtained.

  4. Morphological differences in skeletal muscle atrophy of rats with motor nerve and/or sensory nerve injury★

    PubMed Central

    Zhao, Lei; Lv, Guangming; Jiang, Shengyang; Yan, Zhiqiang; Sun, Junming; Wang, Ling; Jiang, Donglin

    2012-01-01

    Skeletal muscle atrophy occurs after denervation. The present study dissected the rat left ventral root and dorsal root at L4-6 or the sciatic nerve to establish a model of simple motor nerve injury, sensory nerve injury or mixed nerve injury. Results showed that with prolonged denervation time, rats with simple motor nerve injury, sensory nerve injury or mixed nerve injury exhibited abnormal behavior, reduced wet weight of the left gastrocnemius muscle, decreased diameter and cross-sectional area and altered ultrastructure of muscle cells, as well as decreased cross-sectional area and increased gray scale of the gastrocnemius muscle motor end plate. Moreover, at the same time point, the pathological changes were most severe in mixed nerve injury, followed by simple motor nerve injury, and the changes in simple sensory nerve injury were the mildest. These findings indicate that normal skeletal muscle morphology is maintained by intact innervation. Motor nerve injury resulted in larger damage to skeletal muscle and more severe atrophy than sensory nerve injury. Thus, reconstruction of motor nerves should be considered first in the clinical treatment of skeletal muscle atrophy caused by denervation. PMID:25337102

  5. Incorporating Active Runway Crossings in Airport Departure Scheduling

    NASA Technical Reports Server (NTRS)

    Gupta, Gautam; Malik, Waqar; Jung, Yoon C.

    2010-01-01

A mixed integer linear program is presented for deterministically scheduling departure and arrival aircraft at airport runways. This method addresses different schemes of managing the departure queuing area by treating it as a set of first-in-first-out queues or as a simple parking area where any available aircraft can take off irrespective of its relative sequence with others. In addition, the method explicitly considers separation criteria between successive aircraft and incorporates an optional prioritization scheme using time windows. Multiple objectives pertaining to throughput and system delay are used independently. Results indicate improvement over a basic first-come-first-served rule in both system delay and throughput. Minimizing system delay results in small deviations from optimal throughput, whereas maximizing throughput results in large deviations in system delay. Enhancements for computational efficiency are also presented in the form of reformulated constraints and additional inequalities that yield better bounds.

  6. Scattering of 42-MeV alpha particles from Cu-65

    NASA Technical Reports Server (NTRS)

    Stewart, W. M.; Seth, K. K.

    1972-01-01

The extended particle-core coupling model was used to predict the properties of low-lying levels of Cu-65. A 42-MeV alpha particle cyclotron beam was used for the experiment. The experiment included magnetic analysis of the incident beam and particle detection by lithium-drifted silicon semiconductors. Angular distributions were measured from 10 to 50 degrees in the center-of-mass system. Data were reduced by fitting the peaks with a skewed Gaussian function using a least squares computer program with a linear background search. The energy calibration of each system was done with a pulser, and the excitation energies are accurate to +/- 25 keV. The simple weak coupling model cannot account for the experimentally observed quantities of the low-lying levels of Cu-65. The extended particle-core calculation showed that the coupling is not weak and that considerable configuration mixing of the low-lying states results.

  7. Finite-time synchronization of stochastic coupled neural networks subject to Markovian switching and input saturation.

    PubMed

    Selvaraj, P; Sakthivel, R; Kwon, O M

    2018-06-07

    This paper addresses the problem of finite-time synchronization of stochastic coupled neural networks (SCNNs) subject to Markovian switching, mixed time delay, and actuator saturation. In addition, coupling strengths of the SCNNs are characterized by mutually independent random variables. By utilizing a simple linear transformation, the problem of stochastic finite-time synchronization of SCNNs is converted into a mean-square finite-time stabilization problem of an error system. By choosing a suitable mode dependent switched Lyapunov-Krasovskii functional, a new set of sufficient conditions is derived to guarantee the finite-time stability of the error system. Subsequently, with the help of anti-windup control scheme, the actuator saturation risks could be mitigated. Moreover, the derived conditions help to optimize estimation of the domain of attraction by enlarging the contractively invariant set. Furthermore, simulations are conducted to exhibit the efficiency of proposed control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Aza-heterocyclic Receptors for Direct Electron Transfer Hemoglobin Biosensor

    NASA Astrophysics Data System (ADS)

    Kumar, Vinay; Kashyap, D. M. Nikhila; Hebbar, Suraj; Swetha, R.; Prasad, Sujay; Kamala, T.; Srikanta, S. S.; Krishnaswamy, P. R.; Bhat, Navakanta

    2017-02-01

    Direct Electron Transfer biosensors, facilitating direct communication between the biomolecule of interest and electrode surface, are preferable compared to enzymatic and mediator based sensors. Although hemoglobin (Hb) contains four redox active iron centres, direct detection is not possible due to inaccessibility of iron centres and formation of dimers, blocking electron transfer. Through the coordination of iron with aza-heterocyclic receptors - pyridine and imidazole - we report a cost effective, highly sensitive and simple electrochemical Hb sensor using cyclic voltammetry and chronoamperometry. The receptor can be either in the form of liquid micro-droplet mixed with blood or dry chemistry embedded in paper membrane on top of screen printed carbon electrodes. We demonstrate excellent linearity and robustness against interference using clinical samples. A truly point of care technology is demonstrated by integrating disposable test strips with handheld reader, enabling finger prick to result in less than a minute.

  9. On the volume-dependence of the index of refraction from the viewpoint of the complex dielectric function and the Kramers-Kronig relation.

    PubMed

    Rocquefelte, Xavier; Jobic, Stéphane; Whangbo, Myung-Hwan

    2006-02-16

How the indices of refraction n(ω) of insulating solids are affected by the volume dilution of an optical entity and by the mixing of different, noninteracting simple solid components was examined on the basis of the dielectric function ε1(ω) + iε2(ω). For closely related insulating solids with an identical composition and formula unit volume V, the relation [ε1(ω) - 1]V = constant was found by combining the relation ε2(ω)V = constant with the Kramers-Kronig relation. This relation becomes [n²(ω) - 1]V = constant for the index of refraction n(ω) determined for incident light with energy less than the band gap (i.e., ħω < Eg). For a narrow range of change in the formula unit volume, the latter relation is well approximated by a linear relation between n and 1/V.
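
    The volume dependence stated above is easy to apply numerically: given an index n1 at formula unit volume V1, the invariant (n² - 1)V = constant predicts the index at a dilated volume V2. The numbers below are purely illustrative:

    ```python
    def predict_index(n1, v1, v2):
        """Index of refraction after a volume change, assuming the invariant
        (n**2 - 1) * V = constant (valid for light below the band gap)."""
        return ((n1 ** 2 - 1.0) * v1 / v2 + 1.0) ** 0.5

    # hypothetical values: diluting the optical entity into 1.5x the volume
    print(predict_index(2.0, 1.0, 1.5))  # the index drops toward 1 as V grows
    ```

    The prediction satisfies the invariant by construction, and for small volume changes it reduces to the near-linear n versus 1/V relation mentioned at the end of the abstract.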

  10. Mutation-selection equilibrium in games with multiple strategies.

    PubMed

    Antal, Tibor; Traulsen, Arne; Ohtsuki, Hisashi; Tarnita, Corina E; Nowak, Martin A

    2009-06-21

In evolutionary games the fitness of individuals is not constant but depends on the relative abundance of the various strategies in the population. Here we study general games among n strategies in populations of large but finite size. We explore stochastic evolutionary dynamics under weak selection, but for any mutation rate. We analyze the frequency dependent Moran process in well-mixed populations, but almost identical results are found for the Wright-Fisher and Pairwise Comparison processes. Surprisingly simple conditions specify whether a strategy is more abundant on average than 1/n, or than another strategy, in the mutation-selection equilibrium. We find one condition that holds for low mutation rate and another condition that holds for high mutation rate. A linear combination of these two conditions holds for any mutation rate. Our results allow a complete characterization of n×n games in the limit of weak selection.

  11. Modulation of additive and interactive effects in lexical decision by trial history.

    PubMed

    Masson, Michael E J; Kliegl, Reinhold

    2013-05-01

    Additive and interactive effects of word frequency, stimulus quality, and semantic priming have been used to test theoretical claims about the cognitive architecture of word-reading processes. Additive effects among these factors have been taken as evidence for discrete-stage models of word reading. We present evidence from linear mixed-model analyses applied to 2 lexical decision experiments indicating that apparent additive effects can be the product of aggregating over- and underadditive interaction effects that are modulated by recent trial history, particularly the lexical status and stimulus quality of the previous trial's target. Even a simple practice effect expressed as improved response speed across trials was powerfully modulated by the nature of the previous target item. These results suggest that additivity and interaction between factors may reflect trial-to-trial variation in stimulus representations and decision processes rather than fundamental differences in processing architecture.

  12. A kinetic method for the determination of thiourea by its catalytic effect in micellar media

    NASA Astrophysics Data System (ADS)

    Abbasi, Shahryar; Khani, Hossein; Gholivand, Mohammad Bagher; Naghipour, Ali; Farmany, Abbas; Abbasi, Freshteh

    2009-03-01

    A highly sensitive, selective and simple kinetic method was developed for the determination of trace levels of thiourea based on its catalytic effect on the oxidation of janus green in phosphoric acid media and presence of Triton X-100 surfactant without any separation and pre-concentration steps. The reaction was monitored spectrophotometrically by tracing the formation of the green-colored oxidized product of janus green at 617 nm within 15 min of mixing the reagents. The effect of some factors on the reaction speed was investigated. Following the recommended procedure, thiourea could be determined with linear calibration graph in 0.03-10.00 μg/ml range. The detection limit of the proposed method is 0.02 μg/ml. Most of foreign species do not interfere with the determination. The high sensitivity and selectivity of the proposed method allowed its successful application to fruit juice and industrial waste water.

  13. Prediction of the Main Engine Power of a New Container Ship at the Preliminary Design Stage

    NASA Astrophysics Data System (ADS)

    Cepowski, Tomasz

    2017-06-01

    The paper presents mathematical relationships that allow us to forecast the estimated main engine power of new container ships, based on data concerning vessels built in 2005-2015. The presented approximations allow us to estimate the engine power based on the length between perpendiculars and the number of containers the ship will carry. The approximations were developed using simple linear regression and multivariate linear regression analysis. The presented relations have practical application for estimation of container ship engine power needed in preliminary parametric design of the ship. It follows from the above that the use of multiple linear regression to predict the main engine power of a container ship brings more accurate solutions than simple linear regression.
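
    Because the simple model (length only) is nested in the multivariate model (length plus container count), the multivariate fit can never have a larger residual sum of squares, which is consistent with the paper's conclusion. A sketch with made-up ship data (not the 2005-2015 fleet sample used in the paper):

    ```python
    import numpy as np

    # hypothetical container ships: length between perpendiculars Lpp [m],
    # container capacity [TEU], and main engine power P [kW]
    Lpp = np.array([150.0, 200.0, 250.0, 300.0, 350.0, 366.0])
    teu = np.array([1500.0, 3000.0, 5000.0, 8000.0, 12000.0, 14000.0])
    P = 8.0 * Lpp + 2.2 * teu + np.array([300.0, -200.0, 150.0, -100.0, 50.0, -80.0])

    def fit(X, y):
        """Least-squares fit with an intercept column appended; returns predictions."""
        A = np.column_stack([X, np.ones(len(y))])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return A @ beta

    simple_pred = fit(Lpp[:, None], P)                  # simple regression: length only
    multi_pred = fit(np.column_stack([Lpp, teu]), P)    # multivariate: length + TEU

    def rss(pred):
        """Residual sum of squares of a set of predictions."""
        return float(np.sum((P - pred) ** 2))

    print(rss(simple_pred), rss(multi_pred))  # the multivariate fit is tighter
    ```

    In preliminary parametric design the gain matters because both predictors are already known at that stage, so the extra accuracy comes at no additional data cost.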

  14. Assessing map accuracy in a remotely sensed, ecoregion-scale cover map

    USGS Publications Warehouse

    Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.

    1998-01-01

Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km² in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.

  15. Frictional strengths of talc-serpentine and talc-quartz mixtures

    USGS Publications Warehouse

    Moore, Diane E.; Lockner, D.A.

    2011-01-01

Talc is a constituent of faults in a variety of settings, and it may be an effective weakening agent depending on its abundance and distribution within a fault. We conducted frictional strength experiments under hydrothermal conditions to determine the effect of talc on the strengths of synthetic gouges of lizardite and antigorite serpentinites and of quartz. Small amounts of talc weaken serpentinite gouges substantially more than predicted by simple weight averaging. In comparison, mixtures of quartz and talc show a linear trend of strength reduction at talc concentrations up to 15 wt % and enhanced weakening at higher concentrations. All of the strength data are fit by a modified version of the Reuss mixing law that allows for the dominance of one mineral over the other. The difference in the behavior of serpentinite-talc and quartz-talc mixtures at low talc concentrations is a reflection of their different textures. Lizardite, antigorite, and talc all have platy habits, and displacement within gouges composed of these minerals is localized to narrow shears along which the platy grains have rotated into alignment with the shear surfaces. The shears in the mixed phyllosilicate gouges maximize the proportion of the weaker mineral within them. When mixed with a strong, rounded mineral such as quartz, some minimum concentration of talc is needed to form connected pathways that enhance strength reductions. The typical development of talc by the reaction of Si-rich fluids with serpentinite or dolomite would tend to localize its occurrence in a natural fault and result in enhanced weakening.
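
    The contrast between simple weight averaging and a Reuss-style (harmonic) average is easy to see numerically. The sketch below uses the plain Reuss law and made-up friction coefficients; the paper's modified Reuss law, which adds a term for the dominance of one mineral, is not reproduced here:

    ```python
    def weight_average(f_talc, mu_talc, mu_host):
        """Simple linear (arithmetic) mixing of friction coefficients."""
        return f_talc * mu_talc + (1.0 - f_talc) * mu_host

    def reuss_average(f_talc, mu_talc, mu_host):
        """Reuss (harmonic) mixing: the weaker phase dominates the mixture."""
        return 1.0 / (f_talc / mu_talc + (1.0 - f_talc) / mu_host)

    # hypothetical friction coefficients, not the measured values from the paper
    mu_talc, mu_serp = 0.2, 0.5
    for f in (0.0, 0.25, 0.5, 1.0):
        print(f, weight_average(f, mu_talc, mu_serp),
              reuss_average(f, mu_talc, mu_serp))
    ```

    At intermediate talc fractions the harmonic average falls below the arithmetic one, mirroring the observation that small amounts of interconnected talc weaken a gouge more than weight averaging predicts.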

  16. Are V1 Simple Cells Optimized for Visual Occlusions? A Comparative Study

    PubMed Central

    Bornschein, Jörg; Henniges, Marc; Lücke, Jörg

    2013-01-01

Simple cells in primary visual cortex were famously found to respond to low-level image components such as edges. Sparse coding and independent component analysis (ICA) emerged as the standard computational models for simple cell coding because they linked their receptive fields to the statistics of visual stimuli. However, a salient feature of image statistics, occlusions of image components, is not considered by these models. Here we ask if occlusions have an effect on the predicted shapes of simple cell receptive fields. We use a comparative approach to answer this question and investigate two models for simple cells: a standard linear model and an occlusive model. For both models we simultaneously estimate optimal receptive fields, sparsity and stimulus noise. The two models are identical except for their component superposition assumption. We find the image encoding and receptive fields predicted by the models to differ significantly. While both models predict many Gabor-like fields, the occlusive model predicts a much sparser encoding and high percentages of ‘globular’ receptive fields. This relatively new center-surround type of simple cell response has been observed since reverse correlation came into use in experimental studies. While high percentages of ‘globular’ fields can be obtained using specific choices of sparsity and overcompleteness in linear sparse coding, no or only low proportions are reported in the vast majority of studies on linear models (including all ICA models). Likewise, for the here investigated linear model and optimal sparsity, only low proportions of ‘globular’ fields are observed. In comparison, the occlusive model robustly infers high proportions and can match the experimentally observed high proportions of ‘globular’ fields well. Our computational study, therefore, suggests that ‘globular’ fields may be evidence for an optimal encoding of visual occlusions in primary visual cortex. PMID:23754938

  17. Linear signal noise summer accurately determines and controls S/N ratio

    NASA Technical Reports Server (NTRS)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
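
    The summing principle described above, scaling the noise relative to the signal so that their power ratio is exactly known before linear summation, can be sketched numerically. A minimal illustration (function and parameter names are ours, not from the NASA report):

```python
import numpy as np

# Scale the noise relative to the signal so their power ratio equals the
# requested S/N (in dB), then sum linearly. Names and parameters are
# illustrative, not from the original report.
def mix_at_snr(signal, noise, snr_db):
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    # after scaling: 20*log10(rms(signal)/rms(scaled_noise)) == snr_db
    scaled_noise = noise * (rms(signal) / rms(noise)) / 10 ** (snr_db / 20)
    return signal + scaled_noise

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t)
mixed = mix_at_snr(sig, rng.standard_normal(t.size), snr_db=10.0)
```

    Because the scaling is exact rather than measured after the fact, the resulting S/N ratio is known to numerical precision, which is the stability property the summer is designed for.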

  18. Individual tree diameter increment model for managed even-aged stands of ponderosa pine throughout the western United States using a multilevel linear mixed effects model

    Treesearch

    Fabian C.C. Uzoh; William W. Oliver

    2008-01-01

    A diameter increment model is developed and evaluated for individual trees of ponderosa pine throughout the species range in the United States using a multilevel linear mixed model. Stochastic variability is broken down among period, locale, plot, tree and within-tree components. Covariates acting at the tree and stand levels, such as breast height diameter, density, site index...

  19. The effect of dropout on the efficiency of D-optimal designs of linear mixed models.

    PubMed

    Ortega-Azurduy, S A; Tan, F E S; Berger, M P F

    2008-06-30

    Dropout is often encountered in longitudinal data. Optimal designs will usually not remain optimal in the presence of dropout. In this paper, we study D-optimal designs for linear mixed models where dropout is encountered. Moreover, we estimate the efficiency loss in cases where a D-optimal design for complete data is chosen instead of that for data with dropout. Two types of monotonically decreasing response probability functions are investigated to describe dropout. Our results show that the location of D-optimal design points for the dropout case will shift with respect to that for the complete and uncorrelated data case. Owing to this shift, the information collected at the D-optimal design points for the complete data case does not correspond to the smallest variance. We show that the size of the displacement of the time points depends on the linear mixed model and that the efficiency loss is moderate.

  20. Linear Mixed Models: Gum and Beyond

    NASA Astrophysics Data System (ADS)

    Arendacká, Barbora; Täubner, Angelika; Eichstädt, Sascha; Bruns, Thomas; Elster, Clemens

    2014-04-01

    In Annex H.5, the Guide to the Evaluation of Uncertainty in Measurement (GUM) [1] recognizes the necessity to analyze certain types of experiments by applying random effects ANOVA models. These belong to the more general family of linear mixed models that we focus on in the current paper. Extending the short introduction provided by the GUM, our aim is to show that the more general linear mixed models cover a wider range of situations occurring in practice and can be beneficial when employed in data analysis of long-term repeated experiments. Namely, we point out their potential as an aid in establishing an uncertainty budget and as a means of gaining more insight into the measurement process. We also comment on computational issues, and to make the explanations less abstract, we illustrate all the concepts with the help of a measurement campaign conducted in order to challenge the uncertainty budget in calibration of accelerometers.
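
    A random effects ANOVA of the kind Annex H.5 describes can be fitted as a linear mixed model with off-the-shelf software. A minimal sketch using statsmodels on simulated repeated measurements (the data and variance values are invented for illustration, not from the measurement campaign):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Simulated campaign: 8 measurement days, 5 repeats per day (values invented).
days = np.repeat(np.arange(8), 5)
day_effect = rng.normal(0.0, 0.05, 8)           # between-day random effects b_i
y = 10.0 + day_effect[days] + rng.normal(0.0, 0.02, days.size)
df = pd.DataFrame({"y": y, "day": days})

# Random-intercept model y_ij = mu + b_i + e_ij (random effects ANOVA).
fit = smf.mixedlm("y ~ 1", df, groups=df["day"]).fit()
mu_hat = fit.params["Intercept"]      # estimate of the measurand mu
var_between = fit.cov_re.iloc[0, 0]   # between-day variance component
var_within = fit.scale                # repeatability (within-day) variance
```

    The two variance components feed directly into an uncertainty budget: day-to-day variability and repeatability enter the combined uncertainty of the reported mean separately.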

  1. Theoretical studies of solar oscillations

    NASA Technical Reports Server (NTRS)

    Goldreich, P.

    1980-01-01

    Possible sources for the excitation of the solar 5 minute oscillations were investigated and a linear non-adiabatic stability code was applied to a preliminary study of the solar g-modes with periods near 160 minutes. Although no definitive conclusions concerning the excitation of these modes were reached, the excitation of the 5 minute oscillations by turbulent stresses in the convection zone remains a viable possibility. Theoretical calculations do not offer much support for the identification of the 160 minute global solar oscillation (reported by several independent observers) as a solar g-mode. A significant advance was made in attempting to reconcile mixing-length theory with the results of the calculations of linearly unstable normal modes. Calculations show that in a convective envelope prepared according to mixing length theory, the only linearly unstable modes are those which correspond to the turbulent eddies which are the basic element of the heuristic mixing length theory.

  2. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    PubMed

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood estimation, a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  3. Determining major factors controlling phosphorus removal by promising adsorbents used for lake restoration: A linear mixed model approach.

    PubMed

    Funes, A; Martínez, F J; Álvarez-Manzaneda, I; Conde-Porcuna, J M; de Vicente, J; Guerrero, F; de Vicente, I

    2018-05-17

    Phosphorus (P) removal from lake/drainage waters by novel adsorbents may be affected by competitive substances naturally present in the aqueous media. To date, the effect of interfering substances has been studied mostly in simple matrices (single-factor effects) or by applying basic statistical approaches to natural lake water. In this study, we determined major factors controlling P removal efficiency in 20 aquatic ecosystems in southeast Spain by using linear mixed models (LMMs). Two non-magnetic materials (CFH-12® and Phoslock®) and two magnetic materials (hydrous lanthanum oxide loaded silica-coated magnetite (Fe-Si-La) and commercial zero-valent iron particles (FeHQ)) were tested to remove P at two adsorbent dosages. Results showed that the type of adsorbent, the adsorbent dosage and the color of the water (indicative of humic substances) are major factors controlling P removal efficiency. Differences in physico-chemical properties (i.e. surface charge or specific surface), composition and structure explain differences in maximum P adsorption capacity and in the performance of the adsorbents when competitive ions are present. The highest P removal efficiencies, independent of whether the adsorbent dosage was low or high, were 85-100% for Phoslock® and CFH-12®, 70-100% for Fe-Si-La and 0-15% for FeHQ. The low dosage of FeHQ, compared to previous studies, explained its low P removal efficiency. Although the non-magnetic materials were the most efficient, magnetic adsorbents (especially Fe-Si-La) could be proposed for P removal as they can be recovered along with the P and be reused, potentially making them more profitable over the long term. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B 61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  5. System and method for generating 3D images of non-linear properties of rock formation using surface seismic or surface to borehole seismic or both

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, Cung Khac; Nihei, Kurt Toshimi; Johnson, Paul A.

    A system and method of characterizing properties of a medium from a non-linear interaction include generating, by first and second acoustic sources disposed on a surface of the medium on a first line, first and second acoustic waves. The first and second acoustic sources are controllable such that trajectories of the first and second acoustic waves intersect in a mixing zone within the medium. The method further includes receiving, by a receiver positioned in a plane containing the first and second acoustic sources, a third acoustic wave generated by a non-linear mixing process from the first and second acoustic waves in the mixing zone; and creating a first two-dimensional image of non-linear properties or a first ratio of compressional velocity and shear velocity, or both, of the medium in a first plane generally perpendicular to the surface and containing the first line, based on the received third acoustic wave.

  6. Rapid Microfluidic Mixers Utilizing Dispersion Effect and Interactively Time-Pulsed Injection

    NASA Astrophysics Data System (ADS)

    Leong, Jik-Chang; Tsai, Chien-Hsiung; Chang, Chin-Lung; Lin, Chiu-Feng; Fu, Lung-Ming

    2007-08-01

    In this paper, we present a novel active microfluidic mixer utilizing a dispersion effect in an expansion chamber and applying interactively time-pulsed driving voltages to the respective inlet fluid flows to induce electroosmotic flow velocity variations for developing a rapid mixing effect in a microchannel. Without using any additional equipment to induce flow perturbations, only a single high-voltage power source is required for simultaneously driving and mixing sample fluids, which results in a simple and low-cost system for mixing. The effects of the applied main electrical field, interactive frequency, and expansion ratio on the mixing performance are thoroughly examined experimentally and numerically. The mixing ratio can be as high as 95% within a mixing length of 3000 μm downstream from the secondary T-form when a driving electric field strength of 250 V/cm, a periodic switching frequency of 5 Hz, and the expansion ratio M=1:10 are applied. In addition, the optimization of the driving electric field, switching frequency, expansion ratio, expansion entry length, and expansion chamber length for achieving a maximum mixing ratio is also discussed in this study. The novel method proposed in this study can be used for solving the mixing problem in the field of micro-total-analysis systems in a simple manner.

  7. Skin Diseases: Skin and Sun—Not a good mix

    MedlinePlus

    ... Current Issue Past Issues Skin Diseases Skin and Sun —Not a good mix Past Issues / Fall 2008 ... turn Javascript on. Good skin care begins with sun safety. Whether it is something as simple as ...

  8. On conforming mixed finite element methods for incompressible viscous flow problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, M. D; Nicolaides, R. A.; Peterson, J. S.

    1982-01-01

    The application of conforming mixed finite element methods to obtain approximate solutions of linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates is addressed in terms of comparisons of the approximation convergence to a smooth solution in relation to the best approximation available for the finite element space used. Consideration is also devoted to techniques for efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.

  9. A generalized interval fuzzy mixed integer programming model for a multimodal transportation problem under uncertainty

    NASA Astrophysics Data System (ADS)

    Tian, Wenli; Cao, Chengxuan

    2017-03-01

    A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.

  10. Nonlinear excitation of the ablative Rayleigh-Taylor instability for all wave numbers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H.; Betti, R.; Gopalaswamy, V.

    Small-scale perturbations in the ablative Rayleigh-Taylor instability (ARTI) are often neglected because they are linearly stable when their wavelength is shorter than a linear cutoff. Using 2D and 3D numerical simulations, it is shown that linearly stable modes of any wavelength can be destabilized. This instability regime requires finite amplitude initial perturbations and linearly stable ARTI modes are more easily destabilized in 3D than in 2D. In conclusion, it is shown that for conditions found in laser fusion targets, short wavelength ARTI modes are more efficient at driving mixing of ablated material throughout the target since the nonlinear bubble density increases with the wave number and small scale bubbles carry a larger mass flux of mixed material.

  11. Nonlinear excitation of the ablative Rayleigh-Taylor instability for all wave numbers

    DOE PAGES

    Zhang, H.; Betti, R.; Gopalaswamy, V.; ...

    2018-01-16

    Small-scale perturbations in the ablative Rayleigh-Taylor instability (ARTI) are often neglected because they are linearly stable when their wavelength is shorter than a linear cutoff. Using 2D and 3D numerical simulations, it is shown that linearly stable modes of any wavelength can be destabilized. This instability regime requires finite amplitude initial perturbations and linearly stable ARTI modes are more easily destabilized in 3D than in 2D. In conclusion, it is shown that for conditions found in laser fusion targets, short wavelength ARTI modes are more efficient at driving mixing of ablated material throughout the target since the nonlinear bubble density increases with the wave number and small scale bubbles carry a larger mass flux of mixed material.

  12. Commande optimale minimisant la consommation d'energie d'un drone utilise comme relai de communication

    NASA Astrophysics Data System (ADS)

    Mechirgui, Monia

    The purpose of this project is to implement an optimal control regulator, particularly the linear quadratic regulator, in order to control the position of an unmanned aerial vehicle known as a quadrotor. This type of UAV has a symmetrical and simple structure. Thus, its control is relatively easy compared to conventional helicopters. Optimal control can be shown to be an ideal approach for reconciling tracking performance and energy consumption. In practice, the linearity requirements are not met, but elaborations of the linear quadratic regulator have been used in many nonlinear applications with good results. The linear quadratic controller used in this thesis is presented in two forms: simple, and adapted to the state of charge of the battery. Based on the traditional structure of the linear quadratic regulator, we introduced a new criterion which relies on the state of charge of the battery, in order to optimize energy consumption. This controller is intended to track and maintain the desired trajectory during several maneuvers while minimizing energy consumption. Both the simple and the adapted linear quadratic controllers are implemented in Simulink in discrete time. The model simulates the dynamics and control of a quadrotor. Performance and stability of the system are analyzed with several tests, from simple hover to complex trajectories in closed loop.
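
    The linear quadratic regulator at the heart of this thesis solves a Riccati equation to trade off tracking error against control effort. A minimal sketch on a double-integrator stand-in for a single translational axis (the model, weights, and battery interpretation below are illustrative, not the thesis's):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double-integrator stand-in for one translational axis of a quadrotor
# (a drastic simplification; the real dynamics are nonlinear and coupled).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # state: [position, velocity]
B = np.array([[0.0],
              [1.0]])             # input: commanded acceleration
Q = np.diag([10.0, 1.0])          # state penalty (tracking performance)
R = np.array([[0.5]])             # input penalty (energy consumption)

# Solve the continuous-time algebraic Riccati equation and form the gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
```

    Raising R penalizes control effort more heavily, which is one plausible reading of the battery-adapted variant: a lower state of charge maps to a larger input penalty, hence smaller gains and lower energy use.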

  13. The roll-up and merging of coherent structures in shallow mixing layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, M. Y., E-mail: celmy@connect.ust.hk; Ghidaoui, M. S.; Kolyshkin, A. A.

    2016-09-15

    The current study seeks a fundamental explanation for the development of two-dimensional coherent structures (2DCSs) in shallow mixing layers. A nonlinear numerical model based on the depth-averaged shallow water equations is used to investigate the temporal evolution of shallow mixing layers, where the mapping from temporal to spatial results is made using the velocity at the center of the mixing layers. The flow is periodic in the streamwise direction. Transmissive boundary conditions are used at the cross-stream boundaries to prevent reflections. Numerical results are compared to linear stability analysis, mean-field theory, and secondary stability analysis. Results suggest that the onset and development of 2DCSs in shallow mixing layers are the result of a sequence of instabilities governed by linear theory, mean-field theory, and secondary stability theory. The linear instability of the shearing velocity gradient gives the onset of 2DCSs. When the perturbations reach a certain amplitude, the flow field of the perturbations changes from a wavy shape to a vortical (2DCS) structure because of nonlinearity. The development of the vortical 2DCSs does not appear to follow weakly nonlinear theory; instead, it follows mean-field theory. After the formation of 2DCSs, separate 2DCSs merge to form larger ones. In this way, 2DCSs grow and shallow mixing layers develop and grow in scale. The merging of 2DCSs in shallow mixing layers is shown to be caused by the secondary instability of the 2DCSs. Eventually 2DCSs are dissipated by bed friction. The sequence of instabilities can cause the upscaling of the turbulent kinetic energy in shallow mixing layers.

  14. Mixed-charge nanoparticles for long circulation, low reticuloendothelial system clearance, and high tumor accumulation.

    PubMed

    Liu, Xiangsheng; Li, Huan; Chen, Yangjun; Jin, Qiao; Ren, Kefeng; Ji, Jian

    2014-09-01

    Mixed-charge zwitterionic surface modification shows great potential as a simple strategy to fabricate nanoparticle (NP) surfaces that are nonfouling. Here, the in vivo fate of 16 nm mixed-charge gold nanoparticles (AuNPs), coated with mixed quaternary ammonium and sulfonic groups, is investigated. The results show that mixed-charge AuNPs have a much longer blood half-life (≈30.6 h) than do poly(ethylene glycol) (PEG, Mw = 2000)-coated AuNPs (≈6.65 h) and they accumulate in the liver and spleen far less than do the PEGylated AuNPs. Using transmission electron microscopy, it is further confirmed that the mixed-charge AuNPs have much lower uptake and different existing states in liver Kupffer cells and spleen macrophages one month after injection compared with the PEGylated AuNPs. Moreover, these mixed-charge AuNPs do not cause appreciable toxicity at the tested dose in mice over a period of 1 month, as evidenced by histological examinations. Importantly, the mixed-charge AuNPs show higher accumulation and slower clearance in tumors than do PEGylated AuNPs at times of 24-72 h. Results from this work show promise for effectively designing tumor-targeting NPs that can minimize reticuloendothelial system clearance and circulate for long periods by using a simple mixed-charge strategy. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  16. Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems

    DOE PAGES

    Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; ...

    2012-01-01

    Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
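
    The direct/iterative split that Amesos2 and Belos package up can be illustrated with SciPy stand-ins (Trilinos itself is C++; this is an analogy, not its API): a direct solver factors the matrix once and then solves exactly, while an iterative solver refines a solution until a tolerance is met.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import splu, cg

# A sparse, symmetric positive definite test matrix (diagonally dominant).
n = 200
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_direct = splu(A).solve(b)          # direct: sparse LU factorization, then solve
x_iter, info = cg(A, b, atol=1e-12)  # iterative: conjugate gradients to a tolerance
```

    The factorization can be reused for many right-hand sides, which is the direct method's strength; the iterative method needs only matrix-vector products, which is what lets a library like Belos decouple algorithms from the underlying linear algebra objects.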

  17. A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B., E-mail: friedman11@llnl.gov; Lawrence Livermore National Laboratory, Livermore, California 94550; Carter, T. A.

    2015-01-15

    Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. We define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. We test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.

  18. A non-modal analytical method to predict turbulent properties applied to the Hasegawa-Wakatani model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, B.; Carter, T. A.

    2015-01-15

    Linear eigenmode analysis often fails to describe turbulence in model systems that have non-normal linear operators and thus nonorthogonal eigenmodes, which can cause fluctuations to transiently grow faster than expected from eigenmode analysis. When combined with energetically conservative nonlinear mode mixing, transient growth can lead to sustained turbulence even in the absence of eigenmode instability. Since linear operators ultimately provide the turbulent fluctuations with energy, it is useful to define a growth rate that takes into account non-modal effects, allowing for prediction of energy injection, transport levels, and possibly even turbulent onset in the subcritical regime. Here, we define such a non-modal growth rate using a relatively simple model of the statistical effect that the nonlinearities have on cross-phases and amplitude ratios of the system state variables. In particular, we model the nonlinearities as delta-function-like, periodic forces that randomize the state variables once every eddy turnover time. Furthermore, we estimate the eddy turnover time to be the inverse of the least stable eigenmode frequency or growth rate, which allows for prediction without nonlinear numerical simulation. Also, we test this procedure on the 2D and 3D Hasegawa-Wakatani model [A. Hasegawa and M. Wakatani, Phys. Rev. Lett. 50, 682 (1983)] and find that the non-modal growth rate is a good predictor of energy injection rates, especially in the strongly non-normal, fully developed turbulence regime.

  19. A Simple Mechanism for Cooperation in the Well-Mixed Prisoner's Dilemma Game

    NASA Astrophysics Data System (ADS)

    Perc, Matjaž

    2008-11-01

    I show that the addition of Gaussian noise to the payoffs is able to stabilize cooperation in well-mixed populations, where individuals play the prisoner's dilemma game. The impact of stochasticity on the evolutionary dynamics can be expressed deterministically via a simple small-noise expansion of multiplicative noisy terms. In particular, cooperation emerges as a stable noise-induced steady state in the replicator dynamics. Due to the generality of the employed theoretical framework, presented results should prove valuable in various scientific disciplines, ranging from economy to ecology.
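
    The setting can be illustrated with a stochastic replicator simulation in which Gaussian noise is added to the payoffs of cooperators and defectors at each step. This is a rough Euler-Maruyama sketch with invented parameters, not the paper's analysis; the paper treats the stabilizing effect analytically via a small-noise expansion.

```python
import numpy as np

# Euler-Maruyama sketch of replicator dynamics for the prisoner's dilemma
# with Gaussian noise added to the payoffs. All parameter values are
# illustrative, not taken from the paper.
temptation, reward, punishment, sucker = 5.0, 3.0, 1.0, 0.0
sigma, dt, steps = 0.5, 0.01, 10_000
rng = np.random.default_rng(42)

x = 0.5  # fraction of cooperators in the well-mixed population
for _ in range(steps):
    # noisy expected payoffs to cooperators (fc) and defectors (fd)
    fc = reward * x + sucker * (1 - x) + sigma * rng.standard_normal()
    fd = temptation * x + punishment * (1 - x) + sigma * rng.standard_normal()
    x += x * (1 - x) * (fc - fd) * dt   # replicator update
    x = min(max(x, 0.0), 1.0)           # keep the frequency in [0, 1]
```

    Whether cooperation stabilizes as a noise-induced steady state depends on the noise strength and payoff values; the paper's small-noise expansion identifies the regime where it does.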

  20. Dual energy CT: How to best blend both energies in one fused image?

    NASA Astrophysics Data System (ADS)

    Eusemann, Christian; Holmes, David R., III; Schmidt, Bernhard; Flohr, Thomas G.; Robb, Richard; McCollough, Cynthia; Hough, David M.; Huprich, James E.; Wittmer, Michael; Siddiki, Hasan; Fletcher, Joel G.

    2008-03-01

    In x-ray based imaging, attenuation depends on the type of tissue scanned and the average energy level of the x-ray beam, which can be adjusted via the x-ray tube potential. Conventional computed tomography (CT) imaging uses a single kV value, usually 120kV. Dual energy CT uses two different tube potentials (e.g. 80kV & 140kV) to obtain two image datasets with different attenuation characteristics. This difference in attenuation levels allows for classification of the composition of the tissues. In addition, the different energies significantly influence the contrast resolution and noise characteristics of the two image datasets. 80kV images provide greater contrast resolution than 140kV, but are limited because of increased noise. While dual-energy CT may provide useful clinical information, the question arises as to how to best realize and visualize this benefit. In conventional single energy CT, patient image data is presented to the physicians using well understood organ specific window and level settings. Instead of viewing two data series (one for each tube potential), the images are most often fused into a single image dataset using a linear mixing of the data with a 70% 140kV and a 30% 80kV mixing ratio, as available on one commercial system. This ratio provides a reasonable representation of the anatomy/pathology; however, due to the linear nature of the blending, the advantages of each dataset (contrast or sharpness) are partially offset by its drawbacks (blurring or noise). This project evaluated a variety of organ specific linear and non-linear mixing algorithms to optimize the blending of the low and high kV information for display in a way that combines the benefits (contrast and sharpness) of both energies in a single image.
A blinded review analysis by subspecialty abdominal radiologists found that unique, tunable, non-linear mixing algorithms that we developed outperformed fixed linear mixing for a variety of different organs and pathologies of interest.
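
    The fixed 70/30 blend described above is a one-line operation per voxel, while the organ-specific non-linear alternatives reweight the two energies depending on local attenuation. A NumPy sketch, in which the sigmoid weighting is our own stand-in and not the authors' algorithm:

```python
import numpy as np

# Fixed linear blend, as on the commercial system described above:
# 70% of the 140 kV image plus 30% of the 80 kV image.
def blend_linear(img140, img80, w140=0.7):
    return w140 * img140 + (1.0 - w140) * img80

# Hypothetical non-linear mix (our stand-in, not the authors' algorithm):
# weight the high-contrast 80 kV data more strongly where attenuation is high.
def blend_sigmoid(img140, img80, center=150.0, width=50.0):
    w80 = 1.0 / (1.0 + np.exp(-(img140 - center) / width))
    return w80 * img80 + (1.0 - w80) * img140

# Toy 2x2 "images" in Hounsfield units (values invented).
img140 = np.array([[40.0, 60.0], [55.0, 80.0]])
img80 = np.array([[70.0, 120.0], [95.0, 160.0]])
fused = blend_linear(img140, img80)
fused_nl = blend_sigmoid(img140, img80)
```

    Because both blends are convex combinations per voxel, the fused values always lie between the two input values; the non-linear version simply moves the weighting around anatomy-dependent thresholds.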

  1. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  2. Superpixel Based Factor Analysis and Target Transformation Method for Martian Minerals Detection

    NASA Astrophysics Data System (ADS)

    Wu, X.; Zhang, X.; Lin, H.

    2018-04-01

    Factor analysis and target transformation (FATT) is an effective method to test for the presence of a particular mineral on the Martian surface. It has been used with both thermal infrared (Thermal Emission Spectrometer, TES) and near-infrared (Compact Reconnaissance Imaging Spectrometer for Mars, CRISM) hyperspectral data. FATT derives a set of orthogonal eigenvectors from a mixed system and typically selects the first 10 eigenvectors for a least-squares fit of the library mineral spectra. However, minerals present in only a limited number of pixels will be missed because their spectral features are weak compared with the full image signatures. Here, we propose a superpixel-based FATT method to detect mineral distributions on Mars. The simple linear iterative clustering (SLIC) algorithm was used to partition the CRISM image into multiple connected, spectrally homogeneous image regions, enhancing weak signatures by increasing their proportion in a mixed system. A least-squares fit was used in the target transformation and performed on each region iteratively. Finally, the distribution of the specific minerals in the image was obtained, where a fitting residual less than a threshold represents presence and otherwise absence. We validate our method by identifying carbonates in a well-analysed CRISM image of Nili Fossae on Mars. Our experimental results indicate that the proposed method works well on both simulated and real data sets.
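
    The target-transformation step reduces to an eigenvector decomposition followed by a least-squares fit against the library spectrum. A synthetic sketch of that core step (spectra, dimensions, and threshold are invented; a real run would use CRISM data with SLIC-derived superpixels as the regions):

```python
import numpy as np

# Synthetic sketch of the target-transformation step: least-squares fit of a
# library mineral spectrum by the leading eigenvectors of one region's spectra.
rng = np.random.default_rng(3)
bands, npix = 100, 500
library = np.sin(np.linspace(0.0, 3.0 * np.pi, bands))  # stand-in mineral spectrum
other = np.cos(np.linspace(0.0, 2.0 * np.pi, bands))    # second endmember
abund = rng.random((npix, 2))
region = abund @ np.vstack([library, other])            # linear mixing of endmembers
region += 0.001 * rng.standard_normal(region.shape)     # small instrument noise

# Eigenvectors of the region (rows of Vt); keep the first 10, as in FATT.
_, _, Vt = np.linalg.svd(region - region.mean(axis=0), full_matrices=False)
E = Vt[:10].T                                           # bands x 10 basis

coef, *_ = np.linalg.lstsq(E, library, rcond=None)
residual = np.linalg.norm(E @ coef - library) / np.linalg.norm(library)
presence = residual < 0.05  # small residual: the mineral fits the mixed system
```

    Running the fit per superpixel rather than on the full image is what raises the target mineral's proportion in each mixed system, which is the enhancement the paper relies on.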

  3. Controlled Thermoresponsive Hydrogels by Stereocomplexed PLA-PEG-PLA Prepared via Hybrid Micelles of Pre-Mixed Copolymers with Different PEG Lengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abebe, Daniel G.; Fujiwara, Tomoko

    2012-09-05

    The stereocomplexed hydrogels derived from the micelle mixture of two enantiomeric triblock copolymers, PLLA-PEG-PLLA and PDLA-PEG-PDLA, reported in 2001 exhibited a sol-to-gel transition at approximately body temperature upon heating. However, their poor storage modulus (ca. 1000 Pa) made them insufficient as injectable implant biomaterials for many applications. In this study, the mechanical properties of these hydrogels were significantly improved by modifying the molecular weights and micelle structure. Co-micelles composed of block copolymers with two PEG block lengths were shown to possess unique properties dissimilar from those of micelles composed of single-sized block copolymers. The stereomixture of PLA-PEG-PLA co-micelles showed a controllable sol-to-gel transition over a wide temperature range of 4 to 80 °C. The sol-gel phase diagram displays a linear relationship between temperature and copolymer composition; hence, a transition at body temperature can be readily achieved by adjusting the mixed copolymer ratio. The resulting thermoresponsive hydrogels exhibit a storage modulus (ca. 6000 Pa) notably higher than that of previously reported hydrogels. As a physical network governed solely by self-reorganization of micelles followed by stereocomplexation, this unique system offers practical, safe, and simple implantable biomaterials.

  4. Knowledge evolution in physics research: An analysis of bibliographic coupling networks

    PubMed Central

    Nanetti, Andrea; Cheong, Siew Ann

    2017-01-01

    Even as we advance the frontiers of physics knowledge, our understanding of how this knowledge evolves remains at the descriptive levels of Popper and Kuhn. Using the American Physical Society (APS) publications data sets, we ask in this paper how new knowledge is built upon old knowledge. We do so by constructing year-to-year bibliographic coupling networks, and identify in them validated communities that represent different research fields. We then visualize their evolutionary relationships in the form of alluvial diagrams, and show how they remain intact through APS journal splits. Quantitatively, we see that most fields undergo weak Popperian mixing, and it is rare for a field to remain isolated or to undergo strong mixing. The sizes of fields obey a simple linear growth with recombination. We can also reliably predict the merging between two fields, but not the considerably more complex splitting. Finally, we report a case study of two fields that underwent repeated merging and splitting around 1995, and how these Kuhnian events are correlated with breakthroughs on Bose-Einstein condensation (BEC), quantum teleportation, and slow light. This impact shows up quantitatively in the BEC field's citations as a larger proportion of references dating from during and shortly after these events. PMID:28922427

  5. A simple molecular mechanics integrator in mixed rigid body and dihedral angle space

    PubMed Central

    Vitalis, Andreas; Pappu, Rohit V.

    2014-01-01

    We propose a numerical scheme to integrate equations of motion in a mixed space of rigid-body and dihedral angle coordinates. The focus of the presentation is biomolecular systems and the framework is applicable to polymers with tree-like topology. By approximating the effective mass matrix as diagonal and lumping all bias torques into the time dependencies of the diagonal elements, we take advantage of the formal decoupling of individual equations of motion. We impose energy conservation independently for every degree of freedom and this is used to derive a numerical integration scheme. The cost of all auxiliary operations is linear in the number of atoms. By coupling the scheme to one of two popular thermostats, we extend the method to sample constant temperature ensembles. We demonstrate that the integrator of choice yields satisfactory stability and is free of mass-metric tensor artifacts, which is expected by construction of the algorithm. Two fundamentally different systems, viz., liquid water and an α-helical peptide in a continuum solvent are used to establish the applicability of our method to a wide range of problems. The resultant constant temperature ensembles are shown to be thermodynamically accurate. The latter relies on detailed, quantitative comparisons to data from reference sampling schemes operating on exactly the same sets of degrees of freedom. PMID:25053299

  6. How large are the consequences of covariate imbalance in cluster randomized trials: a simulation study with a continuous outcome and a binary covariate at the cluster level.

    PubMed

    Moerbeek, Mirjam; van Schie, Sander

    2016-07-11

    The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. No studies have quantified the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on the power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size, and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss of power of at most 25 % in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100 % and standard error biases up to 200 % may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods to randomize clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
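    The variance inflation that makes clustering matter can be shown with a short simulation (a hypothetical sketch of a generic cluster randomized trial, not the paper's exact design): the empirical variance of the treatment-effect estimate is compared with the naive i.i.d. formula and with the classical design effect 1 + (m − 1)ρ.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical cluster randomized trial with a continuous outcome;
# cluster-level random effects induce intraclass correlation rho.
n_clusters, m = 20, 50           # 10 clusters per arm, m subjects per cluster
rho = 0.1                        # intraclass correlation coefficient
sd_b = np.sqrt(rho)              # between-cluster sd (total variance = 1)
sd_w = np.sqrt(1 - rho)          # within-cluster sd

def one_trial():
    treat = np.repeat([0, 1], n_clusters // 2)   # treatment assigned per cluster
    cluster_effects = rng.normal(0, sd_b, n_clusters)
    y = np.repeat(cluster_effects, m) + rng.normal(0, sd_w, n_clusters * m)
    x = np.repeat(treat, m)
    return y[x == 1].mean() - y[x == 0].mean()   # treatment-effect estimate

estimates = np.array([one_trial() for _ in range(2000)])

naive_var = 4 / (n_clusters * m)        # i.i.d. formula: 2 * sigma^2 / (N/2)
design_effect = 1 + (m - 1) * rho       # expected inflation due to clustering
print(f"empirical/naive variance ratio: {estimates.var() / naive_var:.2f}")
print(f"theoretical design effect:      {design_effect:.2f}")
```

    The empirical ratio tracks the design effect, which is why the abstract notes the consequences are most severe for large clusters and small intraclass correlations, where few clusters are enrolled and a single imbalanced covariate can dominate.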

  7. No way out? The double-bind in seeking global prosperity alongside mitigated climate change

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2012-01-01

    In a prior study (Garrett, 2011), I introduced a simple economic growth model designed to be consistent with general thermodynamic laws. Unlike traditional economic models, civilization is viewed only as a well-mixed global whole with no distinction made between individual nations, economic sectors, labor, or capital investments. At the model core is a hypothesis that the global economy's current rate of primary energy consumption is tied through a constant to a very general representation of its historically accumulated wealth. Observations support this hypothesis, and indicate that the constant's value is λ = 9.7 ± 0.3 milliwatts per 1990 US dollar. It is this link that allows for treatment of seemingly complex economic systems as simple physical systems. Here, this growth model is coupled to a linear formulation for the evolution of globally well-mixed atmospheric CO2 concentrations. While very simple, the coupled model provides faithful multi-decadal hindcasts of trajectories in gross world product (GWP) and CO2. Extending the model to the future, the model suggests that the well-known IPCC SRES scenarios substantially underestimate how much CO2 levels will rise for a given level of future economic prosperity. For one, global CO2 emission rates cannot be decoupled from wealth through efficiency gains. For another, like a long-term natural disaster, future greenhouse warming can be expected to act as an inflationary drag on the real growth of global wealth. For atmospheric CO2 concentrations to remain below a "dangerous" level of 450 ppmv (Hansen et al., 2007), model forecasts suggest that there will have to be some combination of an unrealistically rapid rate of energy decarbonization and nearly immediate reductions in global civilization wealth. Effectively, it appears that civilization may be in a double-bind. 
If civilization does not collapse quickly this century, then CO2 levels will likely end up exceeding 1000 ppmv; but, if CO2 levels rise by this much, then the risk is that civilization will gradually tend towards collapse.

  8. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher-level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  9. Evaluating Treatment and Generalization Patterns of Two Theoretically Motivated Sentence Comprehension Therapies

    PubMed Central

    Des Roches, Carrie A.; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David

    2016-01-01

    Purpose The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Method Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Results Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Conclusions Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type. PMID:27997950

  10. Long-term Erythrocytapheresis Is Associated With Reduced Liver Iron Concentration in Sickle Cell Disease.

    PubMed

    Myers, Scott N; Eid, Ryan; Myers, John; Bertolone, Salvatore; Panigrahi, Arun; Mullinax, Jennifer; Raj, Ashok B

    2016-01-01

    Erythrocytapheresis procedures are increasingly used in sickle cell disease. Serum ferritin and noninvasive magnetic resonance imaging measurements of liver iron concentration (LIC) are frequently used to monitor iron overload secondary to hypertransfusion. There is a paucity of data describing the impact of long-term erythrocytapheresis (LTE) on LIC. We measured magnetic resonance imaging liver and cardiac iron on LTE subjects and stratified them into 2 groups: higher LIC (>3 mg/g) and lower LIC (<3 mg/g). χ² and t tests were used to test for differences between the 2 groups. Logistic regression and generalized linear mixed-effects models were used to test what impacted LIC. None of the 29 sickle cell disease subjects maintained on LTE had high cardiac iron concentration. LIC was associated with serum ferritin (r=0.697, P<0.001) but was not associated with the total number of LTE procedures (r=-0.088, P=0.656) or the total number of simple transfusions (r=0.316, P=0.108). The total number of LTE procedures was not associated with serum ferritin (r=0.040, P=0.838), the total number of simple transfusions (r=-0.258, P=0.184), or LIC group (r=-0.111, P=0.566). There was no significant correlation between duration of LTE maintenance and LIC.

  11. Influence of Water Saturation on Thermal Conductivity in Sandstones

    NASA Astrophysics Data System (ADS)

    Fehr, A.; Jorand, R.; Koch, A.; Clauser, C.

    2009-04-01

    Information on thermal conductivity of rocks and soils is essential in applied geothermal and hydrocarbon maturation research. In this study, we investigate the dependence of thermal conductivity on the degree of water saturation. Measurements were made on five sandstones from different outcrops in Germany. In a first step, we characterized the samples with respect to mineralogical composition, porosity, and microstructure by nuclear magnetic resonance (NMR) and mercury injection. We measured thermal conductivity with an optical scanner at different levels of water saturation. Finally, we present a simple model for the correlation of thermal conductivity and water saturation. Thermal conductivity decreases in the course of the drying of the rock. This behaviour is not linear and depends on the microstructure of the studied rock. We studied different mixing models for three phases: mineral skeleton, water and air. For argillaceous sandstones a modified arithmetic model works best, which considers the irreducible water volume and different pore sizes. For pure quartz sandstones without clay minerals, we use the same model for low water saturations, but a modified geometric model for high water saturations. A clayey sandstone rich in feldspar shows a different behaviour which cannot be explained by simple models. A better understanding will require measurements on additional samples which will help to improve the derived correlations and substantiate our findings.

  12. Dependence of Thermal Conductivity on Water Saturation of Sandstones

    NASA Astrophysics Data System (ADS)

    Fehr, A.; Jorand, R.; Koch, A.; Clauser, C.

    2008-12-01

    Information on thermal conductivity of rocks and soils is essential in applied geothermal and hydrocarbon maturation research. In this study, we investigate the dependence of thermal conductivity on the degree of water saturation. Measurements were made on five sandstones from different outcrops in Germany. In a first step, we characterized the samples with respect to mineralogical composition, porosity, and microstructure by nuclear magnetic resonance (NMR) and mercury injection. We measured thermal conductivity with an optical scanner at different levels of water saturation. Finally, we present a simple model for the correlation of thermal conductivity and water saturation. Thermal conductivity decreases in the course of the drying of the rock. This behaviour is not linear and depends on the microstructure of the studied rock. We studied different mixing models for three phases: mineral skeleton, water and air. For argillaceous sandstones a modified arithmetic model works best, which considers the irreducible water volume and different pore sizes. For pure quartz sandstones without clay minerals, we use the same model for low water saturations, but a modified geometric model for high water saturations. A clayey sandstone rich in feldspar shows a different behaviour which cannot be explained by simple models. A better understanding will require measurements on additional samples which will help to improve the derived correlations and substantiate our findings.
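    The basic arithmetic and geometric three-phase mixing models referred to above can be sketched as follows (illustrative conductivity values; the modified models in the abstract additionally account for irreducible water and pore sizes, which this sketch omits):

```python
import math

# Hypothetical three-phase mixing-model sketch for bulk thermal conductivity:
# arithmetic (parallel) and geometric means over mineral skeleton, water, air.
def arithmetic_mix(fractions, conductivities):
    """Volume-weighted arithmetic mean ('parallel' upper-bound-like model)."""
    return sum(f * k for f, k in zip(fractions, conductivities))

def geometric_mix(fractions, conductivities):
    """Volume-weighted geometric mean, common for random rock fabrics."""
    return math.prod(k ** f for f, k in zip(fractions, conductivities))

# Illustrative values (W m^-1 K^-1): quartz skeleton, water, air.
k = [7.7, 0.6, 0.026]
porosity, saturation = 0.20, 0.5          # 20 % porosity, half-saturated pores
fractions = [1 - porosity,                # solid skeleton
             porosity * saturation,       # water-filled pore space
             porosity * (1 - saturation)] # air-filled pore space

print(f"arithmetic model: {arithmetic_mix(fractions, k):.2f} W/(m K)")
print(f"geometric model:  {geometric_mix(fractions, k):.2f} W/(m K)")
```

    Evaluating both models across saturation levels reproduces the qualitative, nonlinear decrease of conductivity on drying: the geometric mean is far more sensitive to the poorly conducting air phase than the arithmetic mean.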

  13. Comparison of linear and non-linear models for predicting energy expenditure from raw accelerometer data.

    PubMed

    Montoye, Alexander H K; Begum, Munni; Henning, Zachary; Pfeiffer, Karin A

    2017-02-01

    This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r  =  0.71-0.88, RMSE: 1.11-1.61 METs; p  >  0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r  =  0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r  =  0.88, RMSE: 1.10-1.11 METs; p  >  0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r  =  0.88, RMSE: 1.12 METs. Linear models-correlations: r  =  0.86, RMSE: 1.18-1.19 METs; p  <  0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r  =  0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r  =  0.71-0.73, RMSE: 1.55-1.61 METs; p  <  0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. 
Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanna, T.; Vijayajayanthi, M.; Lakshmanan, M.

    The bright soliton solutions of the mixed coupled nonlinear Schroedinger equations with two components (2-CNLS) with linear self- and cross-coupling terms have been obtained by identifying a transformation that transforms the corresponding equation to the integrable mixed 2-CNLS equations. The study of the collision dynamics of bright solitons shows that there exists periodic energy switching due to the coupling terms. This periodic energy switching can be controlled by the new type of shape-changing collisions of bright solitons arising in a mixed 2-CNLS system, characterized by intensity redistribution, amplitude-dependent phase shift, and relative separation distance. We also point out that this system exhibits large periodic intensity switching even with very small linear self-coupling strengths.

  15. Characterizing entanglement with global and marginal entropic measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adesso, Gerardo; Illuminati, Fabrizio; De Siena, Silvio

    2003-12-01

    We qualify the entanglement of arbitrary mixed states of bipartite quantum systems by comparing global and marginal mixednesses quantified by different entropic measures. For systems of two qubits we discriminate the class of maximally entangled states with fixed marginal mixednesses, and determine an analytical upper bound relating the entanglement of formation to the marginal linear entropies. This result partially generalizes to mixed states the quantification of entanglement with marginal mixednesses holding for pure states. We identify a class of entangled states that, for fixed marginals, are globally more mixed than product states when measured by the linear entropy. Such states cannot be discriminated by the majorization criterion.

  16. Ejecta patterns of Meteor Crater, Arizona derived from the linear un-mixing of TIMS data and laboratory thermal emission spectra

    NASA Technical Reports Server (NTRS)

    Ramsey, Michael S.; Christensen, Philip R.

    1992-01-01

    Accurate interpretation of thermal infrared data depends upon the understanding and removal of complicating effects. These effects may include physical mixing of various mineralogies and particle sizes, atmospheric absorption and emission, surficial coatings, geometry effects, and differential surface temperatures. The focus here is the examination of the linear spectral mixing of individual mineral or endmember spectra. Linear addition of spectra, for particles larger than the wavelength, allows for a straightforward method of deconvolving the observed spectra, predicting a volume percent of each endmember. The "forward analysis" of linear mixing (comparing the spectra of physical mixtures to numerical mixtures) has received much attention. The reverse approach of un-mixing thermal emission spectra has been examined with remotely sensed data, but no laboratory verification exists. Understanding the effects of spectral mixing on high-resolution laboratory spectra allows extrapolation to lower-resolution, and often more complicated, remotely gathered data. Thermal Infrared Multispectral Scanner (TIMS) data for Meteor Crater, Arizona were acquired in Sep. 1987. The spectral un-mixing of these data gives a unique test of the laboratory results. Meteor Crater (1.2 km in diameter and 180 m deep) is located in north-central Arizona, west of Canyon Diablo. The arid environment, paucity of vegetation, and low relief make the region ideal for remote data acquisition. Within the horizontal sedimentary sequence that forms the upper Colorado Plateau, the oldest unit sampled by the impact crater was the Permian Coconino Sandstone. A thin bed of the Toroweap Formation, also of Permian age, conformably overlies the Coconino. Above the Toroweap lies the Permian Kaibab Limestone which, in turn, is covered by a thin veneer of the Moenkopi Formation. The Moenkopi is Triassic in age and has two distinct sub-units in the vicinity of the crater. 
The lower Wupatki member is a fine-grained sandstone, while the upper Moqui member is a fissile siltstone. Ejecta from these units are preserved as inverted stratigraphy up to 2 crater radii from the rim. The mineralogical contrast between the units, relative lack of post-emplacement erosion and ejecta mixing provide a unique site to apply the un-mixing model. Selection of the aforementioned units as endmembers reveals distinct patterns in the ejecta of the crater.
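    The linear un-mixing step can be sketched with synthetic spectra (a hypothetical illustration; the endmember labels and band count are assumptions, and real workflows typically add non-negativity and sum-to-one constraints on the recovered fractions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear-unmixing sketch: an observed emission spectrum is
# modeled as a volume-weighted sum of endmember spectra and recovered by
# least squares, as in the deconvolution described above.
n_bands = 32
endmembers = rng.random((n_bands, 4))  # columns: e.g. Coconino, Toroweap,
                                       # Kaibab, Moenkopi (illustrative only)
true_fractions = np.array([0.5, 0.3, 0.15, 0.05])
observed = endmembers @ true_fractions + 0.001 * rng.normal(size=n_bands)

# Un-mix: solve endmembers @ f ~= observed in the least-squares sense.
fractions, *_ = np.linalg.lstsq(endmembers, observed, rcond=None)
print("recovered fractions:", np.round(fractions, 2))
```

    With low noise and spectrally distinct endmembers, the least-squares solution recovers the volume fractions closely; poorly separated endmembers or nonlinear (intimate) mixing would degrade this.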

  17. Mixing Hot and Cold Water Streams at a T-Junction

    ERIC Educational Resources Information Center

    Sharp, David; Zhang, Mingqian; Xu, Zhenghe; Ryan, Jim; Wanke, Sieghard; Afacan, Artin

    2008-01-01

    A simple mixing of a hot- and cold-water stream at a T-junction was investigated. The main objective was to use mass and energy balance equations to predict mass flow rates and the temperature of the mixed stream after the T-junction, and then compare these with the measured values. Furthermore, the thermocouple location after the T-junction and…
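    The mass and energy balances described above reduce to two lines of algebra; a minimal sketch (assuming constant specific heat, so the heat capacity cancels, and no heat loss at the junction):

```python
# Hypothetical sketch of the mass and energy balances for mixing a hot and
# a cold water stream at a T-junction.
def mix_streams(m_hot, t_hot, m_cold, t_cold):
    """Return (mass flow rate, temperature) of the mixed stream."""
    m_mix = m_hot + m_cold                              # mass balance
    t_mix = (m_hot * t_hot + m_cold * t_cold) / m_mix   # energy balance
    return m_mix, t_mix

m_mix, t_mix = mix_streams(m_hot=0.2, t_hot=60.0, m_cold=0.3, t_cold=15.0)
print(f"mixed stream: {m_mix:.2f} kg/s at {t_mix:.1f} °C")
```

    The mixed temperature is simply the flow-weighted average of the inlet temperatures, which is what the measured values are compared against downstream of the junction.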

  18. An acoustofluidic micromixer based on oscillating sidewall sharp-edges.

    PubMed

    Huang, Po-Hsun; Xie, Yuliang; Ahmed, Daniel; Rufo, Joseph; Nama, Nitesh; Chen, Yuchao; Chan, Chung Yu; Huang, Tony Jun

    2013-10-07

    Rapid and homogeneous mixing inside a microfluidic channel is demonstrated via the acoustic streaming phenomenon induced by the oscillation of sidewall sharp-edges. By optimizing the design of the sharp-edges, excellent mixing performance and fast mixing speed can be achieved in a simple device, making our sharp-edge-based acoustic micromixer a promising candidate for a wide variety of applications.

  19. Microfluidic T-form mixer utilizing switching electroosmotic flow.

    PubMed

    Lin, Che-Hsin; Fu, Lung-Ming; Chien, Yu-Sheng

    2004-09-15

    This paper presents a microfluidic T-form mixer utilizing alternatively switching electroosmotic flow. The microfluidic device is fabricated on low-cost glass slides using a simple and reliable fabrication process. A switching DC field is used to generate an electroosmotic force which simultaneously drives and mixes the fluid samples. The proposed design eliminates the requirements for moving parts within the microfluidic device and delicate external control systems. Two operation modes, namely, a conventional switching mode and a novel pinched switching mode, are presented. Computer simulation is employed to predict the mixing performance attainable in both operation modes. The simulation results are then compared to those obtained experimentally. It is shown that a mixing performance as high as 97% can be achieved within a mixing distance of 1 mm downstream from the T-junction when a 60 V/cm driving voltage and a 2-Hz switching frequency are applied in the pinched switching operation mode. This study demonstrates how the driving voltage and switching frequency can be optimized to yield an enhanced mixing performance. The novel methods presented in this study provide a simple solution to mixing problems in the micro-total-analysis-systems field.

  20. Extended Mixed-Effects Item Response Models with the MH-RM Algorithm

    ERIC Educational Resources Information Center

    Chalmers, R. Philip

    2015-01-01

    A mixed-effects item response theory (IRT) model is presented as a logical extension of the generalized linear mixed-effects modeling approach to formulating explanatory IRT models. Fixed and random coefficients in the extended model are estimated using a Metropolis-Hastings Robbins-Monro (MH-RM) stochastic imputation algorithm to accommodate for…

  1. Mode Reduction and Upscaling of Reactive Transport Under Incomplete Mixing

    NASA Astrophysics Data System (ADS)

    Lester, D. R.; Bandopadhyay, A.; Dentz, M.; Le Borgne, T.

    2016-12-01

    Upscaling of chemical reactions in partially-mixed fluid environments is a challenging problem due to the detailed interactions between inherently nonlinear reaction kinetics and complex spatio-temporal concentration distributions under incomplete mixing. We address this challenge via the development of an order reduction method for the advection-diffusion-reaction equation (ADRE): the reaction kinetics are projected onto a small number N of leading eigenmodes of the advection-diffusion operator (the so-called "strange eigenmodes" of the flow) as an N-by-N nonlinear system, whilst the mixing dynamics alone are projected onto the remaining modes. For simple kinetics and moderate Péclet and Damköhler numbers, this approach yields analytic solutions for the concentration mean, the evolving spatio-temporal distribution, and the PDF in terms of the well-mixed reaction kinetics and mixing dynamics. For more complex kinetics or large Péclet or Damköhler numbers, only a small number of modes are required to accurately quantify the mixing and reaction dynamics in terms of the concentration field and PDF, facilitating greatly simplified approximation and analysis of reactive transport. Approximate solutions of this low-order nonlinear system provide quantitative predictions of the evolving concentration PDF. We demonstrate application of this method to a simple random flow and various mass-action reaction kinetics.

  2. Mixed ice accretion on aircraft wings

    NASA Astrophysics Data System (ADS)

    Janjua, Zaid A.; Turnbull, Barbara; Hibberd, Stephen; Choi, Kwing-So

    2018-02-01

    Ice accretion is a problematic natural phenomenon that affects a wide range of engineering applications including power cables, radio masts, and wind turbines. Accretion on aircraft wings occurs when supercooled water droplets freeze instantaneously on impact to form rime ice or runback as water along the wing to form glaze ice. Most models to date have ignored the accretion of mixed ice, which is a combination of rime and glaze. A parameter we term the "freezing fraction" is defined as the fraction of a supercooled droplet that freezes on impact with the top surface of the accretion ice to explore the concept of mixed ice accretion. Additionally we consider different "packing densities" of rime ice, mimicking the different bulk rime densities observed in nature. Ice accretion is considered in four stages: rime, primary mixed, secondary mixed, and glaze ice. Predictions match with existing models and experimental data in the limiting rime and glaze cases. The mixed ice formulation however provides additional insight into the composition of the overall ice structure, which ultimately influences adhesion and ice thickness, and shows that for similar atmospheric parameter ranges, this simple mixed ice description leads to very different accretion rates. A simple one-dimensional energy balance was solved to show how this freezing fraction parameter increases with decrease in atmospheric temperature, with lower freezing fraction promoting glaze ice accretion.

  3. Overshooting thunderstorm cloud top dynamics as approximated by a linear Lagrangian parcel model with analytic exact solutions

    NASA Technical Reports Server (NTRS)

    Schlesinger, Robert E.

    1990-01-01

    Results are presented from a linear Lagrangian entraining parcel model of an overshooting thunderstorm cloud top. The model, which is similar to that of Adler and Mack (1986), gives analytic exact solutions for vertical velocity and temperature by representing mixing with Rayleigh damping instead of nonlinearly. Model results are presented for various combinations of stratospheric lapse rate, drag intensity, and mixing strength. The results are compared to those of Adler and Mack.

  4. A New Linearized Crank-Nicolson Mixed Element Scheme for the Extended Fisher-Kolmogorov Equation

    PubMed Central

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with a standard finite element method and handle the other with a new mixed finite element method. In the new mixed finite element method, the gradient ∇u belongs to the weaker (L²(Ω))² space in place of the classical H(div; Ω) space. We prove some a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. At the same time, we obtain optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = −Δu, and a priori error estimates in the (L²)²-norm for its gradient χ = ∇u, for both the semidiscrete and fully discrete schemes. PMID:23864831

  5. A new linearized Crank-Nicolson mixed element scheme for the extended Fisher-Kolmogorov equation.

    PubMed

    Wang, Jinfeng; Li, Hong; He, Siriguleng; Gao, Wei; Liu, Yang

    2013-01-01

    We present a new mixed finite element method for solving the extended Fisher-Kolmogorov (EFK) equation. We first decompose the EFK equation into two second-order equations, then treat one second-order equation with a standard finite element method and the other with a new mixed finite element method. In the new mixed method, the gradient ∇u belongs to the weaker (L²(Ω))² space in place of the classical H(div; Ω) space. We prove a priori bounds for the solution of the semidiscrete scheme and derive a fully discrete mixed scheme based on a linearized Crank-Nicolson method. We also obtain optimal a priori error estimates in the L²- and H¹-norms for both the scalar unknown u and the diffusion term w = -Δu, and a priori error estimates in the (L²)²-norm for the gradient χ = ∇u, for both the semidiscrete and fully discrete schemes.

  6. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471
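
    The variance argument behind GMMAT can be stated in one line: under a logistic model the residual variance of a binary trait is p(1 − p), so it cannot be constant whenever population stratification shifts the prevalence p across subpopulations. A minimal sketch with hypothetical prevalences (the numbers are invented for illustration):

```python
# Residual variance of a Bernoulli outcome is p*(1-p): it depends on the
# fitted probability p, so the constant-residual-variance assumption of a
# linear mixed model fails when prevalence differs across strata. GMMAT's
# logistic mixed model encodes this mean-variance relationship directly.
def bernoulli_variance(p):
    return p * (1 - p)

# Hypothetical prevalences in two strata of a structured population:
print(bernoulli_variance(0.50))   # 0.25
print(bernoulli_variance(0.05))   # 0.0475
```

    The five-fold difference in residual variance between these strata is exactly the kind of heteroscedasticity that can inflate type I error when a binary trait is analyzed with an LMM.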

  7. Linear stability analysis of particle-laden hypopycnal plumes

    NASA Astrophysics Data System (ADS)

    Farenzena, Bruno Avila; Silvestrini, Jorge Hugo

    2017-12-01

    Gravity-driven riverine outflows are responsible for carrying sediments to coastal waters. The turbulent mixing in these flows is associated with shear and gravitational instabilities such as Kelvin-Helmholtz, Holmboe, and Rayleigh-Taylor. Results from a temporal linear stability analysis of a two-layer stratified flow are presented, investigating the influence of settling particles and of the mixing-region thickness on flow stability in the presence of ambient shear. The particles are considered suspended in the transport fluid, and their sedimentation is modeled with a constant settling velocity. Three scenarios regarding the mixing-region thickness were identified: a poorly mixed environment, a strongly mixed environment, and an intermediate scenario. In the first, Kelvin-Helmholtz and settling-convection modes are the two fastest-growing modes, depending on the particle settling velocity and the total Richardson number. The second scenario presents a modified Rayleigh-Taylor instability, which is the dominant mode. In the third case, the Kelvin-Helmholtz, settling-convection, or modified Rayleigh-Taylor mode can be the fastest growing, depending on the combination of parameters.

  8. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  9. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user-friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263
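
    The independence violation that motivates LMM over GLM can be demonstrated with a short simulation. This is a stdlib-only sketch, not the SPSS procedure; the cluster sizes and variance components are invented for illustration. Treating correlated within-subject observations as independent understates the standard error relative to a cluster-level analysis:

```python
import random
import statistics

random.seed(1)

# Simulate clustered data: 10 subjects, 20 repeated measurements each.
# A shared subject effect makes measurements within a subject correlated.
subjects = []
for _ in range(10):
    subject_effect = random.gauss(0.0, 2.0)   # between-subject spread
    subjects.append([subject_effect + random.gauss(0.0, 1.0) for _ in range(20)])

values = [v for obs in subjects for v in obs]

# Naive SE pretends all 200 measurements are independent ...
naive_se = statistics.stdev(values) / len(values) ** 0.5
# ... while the cluster-level SE uses one summary value per subject,
# the effective unit of replication under intra-class correlation.
subject_means = [statistics.mean(obs) for obs in subjects]
cluster_se = statistics.stdev(subject_means) / len(subject_means) ** 0.5

print(naive_se < cluster_se)   # True: the naive SE is biased downwards
```

    An LMM with a random intercept per subject recovers the cluster-level picture while still using every observation, which is why it is preferred for longitudinal designs like the six-wave data described above.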

  10. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263

  11. A Simple Piece of Apparatus to Aid the Understanding of the Relationship between Angular Velocity and Linear Velocity

    ERIC Educational Resources Information Center

    Unsal, Yasin

    2011-01-01

    One of the subjects that is confusing and difficult for students to fully comprehend is the concept of angular velocity and linear velocity. It is the relationship between linear and angular velocity that students find difficult; most students understand linear motion in isolation. In this article, we detail the design, construction and…
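
    The relationship the apparatus is built to demonstrate is v = ωr: the linear speed of a point on a rotating body is the angular velocity times the distance from the axis. A minimal numeric sketch (the rotation rate and radius below are invented for illustration):

```python
import math

# Linear speed of a point on a rotating rigid body: v = omega * r.
def linear_speed(omega_rad_s, radius_m):
    return omega_rad_s * radius_m

# A point 0.5 m from the axis of a wheel spinning at 2 revolutions/s:
omega = 2 * 2 * math.pi              # rev/s -> rad/s
print(linear_speed(omega, 0.5))      # ~6.283 m/s
```

    Doubling the radius doubles the linear speed at the same angular velocity, which is the point students most often miss when they treat linear motion in isolation.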

  12. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression, with the coefficient of determination (R2) as the primary metric of assay agreement. However, R2 alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman analysis and expanded interpretation of linear regression methods, can be used to compare data from quantitative molecular assays more thoroughly. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays prior to implementation in the clinical molecular laboratory. PMID:28747393
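
    The Bland-Altman statistics referred to above reduce to the bias (the mean of the paired differences) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch of that arithmetic, not the authors' code; the paired values below are invented stand-ins for quantitative assay outputs:

```python
import statistics

def bland_altman(x, y):
    """Bias (mean paired difference) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired quantitative calls from two assays:
bias, (lower, upper) = bland_altman([10.0, 20.0, 30.0, 40.0],
                                    [ 9.0, 19.5, 29.0, 38.5])
print(bias)   # 1.0: assay 1 reads higher on average (constant error)
```

    A nonzero bias flags constant error; a trend of the differences against the pair means (not computed here) flags proportional error, which is where the Deming regression in the paper complements the Bland-Altman view.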

  13. Tests of Parameterized Langmuir Circulation Mixing in the Oceans Surface Mixed Layer II

    DTIC Science & Technology

    2017-08-11

    ...inertial oscillations in the ocean are governed by three-dimensional processes that are not accounted for in a one-dimensional simulation, and it was... Recent large-eddy simulations (LES) of Langmuir circulation (LC) within the surface mixed layer (SML) of... used in the Navy Coastal Ocean Model (NCOM) and tested for (a) a simple wind-mixing case, (b) simulations of the upper ocean thermal structure at Ocean

  14. Menu-Driven Solver Of Linear-Programming Problems

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.; Ferencz, D.

    1992-01-01

    This program assists the inexperienced user in formulating linear-programming problems. A Linear Program Solver (ALPS) is a full-featured LP analysis program. It solves plain linear-programming problems as well as more complicated mixed-integer and pure-integer programs, and also contains an efficient technique for the solution of purely binary linear-programming problems. Written entirely in IBM's APL2/PC software, Version 1.01. The packed program contains licensed material, property of IBM (copyright 1988, all rights reserved).
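
    The abstract does not describe ALPS's "efficient technique" for binary problems, so as a hedged illustration of what a pure-binary LP is, the sketch below simply brute-forces a tiny instance (real solvers use branch-and-bound or specialized cuts instead; the coefficients are invented):

```python
from itertools import product

def solve_binary_lp(c, A, b):
    """Brute-force a tiny pure-binary LP:
    maximize c.x subject to A x <= b, with each x_i in {0, 1}."""
    best, best_x = None, None
    for x in product((0, 1), repeat=len(c)):
        feasible = all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= rhs
                       for row, rhs in zip(A, b))
        if feasible:
            val = sum(c_i * x_i for c_i, x_i in zip(c, x))
            if best is None or val > best:
                best, best_x = val, x
    return best, best_x

# A 3-variable knapsack-style instance: values 3, 4, 5; weights 2, 3, 4;
# capacity 5. Optimum picks the first two items.
print(solve_binary_lp([3, 4, 5], [[2, 3, 4]], [5]))   # (7, (1, 1, 0))
```

    Enumeration is exponential in the number of variables, which is exactly why dedicated binary-LP techniques like the one ALPS advertises matter in practice.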

  15. Reflection-mode micro-spherical fiber-optic probes for in vitro real-time and single-cell level pH sensing.

    PubMed

    Yang, Qingbo; Wang, Hanzheng; Lan, Xinwei; Cheng, Baokai; Chen, Sisi; Shi, Honglan; Xiao, Hai; Ma, Yinfa

    2015-02-01

    pH sensing at the single-cell level without adversely affecting living cells is very important but remains an open issue in biomedical studies. A 70 μm reflection-mode fiber-optic micro-pH sensor was designed and fabricated by dip-coating a thin layer of organically modified aerogel onto a tapered spherical probe head. A pH-sensitive fluorescent dye, 2',7'-bis(2-carboxyethyl)-5(6)-carboxyfluorescein (BCECF), was employed and covalently bonded within the aerogel network. By tuning the alkoxide mixing ratio and adjusting the hexamethyldisilazane (HMDS) priming procedure, the sensor can be optimized for high stability and pH sensing ability. In vitro real-time sensing capability was then demonstrated spectroscopically, showing a linear response with an average pH resolution of 0.049 pH units within a narrow but biologically meaningful pH range of 6.12-7.81. Its high spatial resolution, reflection-mode operation, fast response, high stability, and linear response within a biologically meaningful pH range make this novel pH probe a very cost-effective tool for chemical/biological sensing, especially in single-cell level research.

  16. Reflection-mode micro-spherical fiber-optic probes for in vitro real-time and single-cell level pH sensing

    PubMed Central

    Yang, Qingbo; Wang, Hanzheng; Lan, Xinwei; Cheng, Baokai; Chen, Sisi; Shi, Honglan; Xiao, Hai; Ma, Yinfa

    2014-01-01

    pH sensing at the single-cell level without adversely affecting living cells is very important but remains an open issue in biomedical studies. A 70 μm reflection-mode fiber-optic micro-pH sensor was designed and fabricated by dip-coating a thin layer of organically modified aerogel onto a tapered spherical probe head. A pH-sensitive fluorescent dye, 2′,7′-bis(2-carboxyethyl)-5(6)-carboxyfluorescein (BCECF), was employed and covalently bonded within the aerogel network. By tuning the alkoxide mixing ratio and adjusting the hexamethyldisilazane (HMDS) priming procedure, the sensor can be optimized for high stability and pH sensing ability. In vitro real-time sensing capability was then demonstrated spectroscopically, showing a linear response with an average pH resolution of 0.049 pH units within a narrow but biologically meaningful pH range of 6.12–7.81. Its high spatial resolution, reflection-mode operation, fast response, high stability, and linear response within a biologically meaningful pH range make this novel pH probe a very cost-effective tool for chemical/biological sensing, especially in single-cell level research. PMID:25530670

  17. Sensitive sub-Doppler nonlinear spectroscopy for hyperfine-structure analysis using simple atomizers

    NASA Astrophysics Data System (ADS)

    Mickadeit, Fritz K.; Kemp, Helen; Schafer, Julia; Tong, William M.

    1998-05-01

    Laser wave-mixing spectroscopy is presented as a sub-Doppler method that offers not only high spectral resolution but also excellent detection sensitivity. It offers spectral resolution suitable for hyperfine-structure analysis and isotope-ratio measurements. In a non-planar backward-scattering four-wave mixing optical configuration, two of the three input beams counter-propagate, so Doppler broadening is minimized and spectral resolution is enhanced. Since the signal is a coherent beam, optical collection is efficient and signal detection is convenient. This simple multi-photon nonlinear laser method offers unusually sensitive detection limits suitable for trace-concentration isotope analysis using a few different types of simple analytical atomizers. Reliable measurement of hyperfine structures allows effective determination of isotope ratios for chemical analysis.

  18. DIVERSE MODELS FOR SOLVING CONTRASTING OUTFALL PROBLEMS

    EPA Science Inventory

    Mixing zone initial dilution and far-field models are useful for assuring that water quality criteria will be met when specific outfall discharge criteria are applied. Presented here is a selective review of mixing zone initial dilution models and relatively simple far-field tran...

  19. Flow-rate independent gas-mixing system for drift chambers, using solenoid valves

    NASA Astrophysics Data System (ADS)

    Sugano, K.

    1991-03-01

    We describe an inexpensive system for mixing argon and ethane gas for drift chambers which was used for an experiment at Fermilab. This system is based on the idea of intermittent mixing of gases with fixed mixing flow rates. A dual-action pressure switch senses the pressure in a mixed gas reservoir tank and operates solenoid valves to control mixing action and regulate reservoir pressure. This system has the advantages that simple controls accurately regulate the mixing ratio and that the mixing ratio is nearly flow-rate independent without readjustments. We also report the results of the gas analysis of various samplings, and the reliability of the system in long-term running.

  20. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    ERIC Educational Resources Information Center

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…
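
    The LCAO principle the magnets illustrate can also be stated quantitatively: combining two identical atomic orbitals with Coulomb integral α, resonance integral β, and overlap integral S gives bonding and antibonding molecular-orbital energies (α + β)/(1 + S) and (α − β)/(1 − S). A minimal sketch; the numeric values below are illustrative, not from the article:

```python
def lcao_energies(alpha, beta, overlap):
    """Bonding and antibonding MO energies for two identical AOs
    (two-orbital secular problem: Coulomb integral alpha, resonance
    integral beta, overlap integral S)."""
    bonding = (alpha + beta) / (1 + overlap)
    antibonding = (alpha - beta) / (1 - overlap)
    return bonding, antibonding

# Illustrative values in eV (alpha = -10, beta = -4, S = 0.25). With
# nonzero overlap the antibonding level is raised more than the bonding
# level is lowered, matching the usual MO diagram asymmetry:
print(lcao_energies(-10.0, -4.0, 0.25))   # (-11.2, -8.0)
```

    With S = 0 the splitting is symmetric (α ± β); the asymmetry at S > 0 is a detail the circular-magnet demonstration cannot capture but that follows directly from the same LCAO algebra.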

  1. Simple linear and multivariate regression models.

    PubMed

    Rodríguez del Águila, M M; Benítez-Parejo, N

    2011-01-01

    In biomedical research it is common to find problems in which we wish to relate a response variable to one or more variables capable of describing the behaviour of the former variable by means of mathematical models. Regression techniques are used to this effect, in which an equation is determined relating the two variables. While such equations can have different forms, linear equations are the most widely used form and are easy to interpret. The present article describes simple and multiple linear regression models, how they are calculated, and how their applicability assumptions are checked. Illustrative examples are provided, based on the use of the freely accessible R program. Copyright © 2011 SEICAP. Published by Elsevier Espana. All rights reserved.
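
    The article's worked examples use R; the closed-form least-squares arithmetic behind a simple linear regression is the same in any language. A minimal stdlib-only sketch in Python (the data points are invented for illustration):

```python
def simple_linear_regression(x, y):
    """Least-squares fit of y = a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)               # variance term
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # covariance term
    b = sxy / sxx                                       # slope
    return my - b * mx, b                               # intercept, slope

# Perfectly linear toy data recovers intercept 0 and slope 2:
print(simple_linear_regression([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))   # (0.0, 2.0)
```

    Checking the model's applicability assumptions (linearity, homoscedastic and independent residuals) is the part of the article's workflow that no formula replaces.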

  2. Turbulence closure for mixing length theories

    NASA Astrophysics Data System (ADS)

    Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.

    2018-05-01

    We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.

  3. Stimulus sensitive gel with radioisotope and methods of making

    DOEpatents

    Weller, Richard E.; Lind, Michael A.; Fisher, Darrell R.; Gutowska, Anna; Campbell, Allison A.

    2005-03-22

    The present invention is a thermally reversible stimulus-sensitive gel or gelling copolymer radioisotope carrier that is a linear random copolymer of an [meth-]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff. Addition of a biodegradable backbone and/or a therapeutic agent imparts further utility. The method of the present invention for making a thermally reversible stimulus-sensitive gelling copolymer radionuclide carrier has the steps of: (a) mixing a stimulus-sensitive reversible gelling copolymer with an aqueous solvent as a stimulus-sensitive reversible gelling solution; and (b) mixing a radioisotope with said stimulus-sensitive reversible gelling solution as said radioisotope carrier. The gel is enhanced by either combining it with a biodegradable backbone and/or a therapeutic agent in a gelling solution made by mixing the copolymer with an aqueous solvent.

  4. Stimulus sensitive gel with radioisotope and methods of making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weller, Richard E; Lind, Michael A; Fisher, Darrell R

    2001-10-02

    The present invention is a thermally reversible stimulus-sensitive gel or gelling copolymer radioisotope carrier that is a linear random copolymer of an [meth]acrylamide derivative and a hydrophilic comonomer, wherein the linear random copolymer is in the form of a plurality of linear chains having a plurality of molecular weights greater than or equal to a minimum gelling molecular weight cutoff. Addition of a biodegradable backbone and/or a therapeutic agent imparts further utility. The method of the present invention for making a thermally reversible stimulus-sensitive gelling copolymer radionuclide carrier has the steps of: (a) mixing a stimulus-sensitive reversible gelling copolymer with an aqueous solvent as a stimulus-sensitive reversible gelling solution; and (b) mixing a radioisotope with said stimulus-sensitive reversible gelling solution as said radioisotope carrier. The gel is enhanced by either combining it with a biodegradable backbone and/or a therapeutic agent in a gelling solution made by mixing the copolymer with an aqueous solvent.

  5. Separation and reconstruction of high pressure water-jet reflective sound signal based on ICA

    NASA Astrophysics Data System (ADS)

    Yang, Hongtao; Sun, Yuling; Li, Meng; Zhang, Dongsu; Wu, Tianfeng

    2011-12-01

    The impact of a high-pressure water jet on targets of different materials produces different mixtures of reflected sound. In order to reconstruct the distribution of reflected sound signals along the linear detecting line accurately and to separate the environmental noise effectively, the mixed sound signals acquired by a linear microphone array were processed by ICA. The basic principle of ICA and the FASTICA algorithm are described in detail. An emulation experiment was designed: the environmental noise was simulated using band-limited white noise, and the reflected sound signal was simulated using a pulse signal. The signal attenuation produced by transmission over different distances was simulated by weighting the sound signal with different coefficients. The mixed sound signals acquired by the linear microphone array were synthesized from these simulated signals, then whitened and separated by ICA. The final results verified that environmental noise separation and reconstruction of the sound distribution along the detecting line can be realized effectively.
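
    FASTICA itself is beyond a short sketch, but the linear-mixing model it inverts is simple: each microphone records a weighted sum of the sources, and separation amounts to estimating an unmixing matrix. The sketch below inverts a known 2×2 mixing matrix; ICA's contribution is estimating such an unmixing matrix blindly, from statistical independence alone. The matrix and signals here are invented for illustration:

```python
# Two microphone channels m1, m2 record mixtures of two sources s1, s2:
#   m1[k] = a*s1[k] + b*s2[k],  m2[k] = c*s1[k] + d*s2[k].
# If the mixing matrix A = [[a, b], [c, d]] were known, inverting it
# would separate the sources exactly; ICA estimates the inverse blindly.
def unmix_2x2(m1, m2, A):
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - b * c
    s1 = [( d * x1 - b * x2) / det for x1, x2 in zip(m1, m2)]
    s2 = [(-c * x1 + a * x2) / det for x1, x2 in zip(m1, m2)]
    return s1, s2

# Sources s1 = [1, 0], s2 = [0, 1] mixed by A = [[2, 1], [1, 3]] give
# channels m1 = [2, 1], m2 = [1, 3]; inversion recovers the sources:
print(unmix_2x2([2.0, 1.0], [1.0, 3.0], [[2.0, 1.0], [1.0, 3.0]]))
```

    Because ICA recovers sources only up to permutation and scaling, the paper's reconstruction step must still assign the separated components to positions along the detecting line.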

  6. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    PubMed

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies, and skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions with an asymmetric distribution for the model errors. To deal with missingness, we employ an informative missing-data model. We develop joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process, and the missing-data process. To estimate the parameters of the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study and report some interesting findings. We also conduct simulation studies to validate the proposed method.

  7. High-dose nifuratel for simple and mixed aerobic vaginitis: A single-center prospective open-label cohort study.

    PubMed

    Liang, Qian; Li, Nan; Song, Shurong; Zhang, Aihua; Li, Ni; Duan, Ying

    2016-10-01

    The efficacy and safety of two nifuratel dosages for the treatment of aerobic vaginitis (AV) were compared. This was a prospective open-label cohort study of patients diagnosed and treated at the Tianjin Third Central Hospital between January 2012 and December 2013. The co-presence of bacterial vaginosis (BV), vulvovaginal candidiasis (VVC), or/and trichomonal vaginitis (TV; mixed AV) was determined. Patients were randomized to nifuratel-500 (500 mg nifuratel, intravaginal, 10 days) or nifuratel-250 (250 mg nifuratel, intravaginal, 10 days), and followed up for three to seven days after treatment completion. Primary and secondary outcomes were recovery rate and adverse events, respectively. The study included 142 patients with AV. Age was not significantly different between the groups (n = 71 each), and disease distribution was identical: 29 (40.85%) simple AV and 42 (59.15%) mixed AV (AV + BV, 42.86%; AV + VVC, 30.95%; AV + TV, 26.19%). In patients with simple AV, the recovery rate did not differ significantly between the nifuratel-500 (26/29, 89.66%) and nifuratel-250 (22/29, 75.86%) groups. In patients with mixed AV, recovery rates were significantly higher in the nifuratel-500 than in the nifuratel-250 group (AV + BV, 88.89% vs 50.00%; AV + VVC, 76.92% vs 30.77%; AV + TV, 90.91% vs 36.36%; all P < 0.05). Only one patient (nifuratel-500) reported an adverse event (mild anaphylactic reaction). Nifuratel 500 mg showed good clinical efficacy for the treatment of AV, particularly mixed AV, and is superior to the 250 mg dosage in the treatment of mixed AV. © 2016 Japan Society of Obstetrics and Gynecology.

  8. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users

    PubMed Central

    Scheperle, Rachel A.; Abbas, Paul J.

    2014-01-01

    Objectives The ability to perceive speech is related to the listener’s ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Design Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every-other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex (ACC) with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Sentence-in-Noise (BKB-SIN) test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. Results All electrophysiological measures were significantly correlated with each other and with speech perception for the mixed-model analysis, which takes into account multiple measures per person (i.e. experimental MAPs). The ECAP measures were the best predictor of speech perception. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech perception; spectral ACC amplitude was the strongest predictor. Conclusions The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be the most useful for within-subject applications, when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered. PMID:25658746

  9. BIODEGRADATION PROBABILITY PROGRAM (BIODEG)

    EPA Science Inventory

    The Biodegradation Probability Program (BIODEG) calculates the probability that a chemical under aerobic conditions with mixed cultures of microorganisms will biodegrade rapidly or slowly. It uses fragment constants developed using multiple linear and non-linear regressions and d...

  10. CFD simulation of vertical linear motion mixing in anaerobic digester tanks.

    PubMed

    Meroney, Robert N; Sheker, Robert E

    2014-09-01

    Computational fluid dynamics (CFD) was used to simulate the mixing characteristics of a small circular anaerobic digester tank (diameter 6 m) equipped sequentially with 13 different plunger-type vertical linear motion mixers and two different internal draft-tube mixers. Mixing rates for step injections of tracer were calculated, from which active volume (AV) and hydraulic retention time (HRT) could be derived. Washout characteristics were compared to analytic formulae to detect any partial mixing, dead volume, short-circuiting, or piston flow. Active volumes were also estimated from the tank regions exceeding minimum velocity criteria. The mixers were ranked using an ad hoc criterion, the ratio of active volume to unit power (AV/UP). The best plunger mixers were found to behave about the same as conventional draft-tube mixers of similar unit power.
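
    The analytic washout benchmark referred to above is the ideal continuously stirred tank, for which a step tracer injection decays as C(t) = C0 exp(−t/HRT) with HRT = V/Q; deviations from this curve indicate dead volume, short-circuiting, or piston flow. A minimal sketch of the ideal curve (the tank volume and flow rate are invented, not the paper's 6 m digester values):

```python
import math

def washout(c0, volume_m3, flow_m3_per_h, t_h):
    """Tracer washout from an ideally mixed tank: C(t) = C0 * exp(-t / HRT),
    where HRT = V / Q is the hydraulic retention time."""
    hrt = volume_m3 / flow_m3_per_h
    return c0 * math.exp(-t_h / hrt)

# A 100 m^3 tank at 10 m^3/h has HRT = 10 h; after one HRT the tracer
# concentration has fallen to 1/e of its initial value:
print(washout(1.0, 100.0, 10.0, 10.0))   # ~0.368
```

    A measured washout curve falling faster than this exponential suggests short-circuiting; one falling slower, with a long tail, suggests dead volume, which is how the CFD results are interpreted against the analytic formulae.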

  11. Strengthen forensic entomology in court--the need for data exploration and the validation of a generalised additive mixed model.

    PubMed

    Baqué, Michèle; Amendt, Jens

    2013-01-01

    Developmental data for juvenile blow flies (Diptera: Calliphoridae) are typically used to calculate the age of immature stages found on or around a corpse and thus to estimate a minimum post-mortem interval (PMI(min)). However, many of those data sets do not take into account that immature blow flies grow in a non-linear fashion. Linear models do not provide sufficiently reliable age estimates and may even lead to an erroneous determination of the PMI(min). In line with the Daubert standard and the need for improvements in forensic science, newer statistical tools such as smoothing methods and mixed models allow the modelling of non-linear relationships and expand the field of statistical analyses. The present study introduces the background and application of these statistical techniques by analysing a model that describes the development of the forensically important blow fly Calliphora vicina at different temperatures. The comparison of three statistical methods (linear regression, generalised additive modelling, and generalised additive mixed modelling) clearly demonstrates that only the latter provides regression parameters that reflect the data adequately. We focus explicitly both on the exploration of the data, to assure their quality and to show the importance of checking them carefully prior to conducting the statistical tests, and on the validation of the resulting models. Hence, we present a common method for evaluating and testing forensic entomological data sets, using generalised additive mixed models for the first time.

  12. An acoustofluidic micromixer based on oscillating sidewall sharp-edges

    PubMed Central

    Huang, Po-Hsun; Xie, Yuliang; Ahmed, Daniel; Rufo, Joseph; Nama, Nitesh; Chen, Yuchao; Chan, Chung Yu; Huang, Tony Jun

    2014-01-01

    Rapid and homogeneous mixing inside a microfluidic channel is demonstrated via the acoustic streaming phenomenon induced by the oscillation of sidewall sharp-edges. By optimizing the design of the sharp-edges, excellent mixing performance and fast mixing speed can be achieved in a simple device, making our sharp-edge-based acoustic micromixer a promising candidate for a wide variety of applications. PMID:23896797

  13. Woody Biomass Conversion to JP-8 Fuels

    DTIC Science & Technology

    2014-02-15

    Fermentation of Conditioned Extract or Brownstock to Lipids SUB 5 Mixed Culture Fermentation of Mixed-sugars in Raw extract to Mixed Acids SUB 6 TDO...avoiding the need for producing clean simple sugars through controlled hydrolysis, and detoxification in a particular case of fermentation ...according to high temperature simulated distillation (ASTM 7169) shown in Figure 5. Figure 5: Boiling point distribution data for raw TDO

  14. Application of FTA technology to extraction of sperm DNA from mixed body fluids containing semen.

    PubMed

    Fujita, Yoshihiko; Kubo, Shin-ichi

    2006-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. In this study, we report a rapid and simple method of extracting DNA from sperm when body fluids mixed with semen were collected on FTA cards. After proteinase K digestion of the sperm and body fluid mixture, the washed pellet suspension (the sperm fraction) and the concentrated supernatant (the epithelial cell fraction) were applied separately to FTA cards containing DTT. The FTA cards were dried, then added directly to a polymerase chain reaction (PCR) mix and processed by PCR. The time required from separation of the mixed fluid to extraction of sperm- and epithelial-origin DNA was only about 2.5-3 h. Furthermore, the procedure was extremely simple. We consider that the DNA extraction procedure we designed using FTA cards is suitable for routine casework.

  15. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
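    The first step of the procedure above (fit a simple linear model of suspended-sediment concentration on turbidity, then accept it only if a model-error criterion is met) can be sketched as follows. The data, the RMSE stand-in for the model standard percentage error, and the acceptance threshold are all illustrative, not the USGS criteria:

```python
import math

# Simple linear (OLS) regression of SSC on turbidity, with an
# illustrative acceptance test on the model error.
turbidity = [10.0, 20.0, 30.0, 40.0, 50.0]
ssc       = [21.0, 41.0, 61.0, 81.0, 101.0]   # exactly 2*turbidity + 1 here

n = len(turbidity)
x_bar = sum(turbidity) / n
y_bar = sum(ssc) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(turbidity, ssc))
         / sum((x - x_bar) ** 2 for x in turbidity))
intercept = y_bar - slope * x_bar

# Root-mean-square error as a stand-in for the MSPE in the abstract.
rmse = math.sqrt(sum((y - (intercept + slope * x)) ** 2
                     for x, y in zip(turbidity, ssc)) / n)

# If the error criterion fails, the procedure moves on to a multiple
# regression that adds streamflow as a second predictor.
use_simple_model = rmse <= 5.0   # illustrative threshold
```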

  16. Mixed linear-non-linear inversion of crustal deformation data: Bayesian inference of model, weighting and regularization parameters

    NASA Astrophysics Data System (ADS)

    Fukuda, Jun'ichi; Johnson, Kaj M.

    2010-06-01

    We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretic models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
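    The core mixed linear-non-linear idea (solve the linearly related parameters analytically inside a sampling loop over the non-linear ones) can be illustrated on a toy model. Here a decay scale tau is the non-linear parameter, scanned on a grid instead of sampled by Monte Carlo, and the amplitude a is the linear parameter solved by least squares; the model and values are illustrative, not the paper's geodetic problem:

```python
import math

# Synthetic noiseless data from d = a * exp(-x / tau).
x = [0.5 * i for i in range(10)]
true_a, true_tau = 3.0, 2.0
d = [true_a * math.exp(-xi / true_tau) for xi in x]

best = None
for k in range(5, 51):                 # tau grid: 0.5, 0.6, ..., 5.0
    tau = k / 10.0
    f = [math.exp(-xi / tau) for xi in x]
    # Analytical least-squares solution for the linear amplitude a.
    a = sum(fi * di for fi, di in zip(f, d)) / sum(fi * fi for fi in f)
    sse = sum((di - a * fi) ** 2 for fi, di in zip(f, d))
    if best is None or sse < best[0]:
        best = (sse, tau, a)

sse, tau_hat, a_hat = best
```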

  17. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm (namely elitism, niching, and restricted mating) do not significantly improve its scalability.
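    To make the vocabulary concrete (selection, crossover as the mixing operator, mutation, elitism), here is a toy "one-max" simple genetic algorithm. Parameter values are illustrative; elitism guarantees the best fitness never decreases between generations:

```python
import random

random.seed(1)
L, POP, GENS = 20, 30, 40        # string length, population, generations

def fitness(ind):
    return sum(ind)              # one-max: count of 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
best_initial = max(fitness(i) for i in pop)

for _ in range(GENS):
    elite = max(pop, key=fitness)
    nxt = [elite[:]]                                     # elitism
    while len(nxt) < POP:
        a, b = random.sample(pop, 2)                     # tournament
        parent1 = a if fitness(a) >= fitness(b) else b
        a, b = random.sample(pop, 2)
        parent2 = a if fitness(a) >= fitness(b) else b
        cut = random.randrange(1, L)                     # one-point crossover
        child = parent1[:cut] + parent2[cut:]            # mixes building blocks
        if random.random() < 0.1:                        # mutation
            j = random.randrange(L)
            child[j] ^= 1
        nxt.append(child)
    pop = nxt

best_final = max(fitness(i) for i in pop)
```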

  18. Environmental enrichment for a mixed-species nocturnal mammal exhibit.

    PubMed

    Clark, Fay E; Melfi, Vicky A

    2012-01-01

    Environmental enrichment (EE) is an integral aspect of modern zoo animal management, but its empirical evaluation is biased toward species housed in single-species groups. Nocturnal houses, where several nocturnal species are housed together, are particularly overlooked. This study investigated whether three species (nine-banded armadillos, Dasypus novemcinctus; Senegal bush babies, Galago senegalensis; two-toed sloths, Choloepus didactylus) in the nocturnal house at Paignton Zoo Environmental Park, UK could be enriched using food-based and sensory EE. Subjects were an adult male and female of each species. EE was deemed effective if it promoted target species-typical behaviors, behavioral diversity, and increased use of enriched exhibit zones. Results from generalized linear mixed models demonstrated that food-based EE elicited the most positive behavioral effects across species. One set of food-based EEs (Kong®, termite mound and hanging food) presented together was associated with a significant increase in species-typical behaviors, increased behavioral diversity, and increased use of enriched exhibit zones in armadillos and bush babies. Although one type of sensory EE (scented pine cones) increased overall exhibit use in all species, the other (rainforest sounds) was linked to a significant decrease in species-typical behavior in bush babies and sloths. There were no intra- or interspecies conflicts over EE, and commensalism occurred between armadillos and bush babies. Our data demonstrate that simple food-based and sensory EE can promote positive behavioral changes in a mixed-species nocturnal mammal exhibit. We suggest that both food and sensory EE presented concurrently will maximize opportunities for naturalistic activity in all species. © 2011 Wiley Periodicals, Inc.

  19. Dispersion, mode-mixing and the electron-phonon interaction in nanostructures

    NASA Astrophysics Data System (ADS)

    Dyson, A.; Ridley, B. K.

    2018-03-01

    The electron-phonon interaction with polar optical modes in nanostructures is re-examined in the light of phonon dispersion relations and the role of the Fuchs-Kliewer (FK) mode. At an interface between adjacent polar materials the frequencies of the FK mode are drawn from the dielectric constants of the adjacent materials and are significantly smaller than the corresponding frequencies of the longitudinal optic (LO) modes at the zone centre. The requirement that all polar modes satisfy mechanical and electrical boundary conditions forces the modes to become hybrids. For a hybrid to have both FK and LO components the LO mode must have the FK frequency, which can only come about through the reduction associated with phonon dispersion relations. We illustrate the effect of phonon dispersion relations on the Fröhlich interaction by considering a simple linear-chain model of the zincblende lattice. Optical and acoustic modes become mixed towards short wavelengths in both optical and acoustic branches. A study of GaAs, InP and cubic GaN and AlN shows that the polarity of the optical branch and the acousticity of the acoustic branch are reduced by dispersion in equal measures, but the effect is relatively weak. Coupling coefficients quantifying the strengths of the interaction with electrons for optical and acoustic components of mixed modes in the optical branch show that, in most cases, the polar interaction dominates the acoustic interaction, and it is reduced from the long-wavelength result towards the zone boundary by only a few percent. The effect on the lower-frequency FK mode can be large.
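    A minimal stand-in for the linear-chain picture above is the textbook diatomic chain (masses m1, m2, spring constant K, lattice constant a), whose two branches follow omega^2 = K(1/m1 + 1/m2) +/- K*sqrt((1/m1 + 1/m2)^2 - 4 sin^2(qa/2)/(m1 m2)). At the zone centre the acoustic branch vanishes and the optical branch is finite; towards the zone boundary the branches disperse, which is the dispersion effect the abstract discusses. Parameter values are illustrative:

```python
import math

def branches(q, m1=1.0, m2=2.0, K=1.0, a=1.0):
    """Return (acoustic, optical) angular frequencies of a diatomic chain."""
    s = 1.0 / m1 + 1.0 / m2
    root = math.sqrt(s * s - 4.0 * math.sin(q * a / 2.0) ** 2 / (m1 * m2))
    acoustic = math.sqrt(K * (s - root))
    optical = math.sqrt(K * (s + root))
    return acoustic, optical

# Zone centre: acoustic mode at zero frequency, optical at sqrt(2K(1/m1+1/m2)).
ac0, op0 = branches(0.0)
```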

  20. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Cliff

    2015-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  1. Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface

    NASA Technical Reports Server (NTRS)

    Brown, Clifford A.

    2016-01-01

    Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.

  2. Using crosscorrelation techniques to determine the impulse response of linear systems

    NASA Technical Reports Server (NTRS)

    Dallabetta, Michael J.; Li, Harry W.; Demuth, Howard B.

    1993-01-01

    A crosscorrelation method of measuring the impulse response of linear systems is presented. The technique, implementation, and limitations of this method are discussed. A simple system is designed and built using discrete components and the impulse response of a linear circuit is measured. Theoretical and software simulation results are presented.
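    The crosscorrelation technique can be sketched directly: drive a linear system with zero-mean white noise x, record the output y, and the input-output crosscorrelation recovers the impulse response, R_xy(k) ≈ sigma^2 h[k]. The FIR system below is made up for illustration:

```python
import random

random.seed(0)
h = [1.0, 0.5, -0.25, 0.125]               # "unknown" impulse response
N = 50000
x = [random.gauss(0.0, 1.0) for _ in range(N)]   # white noise, variance 1

# Output of the linear system: discrete convolution, zero initial state.
y = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(N)]

# Crosscorrelation estimate of h (input variance is 1 here, so no rescaling).
h_est = [sum(x[n - k] * y[n] for n in range(k, N)) / N
         for k in range(len(h))]
```

    With finite N the estimate carries statistical noise of order 1/sqrt(N), one of the limitations the paper discusses.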

  3. A Randomized, Double-Blinded, Placebo-Controlled Study to Compare the Safety and Efficacy of Low Dose Enhanced Wild Blueberry Powder and Wild Blueberry Extract (ThinkBlue™) in Maintenance of Episodic and Working Memory in Older Adults.

    PubMed

    Whyte, Adrian R; Cheng, Nancy; Fromentin, Emilie; Williams, Claire M

    2018-05-23

    Previous research has shown beneficial effects of polyphenol-rich diets in ameliorating cognitive decline in aging adults. Here, using a randomized, double blinded, placebo-controlled chronic intervention, we investigated the effect of two proprietary blueberry formulations on cognitive performance in older adults; a whole wild blueberry powder at 500 mg (WBP500) and 1000 mg (WBP1000) and a purified extract at 100 mg (WBE111). One hundred and twenty-two older adults (65-80 years) were randomly allocated to a 6-month, daily regimen of either placebo or one of the three interventions. Participants were tested at baseline, 3, and 6 months on a battery of cognitive tasks targeting episodic memory, working memory and executive function, alongside mood and cardiovascular health parameters. Linear mixed model analysis found intervention to be a significant predictor of delayed word recognition on the Rey Auditory Verbal Learning Task (RAVLT), with simple contrast analysis revealing significantly better performance following WBE111 at 3 months. Similarly, performance on the Corsi Block task was predicted by treatment, with simple contrast analysis revealing a trend for better performance at 3 months following WBE111. Treatment also significantly predicted systolic blood pressure (SBP), with simple contrast analysis revealing lower SBP following intervention with WBE111 in comparison to placebo. These results indicate that 3 months of intervention with WBE111 can facilitate better episodic memory performance in an elderly population and reduce cardiovascular risk factors over 6 months.

  4. System and method for investigating sub-surface features and 3D imaging of non-linear property, compressional velocity VP, shear velocity VS and velocity ratio VP/VS of a rock formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt

    A system and a method for generating a three-dimensional image of a rock formation, compressional velocity VP, shear velocity VS and velocity ratio VP/VS of a rock formation are provided. A first acoustic signal includes a first plurality of pulses. A second acoustic signal from a second source includes a second plurality of pulses. A detected signal returning to the borehole includes a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within an intersection volume. The received signal is processed to extract the signal over noise and/or signals resulting from linear interaction, and the three-dimensional image is generated.

  5. A simple-shear rheometer for linear viscoelastic characterization of vocal fold tissues at phonatory frequencies.

    PubMed

    Chan, Roger W; Rodriguez, Maritza L

    2008-08-01

    Previous studies reporting the linear viscoelastic shear properties of the human vocal fold cover or mucosa have been based on torsional rheometry, with measurements limited to low audio frequencies, up to around 80 Hz. This paper describes the design and validation of a custom-built, controlled-strain, linear, simple-shear rheometer system capable of direct empirical measurements of viscoelastic shear properties at phonatory frequencies. A tissue specimen was subjected to simple shear between two parallel, rigid acrylic plates, with a linear motor creating a translational sinusoidal displacement of the specimen via the upper plate, and the lower plate transmitting the harmonic shear force resulting from the viscoelastic response of the specimen. The displacement of the specimen was measured by a linear variable differential transformer whereas the shear force was detected by a piezoelectric transducer. The frequency response characteristics of these system components were assessed by vibration experiments with accelerometers. Measurements of the viscoelastic shear moduli (G' and G") of a standard ANSI S2.21 polyurethane material and those of human vocal fold cover specimens were made, along with estimation of the system signal and noise levels. Preliminary results showed that the rheometer can provide valid and reliable rheometric data of vocal fold lamina propria specimens at frequencies of up to around 250 Hz, well into the phonatory range.

  6. Dynamics and hydrodynamic mixing of reactive solutes at stable fresh-salt interfaces

    NASA Astrophysics Data System (ADS)

    van der Zee, Sjoerd E. A. T. M.; Eeman, Sara; Cirkel, Gijsbert; Leijnse, Toon

    2014-05-01

    In coastal zones with saline groundwater, but also in semi-arid regions, fresh groundwater lenses may form due to infiltration of rain water. The thickness of both the lens and the mixing zone determines fresh water availability for plant growth. Due to recharge variation, the thickness of the lens and the mixing zone are not constant, which may adversely affect agricultural and natural vegetation if saline water reaches the root zone during the growing season. A similar situation is found where groundwater is not saline but has a different chemical signature than rainwater-affected groundwater. Then also, vegetation patches and botanic biodiversity may depend sensitively on the depth of the interface between different types of groundwater. In this presentation, we study the response of thin lenses and their mixing zone to variation of recharge. The recharge is varied using sinusoids with a range of amplitudes and frequencies. We vary lens properties by varying the Rayleigh number and the mass flux ratio of saline and fresh water, as these dominate the thickness of thin lenses and their mixing zones. Numerical results show a linear relation between the normalised lens volume and the main lens and recharge characteristics, enabling an empirical approximation of the variation of lens thickness. Increasing the recharge amplitude increases, and increasing the recharge frequency decreases, the variation in lens thickness. The average lens thickness is not significantly influenced by these variations in recharge, contrary to the mixing zone thickness. The mixing zone thickness is compared to that of a Fickian mixing regime. A simple relation between the travelled distance of the centre of the mixing zone due to variations in recharge and the mixing zone thickness is shown to be valid for both a sinusoidal recharge variation and actual records of irregularly varying daily recharge data.
Starting from a step response function, convolution can be used to determine the effect of variable recharge in time. For a sinusoidal curve, we can determine delay of lens movement compared to the recharge curve as well as the lens amplitude, derived from the convolution integral. Together the proposed equations provide us with a first order approximation of lens characteristics using basic lens and recharge parameters without the use of numerical models. This enables the assessment of the vulnerability of any thin fresh water lens on saline, upward seeping groundwater to salinity stress in the root zone.
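    The superposition idea (build the response to a variable recharge series by convolving it with the step response) can be sketched discretely. With a step response s(t) = 1 - exp(-t/tau), the response increment to a unit pulse is h[k] = s[k+1] - s[k], and a constant unit recharge convolved with h must reproduce the step response itself, which the assertion checks. tau and the series are illustrative:

```python
import math

tau, nsteps = 5.0, 60
s = [1.0 - math.exp(-t / tau) for t in range(nsteps + 1)]   # step response
h = [s[k + 1] - s[k] for k in range(nsteps)]                # pulse response

recharge = [1.0] * nsteps                                   # constant input
# Convolution sum: out[n] = sum_k recharge[k] * h[n - k]
out = [sum(recharge[k] * h[n - k] for k in range(n + 1))
       for n in range(nsteps)]
```

    Replacing `recharge` with an actual daily series gives the first-order approximation of lens movement described above, without a numerical model.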

  7. A simple attitude control of quadrotor helicopter based on Ziegler-Nichols rules for tuning PD parameters.

    PubMed

    He, ZeFang; Zhao, Long

    2014-01-01

    An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that the quadrotor tends to be unstable. This problem is caused by the narrow definition domain of attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller with parameters tuned by Ziegler-Nichols rules, acting on the quadrotor's decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the attitude controller proposed in this paper is highly robust and that its control effect is better than that of two other nonlinear controllers. Those two controllers share the same nonlinear part as the proposed controller; their linear parts are a PID (proportional-integral-derivative) controller tuned by Ziegler-Nichols rules and a PD controller tuned by genetic algorithms (GA), respectively. Moreover, the proposed attitude controller is simple and easy to implement.
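    The Ziegler-Nichols closed-loop step is easy to state: find the ultimate gain Ku and ultimate period Tu at the stability boundary, then apply tabulated factors. One common PD variant sets Kp = 0.8 Ku and Td = Tu/8, with derivative gain Kd = Kp Td. The numbers below are illustrative, not the quadrotor values from the paper:

```python
def zn_pd(ku, tu):
    """Ziegler-Nichols PD tuning (common variant: Kp = 0.8*Ku, Td = Tu/8)."""
    kp = 0.8 * ku
    td = tu / 8.0
    kd = kp * td
    return kp, kd

# Illustrative ultimate gain and period from a relay or gain-sweep test.
kp, kd = zn_pd(ku=10.0, tu=2.0)
```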

  8. ALPS: A Linear Program Solver

    NASA Technical Reports Server (NTRS)

    Ferencz, Donald C.; Viterna, Larry A.

    1991-01-01

    ALPS is a computer program which can be used to solve general linear program (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC compatible computer are included in the appendices along with a general introduction to linear programming. A programmers guide is also included for assistance in modifying and maintaining the program.
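    For binary programs, enumeration over 0/1 variables is the simplest way to see what a solver like ALPS computes. The sketch below is exhaustive enumeration (implicit enumeration adds pruning that this toy omits) on a made-up knapsack-style problem: maximise c.x subject to w.x <= b with x binary:

```python
from itertools import product

c = [5, 4, 3]       # objective coefficients (illustrative)
w = [2, 3, 1]       # constraint coefficients
b = 4               # right-hand side

best_value, best_x = None, None
for x in product((0, 1), repeat=len(c)):
    if sum(wi * xi for wi, xi in zip(w, x)) <= b:      # feasibility
        value = sum(ci * xi for ci, xi in zip(c, x))
        if best_value is None or value > best_value:
            best_value, best_x = value, x
```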

  9. Triangulation 2.0

    ERIC Educational Resources Information Center

    Denzin, Norman K.

    2012-01-01

    The author's thesis is simple and direct. Those in the mixed methods qualitative inquiry community need a new story line, one that does not confuse pragmatism for triangulation, and triangulation for mixed methods research (MMR). A different third way is required, one that inspires generative politics and dialogic democracy and helps shape…

  10. EVALUATION OF MIXING ENERGY IN FLASKS USED FOR DISPERSANT EFFECTIVENESS TESTING

    EPA Science Inventory

    A U.S. Environmental Protection Agency (EPA) laboratory screening protocol for dispersant effectiveness consists of placing water, oil, and a dispersant in a flask and mixing the contents on an orbital shaker. Two flasks are being investigated, a simple Erlenmeyer (used in EPA's...

  11. Code Samples Used for Complexity and Control

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  12. Modelling melting in crustal environments, with links to natural systems in the Nepal Himalayas

    NASA Astrophysics Data System (ADS)

    Isherwood, C.; Holland, T.; Bickle, M.; Harris, N.

    2003-04-01

    Melt bodies of broadly granitic character occur frequently in mountain belts such as the Himalayan chain which exposes leucogranitic intrusions along its entire length (e.g. Le Fort, 1975). The genesis and disposition of these bodies have considerable implications for the development of tectonic evolution models for such mountain belts. However, melting processes and melt migration behaviour are influenced by many factors (Hess, 1995; Wolf & McMillan, 1995) which are as yet poorly understood. Recent improvements in internally consistent thermodynamic datasets have allowed the modelling of simple granitic melt systems (Holland & Powell, 2001) at pressures below 10 kbar, of which Himalayan leucogranites provide a good natural example. Model calculations such as these have been extended to include an asymmetrical melt-mixing model based on the Van Laar approach, which uses volumes (or pseudovolumes) for the different end-members in a mixture to control the asymmetry of non-ideal mixing. This asymmetrical formalism has been used in conjunction with several different entropy of mixing assumptions in an attempt to find the closest fit to available experimental data for melting in simple binary and ternary haplogranite systems. The extracted mixing data are extended to more complex systems and allow the construction of phase relations in NKASH necessary to model simple haplogranitic melts involving albite, K-feldspar, quartz, sillimanite and H2O. The models have been applied to real bulk composition data from Himalayan leucogranites.

  13. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
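    Both statistics are short formulas. Bland-Altman reports the bias (mean difference) and 95% limits of agreement; Deming regression (with error-variance ratio lambda = 1) estimates the slope from the variance-covariance terms, capturing proportional error that R² misses. The paired data below are illustrative; with y exactly 2x the Deming slope is exactly 2:

```python
import math

x = [1.0, 2.0, 3.0, 4.0]   # method A (illustrative)
y = [2.0, 4.0, 6.0, 8.0]   # method B
n = len(x)

# Bland-Altman: bias and 95% limits of agreement on the differences.
diffs = [yi - xi for xi, yi in zip(x, y)]
bias = sum(diffs) / n
sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

# Deming regression slope with lambda = 1.
x_bar, y_bar = sum(x) / n, sum(y) / n
sxx = sum((xi - x_bar) ** 2 for xi in x)
syy = sum((yi - y_bar) ** 2 for yi in y)
sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
slope = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
```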

  14. VENVAL : a plywood mill cost accounting program

    Treesearch

    Henry Spelter

    1991-01-01

    This report documents a package of computer programs called VENVAL. These programs prepare plywood mill data for a linear programming (LP) model that, in turn, calculates the optimum mix of products to make, given a set of technologies and market prices. (The software to solve a linear program is not provided and must be obtained separately.) Linear programming finds...

  15. A novel micromixer based on the alternating current-flow field effect transistor.

    PubMed

    Wu, Yupan; Ren, Yukun; Tao, Ye; Hou, Likai; Hu, Qingming; Jiang, Hongyuan

    2016-12-20

    Induced-charge electroosmosis (ICEO) phenomena have been attracting considerable attention as a means for pumping and mixing in microfluidic systems, with the advantages of simple structure and low energy consumption. We propose the first effort to exploit a fixed-potential ICEO flow around a floating electrode for microfluidic mixing. In analogy with the field effect transistor (FET) in microelectronics, the floating electrode acts as a "gate" electrode for generating asymmetric ICEO flow, and thus the device is called an AC-flow FET (AC-FFET). We take advantage of a tandem electrode configuration containing two biased center metal strips arranged in sequence at the bottom of the channel to generate asymmetric vortices. The device is manufactured on low-cost glass substrates via an easy and reliable process. Mixing experiments were conducted in the proposed device and compared against simulation results, indicating that the micromixer achieves efficient mixing. The mixing performance can be further enhanced by applying a suitable phase difference between the driving electrode and the gate electrode, or a square wave signal. Finally, we performed a critical analysis of the proposed micromixer in comparison with different mixer designs using a comparative mixing index (CMI). The methods put forward here offer a simple solution to mixing issues in microfluidic systems.

  16. How Darcy's equation is linked to the linear reservoir at catchment scale

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.

    2017-04-01

    In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.
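The linear reservoir referred to above relates storage linearly to discharge, S = kQ, which implies an exponential recession of baseflow. A minimal sketch under that textbook definition (the symbols q0 and k are illustrative, not taken from the paper):

```python
import math

def linear_reservoir_discharge(q0, k, t):
    """Linear reservoir: S = k*Q implies dQ/dt = -Q/k during recession,
    so discharge decays exponentially with timescale k from initial value q0."""
    return q0 * math.exp(-t / k)

# After one timescale k, discharge has dropped to 1/e of its initial value.
```

Fitting k to an observed recession curve is the standard way the linear reservoir is calibrated at the watershed scale.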

  17. The non-linear response of a muscle in transverse compression: assessment of geometry influence using a finite element model.

    PubMed

    Gras, Laure-Lise; Mitton, David; Crevier-Denoix, Nathalie; Laporte, Sébastien

    2012-01-01

    Most recent finite element models that represent muscles are generic or subject-specific models that use complex, constitutive laws. Identification of the parameters of such complex, constitutive laws could be an important limit for subject-specific approaches. The aim of this study was to assess the possibility of modelling muscle behaviour in compression with a parametric model and a simple, constitutive law. A quasi-static compression test was performed on the muscles of dogs. A parametric finite element model was designed using a linear, elastic, constitutive law. A multi-variate analysis was performed to assess the effects of geometry on muscle response. An inverse method was used to define Young's modulus. The non-linear response of the muscles was obtained using a subject-specific geometry and a linear elastic law. Thus, a simple muscle model can be used to have a bio-faithful, biomechanical response.

  18. A Simple Numerical Procedure for the Simulation of "Lifelike" Linear-Sweep Voltammograms

    NASA Astrophysics Data System (ADS)

    Bozzini, Benedetto P.

    2000-01-01

    Practical linear-sweep voltammograms seldom resemble the theoretical ones shown in textbooks. This is because several phenomena (activation, mass transport, ohmic resistance) control the kinetics over different potential ranges scanned during the potential sweep. These effects are generally treated separately in the didactic literature, yet they have never been "assembled" in a way that allows the educational use of real experiments. This makes linear-sweep voltammetric experiments almost unusable in the teaching of physical chemistry. A simple approach to the classroom description of "lifelike" experimental results is proposed in this paper. Analytical expressions of linear sweep voltammograms are provided. The actual numerical evaluations can be carried out with a pocket calculator. Two typical examples are executed and comparison with experimental data is described. This approach to teaching electrode kinetics has proved an effective tool to provide students with an insight into the effects of electrochemical parameters and operating conditions.
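One textbook way to "assemble" activation and mass-transport control into a single current-potential expression, in the spirit of (but not identical to) the paper's analytical formulas, is reciprocal addition of a kinetic branch and a diffusion-limited current (a Koutecký-Levich-type steady-state approximation; the values of i0, alpha, and i_lim below are illustrative):

```python
import math

F_OVER_RT = 38.92  # F/(RT) at 25 degrees C, in 1/V

def current(eta, i0=1e-6, alpha=0.5, i_lim=1e-3):
    """Reciprocal addition of an activation-controlled (Butler-Volmer, anodic
    branch) current and a mass-transport-limited current: 1/i = 1/i_k + 1/i_lim."""
    i_k = i0 * math.exp(alpha * F_OVER_RT * eta)
    return 1.0 / (1.0 / i_k + 1.0 / i_lim)
```

At low overpotential the kinetic term dominates; at high overpotential the current plateaus at i_lim, reproducing the mass-transport-limited region of a real curve. Ohmic resistance could be added by solving eta = E - i*R self-consistently, which is one of the effects the paper combines.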

  19. Planar micromixer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiechtner, Gregory J; Singh, Anup K; Wiedenman, Boyd J

    2008-03-18

    This patent describes a laminar-mixing embodiment that utilizes simple, three-dimensional injection. Also described is the use of the embodiment in combination with wide and shallow channel sections to effect rapid mixing in microanalytical systems. The shallow channel sections are constructed entirely with planar micromachining techniques, including those based on isotropic etching. The planar construction enables design using minimum-dispersion concepts that, in turn, enable simultaneous mixing and injection into subsequent chromatography channels.

  20. Valuation of financial models with non-linear state spaces

    NASA Astrophysics Data System (ADS)

    Webber, Nick

    2001-02-01

    A common assumption in valuation models for derivative securities is that the underlying state variables take values in a linear state space. We discuss numerical implementation issues in an interest rate model with a simple non-linear state space, formulating and comparing Monte Carlo, finite difference and lattice numerical solution methods. We conclude that, at least in low dimensional spaces, non-linear interest rate models may be viable.

  1. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using branch analysis data for 955 standard branches from 60 sampled trees in 12 sampling plots of a Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and an appropriate variance-covariance structure. Then, correlation structures including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe heteroscedasticity when building the mixed-effect model, the CF1 and CF2 functions were added to the branch mixed-effect model. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect model improves the precision of prediction, as compared with the traditional regression model, for branch size prediction in Pinus koraiensis plantations.
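The AR(1) correlation structure mentioned above assigns within-tree residuals a correlation that decays geometrically with separation: Sigma[i][j] = sigma^2 * rho^|i-j|. A minimal sketch of constructing such a matrix (an illustration of the structure itself, not of the SAS MIXED fit used in the paper):

```python
def ar1_covariance(n, sigma2, rho):
    """n x n AR(1) covariance matrix: entry (i, j) equals sigma2 * rho**|i - j|,
    so adjacent observations correlate at rho, two apart at rho**2, and so on."""
    return [[sigma2 * rho ** abs(i - j) for j in range(n)] for i in range(n)]

cov = ar1_covariance(4, 1.0, 0.5)
```

Compound symmetry (CS), by contrast, would set every off-diagonal entry to the same constant, which is why AR(1) can fit serially ordered branch measurements better.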

  2. Correcting for population structure and kinship using the linear mixed model: theory and extensions.

    PubMed

    Hoffman, Gabriel E

    2013-01-01

    Population structure and kinship are widespread confounding factors in genome-wide association studies (GWAS). It has been standard practice to include principal components of the genotypes in a regression model in order to account for population structure. More recently, the linear mixed model (LMM) has emerged as a powerful method for simultaneously accounting for population structure and kinship. The statistical theory underlying the differences in empirical performance between modeling principal components as fixed versus random effects has not been thoroughly examined. We undertake an analysis to formalize the relationship between these widely used methods and elucidate the statistical properties of each. Moreover, we introduce a new statistic, effective degrees of freedom, that serves as a metric of model complexity and a novel low rank linear mixed model (LRLMM) to learn the dimensionality of the correction for population structure and kinship, and we assess its performance through simulations. A comparison of the results of LRLMM and a standard LMM analysis applied to GWAS data from the Multi-Ethnic Study of Atherosclerosis (MESA) illustrates how our theoretical results translate into empirical properties of the mixed model. Finally, the analysis demonstrates the ability of the LRLMM to substantially boost the strength of an association for HDL cholesterol in Europeans.

  3. Mathematical modeling of the crack growth in linear elastic isotropic materials by conventional fracture mechanics approaches and by molecular dynamics method: crack propagation direction angle under mixed mode loading

    NASA Astrophysics Data System (ADS)

    Stepanova, Larisa; Bronnikov, Sergej

    2018-03-01

    The crack growth direction angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is an Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loadings. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter, ranging from pure tensile loading to pure shear loading, are obtained and analyzed over a wide range of temperatures (from 0.1 K to 800 K). It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with those given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.

  4. Ehrenfest's Lottery--Time and Entropy Maximization

    ERIC Educational Resources Information Center

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…

  5. Measurement of Refractive Index Gradients by Deflection of a Laser Beam

    ERIC Educational Resources Information Center

    Barnard, A. J.; Ahlborn, B.

    1975-01-01

    In this simple experiment for an undergraduate laboratory a laser beam is passed through the mixing zone of two liquids with different refractive indices. The spatial variation of the refractive index, at different times during the mixing, can be determined from the observed deflection of the beam. (Author)

  6. Factors affecting alcohol-water pervaporation performance of hydrophobic zeolite-silicone rubber mixed matrix membranes

    EPA Science Inventory

    Mixed matrix membranes (MMMs) consisting of ZSM-5 zeolite particles dispersed in silicone rubber exhibited ethanol-water pervaporation permselectivities up to 5 times that of silicone rubber alone and 3 times higher than simple vapor-liquid equilibrium (VLE). A number of conditi...

  7. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.

  8. A Linear Theory for Inflatable Plates of Arbitrary Shape

    NASA Technical Reports Server (NTRS)

    McComb, Harvey G., Jr.

    1961-01-01

    A linear small-deflection theory is developed for the elastic behavior of inflatable plates of which Airmat is an example. Included in the theory are the effects of a small linear taper in the depth of the plate. Solutions are presented for some simple problems in the lateral deflection and vibration of constant-depth rectangular inflatable plates.

  9. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  10. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs.

    PubMed

    Ishak, K Jack; Stolar, Marilyn; Hu, Ming-yi; Alvarez, Piedad; Wang, Yamei; Getsios, Denis; Williams, Gregory C

    2012-12-01

    Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in the early days of a stay is usually more intense, however, and thus the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, was considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike's Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control.
The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs.
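The logarithmic mapping favoured in the analysis can be sketched as follows. The coefficients a and b below are invented for illustration, not values estimated from the MA Case Mix data:

```python
import math

def per_diem_cost(los, a=3000.0, b=500.0):
    """Hypothetical logarithmic model: PD cost is highest for short stays
    and levels off gradually as LOS grows."""
    return a - b * math.log(los)

def total_cost(los, a=3000.0, b=500.0):
    """LOS-specific total cost, in contrast to the traditional approach of
    multiplying LOS by a single flat per-diem rate."""
    return los * per_diem_cost(los, a, b)
```

Under this mapping a one-day stay is costed at the full daily rate, while a ten-day stay accrues a lower daily rate, which is why derived cost savings differ from the flat per-diem calculation.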

  11. Accounting for the relationship between per diem cost and LOS when estimating hospitalization costs

    PubMed Central

    2012-01-01

    Background Hospitalization costs in clinical trials are typically derived by multiplying the length of stay (LOS) by an average per-diem (PD) cost from external sources. This assumes that PD costs are independent of LOS. Resource utilization in the early days of a stay is usually more intense, however, and thus the PD cost for a short hospitalization may be higher than for longer stays. The shape of this relationship is unlikely to be linear, as PD costs would be expected to gradually plateau. This paper describes how to model the relationship between PD cost and LOS using flexible statistical modelling techniques. Methods An example based on a clinical study of clevidipine for the treatment of peri-operative hypertension during hospitalizations for cardiac surgery is used to illustrate how inferences about cost-savings associated with good blood pressure (BP) control during the stay can be affected by the approach used to derive hospitalization costs. Data on the cost and LOS of hospitalizations for coronary artery bypass grafting (CABG) from the Massachusetts Acute Hospital Case Mix Database (the MA Case Mix Database) were analyzed to link LOS to PD cost, factoring in complications that may have occurred during the hospitalization or post-discharge. The shape of the relationship between LOS and PD costs in the MA Case Mix was explored graphically in a regression framework. A series of statistical models, ranging from those based on a simple logarithmic transformation of LOS to more flexible models using LOcally wEighted Scatterplot Smoothing (LOESS) techniques, was considered. A final model was selected, using simplicity and parsimony as guiding principles in addition to traditional fit statistics (like Akaike’s Information Criterion, or AIC). This mapping was applied in ECLIPSE to predict an LOS-specific PD cost, and then a total cost of hospitalization. These were then compared for patients who had good vs. poor peri-operative blood-pressure control.
Results The MA Case Mix dataset included data from over 10,000 patients. Visual inspection of PD vs. LOS revealed a non-linear relationship. A logarithmic model and a series of LOESS and piecewise-linear models with varying connection points were tested. The logarithmic model was ultimately favoured for its fit and simplicity. Using this mapping in the ECLIPSE trials, we found that good peri-operative BP control was associated with a cost savings of $5,366 when costs were derived using the mapping, compared with savings of $7,666 obtained using the traditional approach of calculating the cost. Conclusions PD costs vary systematically with LOS, with short stays being associated with high PD costs that drop gradually and level off. The shape of the relationship may differ in other settings. It is important to assess this and model the observed pattern, as this may have an impact on conclusions based on derived hospitalization costs. PMID:23198908

  12. Racial/Ethnic Differences in Sexual Network Mixing: A Log-Linear Analysis of HIV Status by Partnership and Sexual Behavior Among Most at-Risk MSM.

    PubMed

    Fujimoto, Kayo; Williams, Mark L

    2015-06-01

    Mixing patterns within sexual networks have been shown to have an effect on HIV transmission, both within and across groups. This study examined sexual mixing patterns involving HIV-unknown status and risky sexual behavior conditioned on assortative/dissortative mixing by race/ethnicity. The sample used for this study consisted of drug-using male sex workers and their male sex partners. A log-linear analysis of 257 most at-risk MSM and 3,072 sex partners was conducted. The analysis found two significant patterns. HIV-positive most at-risk Black MSM had a strong tendency to have HIV-unknown Black partners (relative risk, RR = 2.91, p < 0.001) and to engage in risky sexual behavior (RR = 2.22, p < 0.001). White most at-risk MSM with unknown HIV status also had a tendency to engage in risky sexual behavior with Whites (RR = 1.72, p < 0.001). The results suggest that interventions that target the most at-risk MSM and their sex partners should account for specific sexual network mixing patterns by HIV status.

  13. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models.

    PubMed

    Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong

    2016-04-07

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  14. Cosmological N-body simulations with generic hot dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  15. Cosmological N-body simulations with generic hot dark matter

    NASA Astrophysics Data System (ADS)

    Brandbyge, Jacob; Hannestad, Steen

    2017-10-01

    We have calculated the non-linear effects of generic fermionic and bosonic hot dark matter components in cosmological N-body simulations. For sub-eV masses, the non-linear power spectrum suppression caused by thermal free-streaming resembles the one seen for massive neutrinos, whereas for masses larger than 1 eV, the non-linear relative suppression of power is smaller than in linear theory. We furthermore find that in the non-linear regime, one can map fermionic to bosonic models by performing a simple transformation.

  16. A Simple Model of Cirrus Horizontal Inhomogeneity and Cloud Fraction

    NASA Technical Reports Server (NTRS)

    Smith, Samantha A.; DelGenio, Anthony D.

    1998-01-01

    A simple model of horizontal inhomogeneity and cloud fraction in cirrus clouds has been formulated on the basis that all internal horizontal inhomogeneity in the ice mixing ratio is due to variations in the cloud depth, which are assumed to be Gaussian. The use of such a model was justified by the observed relationship between the normalized variability of the ice water mixing ratio (and extinction) and the normalized variability of cloud depth. Using radar cloud depth data as input, the model reproduced well the in-cloud ice water mixing ratio histograms obtained from horizontal runs during the FIRE2 cirrus campaign. For totally overcast cases the histograms were almost Gaussian, but changed as cloud fraction decreased to exponential distributions which peaked at the lowest nonzero ice value for cloud fractions below 90%. Cloud fractions predicted by the model were always within 28% of the observed value. The predicted average ice water mixing ratios were within 34% of the observed values. This model could be used in a GCM to produce the ice mixing ratio probability distribution function and to estimate cloud fraction. It only requires basic meteorological parameters, the depth of the saturated layer and the standard deviation of cloud depth as input.
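Under the paper's assumption of Gaussian cloud-depth variability, cloud fraction is simply the probability that depth exceeds zero. This can be checked with a Monte Carlo sample against the closed form (a sketch; the parameter names are illustrative, not the model's notation):

```python
import math
import random

def cloud_fraction_mc(mean_depth, sd_depth, n=200000, seed=0):
    """Fraction of Gaussian depth samples that are positive, i.e. cloudy."""
    rng = random.Random(seed)
    return sum(rng.gauss(mean_depth, sd_depth) > 0 for _ in range(n)) / n

def cloud_fraction_exact(mean_depth, sd_depth):
    """Closed form: P(depth > 0) for a Gaussian depth distribution."""
    return 0.5 * (1.0 + math.erf(mean_depth / (sd_depth * math.sqrt(2.0))))
```

The same sampled depths, multiplied by an ice-content-per-depth relation, would give the ice mixing ratio histogram the paper compares against the FIRE2 observations.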

  17. Options for refractive index and viscosity matching to study variable density flows

    NASA Astrophysics Data System (ADS)

    Clément, Simon A.; Guillemain, Anaïs; McCleney, Amy B.; Bardet, Philippe M.

    2018-02-01

    Variable density flows are often studied by mixing two miscible aqueous solutions of different densities. To perform optical diagnostics in such environments, the refractive index of the fluids must be matched, which can be achieved by carefully choosing the two solutes and the concentrations of the solutions. To separate the effects of buoyancy forces and viscosity variations, it is desirable to match the viscosity of the two solutions in addition to their refractive index. In this manuscript, several pairs of index-matched fluids are compared in terms of viscosity matching, monetary cost, and practical use. Two fluid pairs are studied in detail, each mixing two aqueous solutions (binary solutions of water and a salt or alcohol) into a ternary solution: an aqueous solution of isopropanol mixed with an aqueous solution of sodium chloride (NaCl), and an aqueous solution of glycerol mixed with an aqueous solution of sodium sulfate (Na₂SO₄). The first fluid pair reaches high density differences at low cost, but brings a large difference in dynamic viscosity. The second allows matching dynamic viscosity and refractive index simultaneously, at reasonable cost. For each of these four solutes, the density, kinematic viscosity, and refractive index are measured versus concentration and temperature, as well as wavelength for the refractive index. To investigate non-linear effects when two index-matched binary solutions are mixed, the resulting ternary solutions are also analyzed. Results show that density and refractive index vary linearly with concentration. However, the viscosity of the isopropanol-NaCl pair deviates from the linear law and has to be considered. Empirical correlations and their coefficients are given to create index-matched fluids at a chosen temperature and wavelength.
Finally, the effectiveness of the refractive index matching is illustrated with particle image velocimetry measurements performed for a buoyant jet in a linearly stratified environment. The creation of the index-matched solutions and linear stratification in a large-scale experimental facility are detailed, as well as the practical challenges to obtain precise refractive index matching.
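Because density and refractive index vary linearly with concentration, the concentration of the second solute that matches the refractive index of the first solution follows from equating two linear laws. A sketch under that linearity assumption (the slope values below are placeholders, not the paper's measured coefficients):

```python
def matching_concentration(k1, c1, k2):
    """Assuming n = n_water + k*C for each binary aqueous solution, solve
    n_water + k1*c1 = n_water + k2*c2 for the matching concentration c2."""
    return k1 * c1 / k2

# If solute 2 raises n twice as fast per unit concentration as solute 1,
# only half the concentration is needed to match.
c2 = matching_concentration(k1=0.002, c1=10.0, k2=0.004)
```

For the isopropanol-NaCl pair the paper shows viscosity does not follow such a linear law, so a viscosity match requires using the empirical correlations rather than this shortcut.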

  18. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often match the measured time-varying age tracer concentrations well, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages obtained using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters of different ages at such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, combining water from two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry; it is therefore difficult to constrain all the parameters. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb-tritium through the aquifer, and SF6 with its steep gradient currently in the input. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
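For a decaying tracer with a constant input, the steady-state concentration under an exponential age distribution has a simple closed form, and a binary mixing model is then just a weighted sum of two such components. A sketch under these simplifying assumptions (constant input concentration; a real calibration, as in the paper, convolves the time-varying bomb-tritium input):

```python
import math

DECAY = math.log(2) / 12.32  # tritium decay constant, half-life 12.32 years

def exp_model(c_in, tau):
    """Steady-state tracer concentration for an exponential age distribution
    with mean residence time tau: c_in / (1 + lambda*tau)."""
    return c_in / (1.0 + DECAY * tau)

def binary_mix(c_in, f_young, tau_young, tau_old):
    """Binary mixing model: a fraction f_young of young water (mean age
    tau_young) mixed with old water (mean age tau_old)."""
    return (f_young * exp_model(c_in, tau_young)
            + (1.0 - f_young) * exp_model(c_in, tau_old))
```

Even in this toy form the model has more parameters than tracer observations at a single time, which illustrates why multiple tracers and time series are needed to constrain it.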

  19. Statistical quality assessment criteria for a linear mixing model with elliptical t-distribution errors

    NASA Astrophysics Data System (ADS)

    Manolakis, Dimitris G.

    2004-10-01

    The linear mixing model is widely used in hyperspectral imaging applications to model the reflectance spectra of mixed pixels in the SWIR atmospheric window or the radiance spectra of plume gases in the LWIR atmospheric window. In both cases it is important to detect the presence of materials or gases and then, if they are present, estimate their amounts. The detection and estimation algorithms available for these tasks are related but not identical. The objective of this paper is to theoretically investigate how the heavy tails observed in hyperspectral background data affect the quality of abundance estimates, and whether the F-test used for endmember selection is robust to the presence of heavy tails when the model fits the data.

  20. The Linear Mixing Approximation for Planetary Ices

    NASA Astrophysics Data System (ADS)

    Bethkenhagen, M.; Meyer, E. R.; Hamel, S.; Nettelmann, N.; French, M.; Scheibe, L.; Ticknor, C.; Collins, L. A.; Kress, J. D.; Fortney, J. J.; Redmer, R.

    2017-12-01

    We investigate the validity of the widely used linear mixing approximation for the equations of state (EOS) of planetary ices, which are thought to dominate the interior of the ice giant planets Uranus and Neptune. For that purpose we perform density functional theory molecular dynamics simulations using the VASP code [1]. In particular, we compute 1:1 binary mixtures of water, ammonia, and methane, as well as their 2:1:4 ternary mixture, at pressure-temperature conditions typical for the interior of Uranus and Neptune [2,3]. In addition, a new ab initio EOS for methane is presented. The linear mixing approximation is verified for the conditions present inside Uranus, ranging up to 10 Mbar, based on the comprehensive EOS data set. We also calculate the diffusion coefficients for the ternary mixture along different Uranus interior profiles and compare them to the values for the pure compounds. We find that deviations of the linear mixing approximation from the real mixture are generally small; for the EOS they fall within about 4% uncertainty, while the diffusion coefficients deviate by up to 20%. The EOS of planetary ices are applied to adiabatic models of Uranus. It turns out that a deep interior of almost pure ices is consistent with the gravity field data, in which case the planet becomes rather cold (T_core ≈ 4000 K). [1] G. Kresse and J. Hafner, Physical Review B 47, 558 (1993). [2] R. Redmer, T.R. Mattsson, N. Nettelmann and M. French, Icarus 211, 798 (2011). [3] N. Nettelmann, K. Wang, J. J. Fortney, S. Hamel, S. Yellamilli, M. Bethkenhagen and R. Redmer, Icarus 275, 107 (2016).
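At fixed pressure and temperature, the linear mixing approximation for an EOS amounts to combining pure-compound volumes additively. A minimal sketch of that rule (the densities used in the usage comment are placeholders, not the paper's ab initio values):

```python
def linear_mix_density(mass_fractions, densities):
    """Additive-volume (linear) mixing: 1/rho_mix = sum_i x_i / rho_i,
    for mass fractions x_i and pure-component densities rho_i at the same
    pressure and temperature."""
    return 1.0 / sum(x / rho for x, rho in zip(mass_fractions, densities))

# e.g. a 50/50 (by mass) mixture of 1.0 and 3.0 g/cm^3 components:
rho_mix = linear_mix_density([0.5, 0.5], [1.0, 3.0])
```

The paper's test is precisely how far the true (fully interacting) mixture density departs from this additive combination along Uranus interior profiles.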

  1. Quantitation of proteins using a dye-metal-based colorimetric protein assay.

    PubMed

    Antharavally, Babu S; Mallia, Krishna A; Rangaraj, Priya; Haney, Paul; Bell, Peter A

    2009-02-15

    We describe a dye-metal (polyhydroxybenzenesulfonephthalein-type dye and a transition metal) complex-based total protein determination method. The binding of the complex to protein causes a shift in the absorption maximum of the dye-metal complex from 450 to 660 nm. The dye-metal complex has a reddish brown color that changes to green on binding to protein. The color produced from this reaction is stable and increases in a proportional manner over a broad range of protein concentrations. The new Pierce 660 nm Protein Assay is very reproducible, rapid, and more linear compared with the Coomassie dye-based Bradford assay. The assay reagent is room temperature stable, and the assay is a simple and convenient mix-and-read format. The assay has a moderate protein-to-protein variation and is compatible with most detergents, reducing agents, and other commonly used reagents. This is an added advantage for researchers needing to determine protein concentrations in samples containing both detergents and reducing agents.
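Assays with a linear response like this one are read out against a standard curve: fit absorbance versus known concentration, then invert the fit for unknowns. A minimal sketch; the absorbance values are fabricated for illustration, not data from this assay:

```python
import numpy as np

conc = np.array([0.0, 250.0, 500.0, 1000.0, 2000.0])  # standards, ug/mL
a660 = np.array([0.05, 0.17, 0.29, 0.53, 1.01])       # fabricated A660 readings

slope, intercept = np.polyfit(conc, a660, 1)          # linear standard curve
unknown = (0.41 - intercept) / slope                  # back-calculate a sample
print(round(unknown, 1))
```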

  2. On a generating mechanism for Yanai waves and the 25-day oscillation

    NASA Technical Reports Server (NTRS)

    Kelly, Brian G.; Meyers, Steven D.; O'Brien, James J.

    1995-01-01

    A spectral Chebyshev-collocation method applied to the linear, 1.5 layer reduced-gravity ocean model equations is used to study the dynamics of Yanai (or mixed Rossby-gravity) wave packets. These are of interest because of the observations of equatorial instability waves (which have the characteristics of Yanai waves) and their role in the momentum and heat budgets in the tropics. A series of experiments is performed to investigate the generation of the waves by simple cross-equatorial wind stress forcings in various configurations and the influence of a western boundary on the waves. They may be generated in the interior ocean as well as from a western boundary. The observations from all the oceans indicate that the waves have a preferential period and wavelength of around 25 days and 1000 km respectively. These properties are also seen in the model results and a plausible explanation is provided as being due to the dispersive properties of Yanai waves.

  3. Sol-Gel Synthesis of Carbon Xerogel-ZnO Composite for Detection of Catechol

    PubMed Central

    Li, Dawei; Zang, Jun; Zhang, Jin; Ao, Kelong; Wang, Qingqing; Dong, Quanfeng; Wei, Qufu

    2016-01-01

Carbon xerogel-zinc oxide (CXZnO) composites were synthesized by a simple sol-gel method: condensation polymerization of a formaldehyde and resorcinol solution containing a zinc salt, followed by drying and thermal treatment. ZnO nanoparticles were observed to be evenly dispersed on the surfaces of the carbon xerogel microspheres. The as-prepared CXZnO composites were mixed with laccase (Lac) and Nafion to obtain a mixture solution, which was further deposited on an electrode surface to construct a novel biosensing platform. The resulting electrochemical biosensor was employed to detect the environmental pollutant catechol. The analytical results were satisfactory: the sensor showed excellent electrocatalysis towards catechol, with high sensitivity (31.2 µA·mM−1), a low detection limit (2.17 µM), and a wide linear range (6.91–453 µM). The biosensor also displayed favorable repeatability, reproducibility, selectivity, and stability, and was successfully used in the trace detection of catechol in lake water. PMID:28773407

  4. A method for matching the refractive index and kinematic viscosity of a blood analog for flow visualization in hydraulic cardiovascular models.

    PubMed

    Nguyen, T T; Biadillah, Y; Mongrain, R; Brunette, J; Tardif, J C; Bertrand, O F

    2004-08-01

In this work, we propose a simple method to simultaneously match the refractive index and kinematic viscosity of a circulating blood analog in hydraulic models for optical flow measurement techniques (PIV, PMFV, LDA, and LIF). The method is based on the determination of the volumetric proportions and temperature at which two transparent miscible liquids should be mixed to reproduce the targeted fluid characteristics. The temperature dependence models are a linear relation for the refractive index and an Arrhenius relation for the dynamic viscosity of each liquid. Then the dynamic viscosity of the mixture is represented with a Grunberg-Nissan model of type 1. Experimental tests for acrylic and blood viscosity were found to be in very good agreement with the targeted values (measured refractive index of 1.486 and kinematic viscosity of 3.454 mm²/s with targeted values of 1.47 and 3.300 mm²/s).
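The three model forms named in the abstract (linear refractive index in temperature, Arrhenius viscosity, Grunberg-Nissan type 1 mixing) can be sketched as follows; every coefficient below is invented for illustration, not fitted to the paper's liquids:

```python
import math

def n_linear(T, a, b):
    # Refractive index, linear in temperature (deg C); a, b are made up.
    return a - b * T

def eta_arrhenius(T, A, Ea, R=8.314):
    # Dynamic viscosity, Arrhenius in absolute temperature; A, Ea made up.
    return A * math.exp(Ea / (R * (T + 273.15)))

def eta_grunberg_nissan(x1, eta1, eta2, G12=0.0):
    # Grunberg-Nissan (type 1): log-linear blend plus one interaction term.
    x2 = 1.0 - x1
    return math.exp(x1 * math.log(eta1) + x2 * math.log(eta2) + x1 * x2 * G12)

# With G12 = 0 the mixing rule reduces to a geometric mean of the viscosities.
print(round(eta_grunberg_nissan(0.5, 2.0, 8.0), 6))
print(round(n_linear(25.0, 1.50, 2e-4), 4))
```

The paper's method then searches composition and temperature so that both mixture properties simultaneously hit their targets.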

  5. Temperature-dependent microindentation data of an epoxy composition in the glassy region

    NASA Astrophysics Data System (ADS)

    Minster, Jiří; Králík, Vlastimil

    2015-02-01

The short-term instrumented microindentation technique was applied for assessing the influence of temperature in the glassy region on the time-dependent mechanical properties of an average epoxy resin mix near its native state. Linear viscoelasticity theory, with the assumption of a time-independent Poisson ratio, forms the basis for processing the experimental results. A sharp standard Berkovich indenter was used to measure the local mechanical properties at temperatures of 20, 24, 28, and 35 °C. The short-term viscoelastic compliance histories were described by the Kohlrausch-Williams-Watts stretched exponential function. The findings suggest that depth-sensing indentation data of thermorheologically simple materials at different temperatures in the glassy region can also be used, through time-temperature superposition, to extract viscoelastic response functions accurately. This statement is supported by the comparison of the viscoelastic compliance master curve of the tested material with data derived from standard macro creep measurements under pressure on the material in a conformable state.

  6. Precombination Cloud Collapse and Baryonic Dark Matter

    NASA Technical Reports Server (NTRS)

    Hogan, Craig J.

    1993-01-01

A simple spherical model of dense baryon clouds in the hot big bang ('strongly nonlinear primordial isocurvature baryon fluctuations') is reviewed and used to describe the dependence of cloud behavior on the model parameters: baryon mass and initial over-density. Gravitational collapse of clouds before and during recombination is considered, including radiation diffusion and trapping, remnant type and mass, and effects on linear large-scale fluctuation modes. Sufficiently dense clouds collapse early into black holes with a minimum mass of approx. 1 solar mass, which behave dynamically like collisionless cold dark matter. Clouds below a critical over-density, however, delay collapse until recombination, remaining until then dynamically coupled to the radiation like ordinary diffuse baryons, and possibly producing remnants of other kinds and lower mass. The mean density in either type of baryonic remnant is unconstrained by observed element abundances. However, mixed or unmixed spatial variations in abundance may survive in the diffuse baryons and produce observable departures from standard predictions.

  7. Determination of polychlorinated biphenyls in milk samples by saponification-solid-phase microextraction.

    PubMed

    Llompart, M; Pazos, M; Landin, P; Cela, R

    2001-12-15

A saponification-HSSPME procedure has been developed for the extraction of PCBs from milk samples. Saponification of the samples improves the PCB extraction efficiency and lowers the background. A mixed-level fractional factorial design has been used to optimize the sample preparation process. Five variables have been considered: extraction time, agitation, type of microextraction fiber, and the concentration and volume of the NaOH aqueous solution. The kinetics of the process have also been studied with the two fibers (100-microm PDMS and 65-microm PDMS-DVB) included in this study. Analyses were performed on a gas chromatograph equipped with an electron capture detector and on a gas chromatograph coupled to a mass selective detector working in MS-MS mode. The proposed method is simple and rapid, and yields high sensitivity (detection limits below 1 ng/mL), good linearity, and good reproducibility. The method has been applied to liquid milk samples with different fat contents covering the whole commercial range, and it has been validated with a powdered milk certified reference material.

  8. A cohesive granular material with tunable elasticity

    PubMed Central

    Hemmerle, Arnaud; Schröter, Matthias; Goehring, Lucas

    2016-01-01

    By mixing glass beads with a curable polymer we create a well-defined cohesive granular medium, held together by solidified, and hence elastic, capillary bridges. This material has a geometry similar to a wet packing of beads, but with an additional control over the elasticity of the bonds holding the particles together. We show that its mechanical response can be varied over several orders of magnitude by adjusting the size and stiffness of the bridges, and the size of the particles. We also investigate its mechanism of failure under unconfined uniaxial compression in combination with in situ x-ray microtomography. We show that a broad linear-elastic regime ends at a limiting strain of about 8%, whatever the stiffness of the agglomerate, which corresponds to the beginning of shear failure. The possibility to finely tune the stiffness, size and shape of this simple material makes it an ideal model system for investigations on, for example, fracturing of porous rocks, seismology, or root growth in cohesive porous media. PMID:27774988

  9. A Data Matrix Method for Improving the Quantification of Element Percentages of SEM/EDX Analysis

    NASA Technical Reports Server (NTRS)

    Lane, John

    2009-01-01

A simple 2D M × N matrix approach to sample preparation enables the microanalyst to peer below the noise floor of element percentages reported by SEM/EDX (scanning electron microscopy/energy dispersive x-ray) analysis, thus yielding more meaningful data. Using the example of a 2 × 3 sample set, there are M = 2 concentration levels of the original mix under test: 10 percent ilmenite (90 percent silica) and 20 percent ilmenite (80 percent silica). For each of these M samples, N = 3 separate SEM/EDX samples were drawn. In this test, ilmenite is the element of interest. By plotting the linear trend of each of the M samples' known concentrations versus the average of the N measurements, a much higher resolution of elemental analysis can be performed. The resulting trend also shows how the noise is affecting the data, and at what point (at smaller concentrations) it becomes impractical to try to extract any further useful data.
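The linear-trend step can be sketched as follows; the replicate readings below are invented stand-ins for SEM/EDX data:

```python
import numpy as np

# M = 2 known concentrations, N = 3 replicate SEM/EDX readings each
# (illustrative noisy numbers around the prepared values).
known = np.array([10.0, 20.0])                   # percent ilmenite prepared
readings = np.array([[8.9, 10.4, 9.8],           # N replicates at 10 percent
                     [19.1, 20.6, 19.9]])        # N replicates at 20 percent

means = readings.mean(axis=1)                    # average over the N replicates
slope, intercept = np.polyfit(known, means, 1)   # linear trend
print(round(slope, 3), round(intercept, 3))
```

A slope near 1 and intercept near 0 indicate unbiased readings; the scatter of the replicates around the trend shows where noise makes smaller concentrations impractical to resolve.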

  10. Quantification of urinary zwitterionic organic acids using weak-anion exchange chromatography with tandem MS detection.

    PubMed

    Bishop, Michael Jason; Crow, Brian S; Kovalcik, Kasey D; George, Joe; Bralley, James A

    2007-04-01

A rapid and accurate quantitative method was developed and validated for the analysis of four urinary organic acids with nitrogen-containing functional groups, formiminoglutamic acid (FIGLU), pyroglutamic acid (PYRGLU), 5-hydroxyindoleacetic acid (5-HIAA), and 2-methylhippuric acid (2-METHIP), by liquid chromatography tandem mass spectrometry (LC/MS/MS). The chromatography was developed using a weak anion-exchange amino column that provided mixed-mode retention of the analytes. The elution gradient relied on changes in mobile phase pH over a concave gradient, without the use of counter-ions or concentrated salt buffers. A simple sample preparation was used, requiring only the dilution of urine prior to instrumental analysis. The method was validated based on linearity (r2>or=0.995), accuracy (85-115%), precision (C.V.<12%), sample preparation stability (

  11. A cohesive granular material with tunable elasticity.

    PubMed

    Hemmerle, Arnaud; Schröter, Matthias; Goehring, Lucas

    2016-10-24

    By mixing glass beads with a curable polymer we create a well-defined cohesive granular medium, held together by solidified, and hence elastic, capillary bridges. This material has a geometry similar to a wet packing of beads, but with an additional control over the elasticity of the bonds holding the particles together. We show that its mechanical response can be varied over several orders of magnitude by adjusting the size and stiffness of the bridges, and the size of the particles. We also investigate its mechanism of failure under unconfined uniaxial compression in combination with in situ x-ray microtomography. We show that a broad linear-elastic regime ends at a limiting strain of about 8%, whatever the stiffness of the agglomerate, which corresponds to the beginning of shear failure. The possibility to finely tune the stiffness, size and shape of this simple material makes it an ideal model system for investigations on, for example, fracturing of porous rocks, seismology, or root growth in cohesive porous media.

  12. Collective Behavior of Place and Non-place Neurons in the Hippocampal Network.

    PubMed

    Meshulam, Leenoy; Gauthier, Jeffrey L; Brody, Carlos D; Tank, David W; Bialek, William

    2017-12-06

    Discussions of the hippocampus often focus on place cells, but many neurons are not place cells in any given environment. Here we describe the collective activity in such mixed populations, treating place and non-place cells on the same footing. We start with optical imaging experiments on CA1 in mice as they run along a virtual linear track and use maximum entropy methods to approximate the distribution of patterns of activity in the population, matching the correlations between pairs of cells but otherwise assuming as little structure as possible. We find that these simple models accurately predict the activity of each neuron from the state of all the other neurons in the network, regardless of how well that neuron codes for position. Our results suggest that understanding the neural activity may require not only knowledge of the external variables modulating it but also of the internal network state. Copyright © 2017 Elsevier Inc. All rights reserved.
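The pairwise maximum entropy model used here is the Ising form P(s) ∝ exp(h·s + s·J·s/2). A minimal sketch of predicting one cell's activity from all the others; the parameters h and J below are invented (in the paper they are fitted so the model matches the measured means and pairwise correlations, a step omitted here):

```python
import numpy as np

# Pairwise maximum entropy (Ising) model with s_i in {-1, +1}.
h = np.array([-0.2, 0.1, -0.4])          # made-up biases
J = np.array([[ 0.0, 0.5, -0.1],         # made-up symmetric couplings
              [ 0.5, 0.0,  0.3],
              [-0.1, 0.3,  0.0]])

def p_active(i, s):
    # P(s_i = +1 | all other neurons): logistic in the local field.
    field = h[i] + J[i] @ s - J[i, i] * s[i]   # exclude any self-coupling
    return 1.0 / (1.0 + np.exp(-2.0 * field))

s = np.array([1.0, -1.0, 1.0])           # one population activity pattern
print(round(p_active(0, s), 3))
```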

  13. Infrared spectra of seeded hydrogen clusters: (para-H2)N-N2O and (ortho-H2)N-N2O, N = 2-13.

    PubMed

    Tang, Jian; McKellar, A R W

    2005-09-15

High-resolution infrared spectra of clusters containing para-H2 and/or ortho-H2 and a single nitrous oxide molecule are studied in the 2225 cm⁻¹ region of the ν1 fundamental band of N2O. The clusters are formed in pulsed supersonic jet expansions from a cooled nozzle and probed using a tunable infrared diode laser spectrometer. The simple symmetric rotor-type spectra generally show no resolved K structure, with prominent Q-branch features for ortho-H2 but not para-H2 clusters. The observed vibrational shifts and rotational constants are reported. There is no obvious indication of superfluid effects for para-H2 clusters up to N=13. Sharp transitions due to even larger clusters are observed, but no definite assignments are possible. Mixed (para-H2)N-(ortho-H2)M-N2O cluster line positions can be well predicted by linear interpolation between the corresponding transitions of the pure clusters.
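The interpolation rule for mixed-cluster line positions can be sketched as a composition-weighted average of the pure-cluster transitions of the same total size; the wavenumbers below are placeholders, not the reported N2O cluster lines:

```python
# Linear interpolation between pure (para-H2)_{N+M} and (ortho-H2)_{N+M}
# cluster transitions, weighted by composition. Placeholder wavenumbers.
def mixed_line(n_para, n_ortho, nu_pure_para, nu_pure_ortho):
    total = n_para + n_ortho
    return (n_para * nu_pure_para + n_ortho * nu_pure_ortho) / total

nu = mixed_line(3, 1, 2224.10, 2224.50)   # a (para)3(ortho)1 cluster, cm^-1
print(round(nu, 2))
```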

  14. Switchable dual-wavelength SOA-based fiber laser with continuous tunability over the C-band at room-temperature.

    PubMed

    Ummy, M A; Madamopoulos, N; Razani, M; Hossain, A; Dorsinville, R

    2012-10-08

We propose and demonstrate a simple, compact, inexpensive, SOA-based, dual-wavelength tunable fiber laser that can potentially be used for photoconductive mixing and generation of waves in the microwave and THz regions. A C-band semiconductor optical amplifier (SOA) is placed inside a linear cavity with Sagnac loop mirrors at either end, which act as both reflectors and output ports. The selection of the dual wavelengths and the tunability of the wavelength difference (Δλ) between them are accomplished by placing a narrow-bandwidth (e.g., 0.3 nm) tunable thin-film-based filter and a fiber Bragg grating (with a bandwidth of 0.28 nm) inside the loop mirror that operates as the output port. A total output power of +6.9 dBm for the two wavelengths is measured, and the potential for higher output powers is discussed. Optical power and wavelength stability are measured at 0.33 dB and 0.014 nm, respectively.

  15. Heteroleptic metallosupramolecular racks, rectangles, and trigonal prisms: stoichiometry-controlled reversible interconversion.

    PubMed

    Neogi, Subhadip; Lorenz, Yvonne; Engeser, Marianne; Samanta, Debabrata; Schmittel, Michael

    2013-06-17

A simple approach toward the preparation of heteroleptic two-dimensional (2D) rectangles and three-dimensional (3D) triangular prisms is described utilizing the HETPYP (HETeroleptic PYridyl and Phenanthroline metal complexes) concept. By mixing metal-loaded linear bisphenanthrolines of varying lengths with diverse (multi)pyridine (py) ligands in the proper ratio, six different self-assembled architectures arise cleanly and spontaneously in the absence of any template. They are characterized by ¹H and DOSY NMR and ESI-FT-ICR mass spectrometry, as well as by Job plots and UV-vis titrations. Density functional theory (DFT) computations provide information about each structure. A stoichiometry-controlled supramolecule-to-supramolecule interconversion based on the relative amounts of metal bisphenanthroline and bipyridine forces the rectangular assembly to reorganize to a rack architecture and back to the rectangle, as clearly supported by variable-temperature and DOSY NMR as well as dynamic light scattering data. The highly dynamic nature of the assemblies represents a promising starting point for constitutional dynamic materials.

  16. Determination of the wine preservative sulphur dioxide with cyclic voltammetry using inkjet printed electrodes.

    PubMed

    Schneider, Marion; Türke, Alexander; Fischer, Wolf-Joachim; Kilmartin, Paul A

    2014-09-15

During winemaking, sulphur dioxide is added to prevent undesirable reactions. However, concerns over the harmful effects of sulphites have led to legal limits being placed upon such additives. There is thus a need for simple and selective determinations of sulphur dioxide in wine, especially during winemaking. The simultaneous detection of polyphenols and sulphur dioxide using cyclic voltammetry at inert electrodes is challenging due to their close oxidation potentials. In the present study, inkjet printed electrodes were developed with a suitable voltammetric signal, on which polyphenol oxidation is suppressed and the oxidation peak height for sulphur dioxide corresponds linearly to its concentration. Different types of working electrodes were printed. Electrodes consisting of gold nanoparticles mixed with silver showed the highest sensitivity towards sulphur dioxide. Low-cost production of the sensor elements and ultra-fast determination of sulphur dioxide by cyclic voltammetry make this technique very promising for the wine industry. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Method for universal detection of two-photon polarization entanglement

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Horodecki, Paweł; Lemr, Karel; Miranowicz, Adam; Życzkowski, Karol

    2015-03-01

    Detecting and quantifying quantum entanglement of a given unknown state poses problems that are fundamentally important for quantum information processing. Surprisingly, no direct (i.e., without quantum tomography) universal experimental implementation of a necessary and sufficient test of entanglement has been designed even for a general two-qubit state. Here we propose an experimental method for detecting a collective universal witness, which is a necessary and sufficient test of two-photon polarization entanglement. It allows us to detect entanglement for any two-qubit mixed state and to establish tight upper and lower bounds on its amount. A different element of this method is the sequential character of its main components, which allows us to obtain relatively complicated information about quantum correlations with the help of simple linear-optical elements. As such, this proposal realizes a universal two-qubit entanglement test within the present state of the art of quantum optics. We show the optimality of our setup with respect to the minimal number of measured quantities.

  18. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    PubMed

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food; addressing this would help in implementation and in ensuring the feasibility of the suggested recommendations. Our objective was to extend the previously used linear programming methodology from diet optimization to food formulation by adding consistency constraints, and to exemplify its usability in the case of a porridge mix formulated for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on the swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
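A constrained linear program of this kind can be sketched with SciPy's linprog; the ingredients, nutrient contents, costs, and the consistency bound below are all invented toy numbers, not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

# Toy formulation: grams of 3 hypothetical ingredients per serving.
cost = np.array([0.002, 0.010, 0.050])        # currency units per gram

# Nutrient content per gram: rows = [energy kcal, protein g, zinc mg].
nutrients = np.array([[3.5, 2.0, 1.0],
                      [0.10, 0.25, 0.02],
                      [0.01, 0.02, 0.30]])
minimums = np.array([300.0, 10.0, 3.0])       # per-serving requirements

# Consistency constraint: the starch-rich ingredient is capped at 60 g so
# the porridge stays spoonable (a stand-in for the swelling-based bound).
A_ub = np.vstack([-nutrients, [1.0, 0.0, 0.0]])
b_ub = np.hstack([-minimums, 60.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.success)
```

Minimizing cost subject to nutrient minimums (the negated rows turn "at least" into linprog's "at most" convention) plus the consistency cap mirrors the structure of the extended method.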

  19. A new simple form of quark mixing matrix

    NASA Astrophysics Data System (ADS)

    Qin, Nan; Ma, Bo-Qiang

    2011-01-01

Although different parametrizations of the quark mixing matrix are mathematically equivalent, the consequences for experimental analysis may be distinct. Based on the triminimal expansion of the Kobayashi-Maskawa matrix around the unit matrix, we propose a new simple parametrization. Compared with the Wolfenstein parametrization, we find that the new form is not only consistent with the original one in its hierarchical structure, but also more convenient for numerical analysis and for measurement of the CP-violating phase. By discussing the relation between our new form and the unitarity boomerang, we point out that, along with the unitarity boomerang, this new parametrization is useful in hunting for new physics.

  20. Proxy case mix measures for nursing homes.

    PubMed

    Cyr, A B

    1983-01-01

    Nursing home case mix measures are needed for the same purposes that spurred the intensive development of case mix measures for hospitals: management and planning decisions, organizational performance research, and reimbursement policy analysis. This paper develops and validates a pair of complementary measures that are simple to compute, are easy to interpret, and use generally available data. They are not, however, definitive. A secondary purpose of this paper is thus to galvanize the development of data bases that will give rise to superior case mix measures for nursing homes.

  1. Guidance for the utility of linear models in meta-analysis of genetic association studies of binary phenotypes.

    PubMed

    Cook, James P; Mahajan, Anubha; Morris, Andrew P

    2017-02-01

    Linear mixed models are increasingly used for the analysis of genome-wide association studies (GWAS) of binary phenotypes because they can efficiently and robustly account for population stratification and relatedness through inclusion of random effects for a genetic relationship matrix. However, the utility of linear (mixed) models in the context of meta-analysis of GWAS of binary phenotypes has not been previously explored. In this investigation, we present simulations to compare the performance of linear and logistic regression models under alternative weighting schemes in a fixed-effects meta-analysis framework, considering designs that incorporate variable case-control imbalance, confounding factors and population stratification. Our results demonstrate that linear models can be used for meta-analysis of GWAS of binary phenotypes, without loss of power, even in the presence of extreme case-control imbalance, provided that one of the following schemes is used: (i) effective sample size weighting of Z-scores or (ii) inverse-variance weighting of allelic effect sizes after conversion onto the log-odds scale. Our conclusions thus provide essential recommendations for the development of robust protocols for meta-analysis of binary phenotypes with linear models.
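Scheme (i), effective-sample-size weighting of Z-scores, can be sketched as follows; the study Z-scores and case/control counts are invented:

```python
import numpy as np

# Effective sample size for a case-control study:
# N_eff = 4 / (1/N_cases + 1/N_controls), which down-weights imbalance.
def n_eff(cases, controls):
    return 4.0 / (1.0 / cases + 1.0 / controls)

# Weighted Z-score meta-analysis with w_i = sqrt(N_eff,i).
def meta_z(z_scores, eff_sizes):
    w = np.sqrt(np.asarray(eff_sizes, dtype=float))
    return float(np.sum(w * np.asarray(z_scores)) / np.sqrt(np.sum(w ** 2)))

studies = [(1.8, n_eff(500, 4500)),    # (Z, N_eff): imbalanced study
           (2.1, n_eff(2000, 2000))]   # balanced study
z = meta_z([s[0] for s in studies], [s[1] for s in studies])
print(round(z, 3))
```

Note how the imbalanced study (N_eff = 1800 despite 5000 subjects) contributes less than the balanced one, which is what keeps the combined test well calibrated.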

  2. Development of orientation tuning in simple cells of primary visual cortex

    PubMed Central

    Moore, Bartlett D.

    2012-01-01

    Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than the linear contributions based on projections from spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631

  3. Low-sensitivity, low-bounce, high-linearity current-controlled oscillator suitable for single-supply mixed-mode instrumentation system.

    PubMed

    Hwang, Yuh-Shyan; Kung, Che-Min; Lin, Ho-Cheng; Chen, Jiann-Jong

    2009-02-01

A low-sensitivity, low-bounce, high-linearity current-controlled oscillator (CCO) suitable for a single-supply mixed-mode instrumentation system is designed and proposed in this paper. The designed CCO can be operated at low voltage (2 V). The power bounce and ground bounce generated by this CCO are less than 7 mVpp when the power-line parasitic inductance is increased to 100 nH to demonstrate the effect of power bounce and ground bounce. The power supply noise caused by the proposed CCO is less than 0.35% in reference to the 2 V supply voltage. The average conversion ratio KCCO is equal to 123.5 GHz/A. The linearity of the conversion ratio is high, with a tolerance within ±1.2%. The sensitivity of the proposed CCO is nearly independent of the power supply voltage and is less than that of a conventional current-starved oscillator. The performance of the proposed CCO has been compared with the current-starved oscillator. It is shown that the proposed CCO is suitable for single-supply mixed-mode instrumentation systems.

  4. Consensus Algorithms for Networks of Systems with Second- and Higher-Order Dynamics

    NASA Astrophysics Data System (ADS)

    Fruhnert, Michael

This thesis considers homogeneous networks of linear systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and that the systems are stabilizable. We show that, in continuous time, consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. For networks of continuous-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback. For networks of discrete-time second-order systems, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Schur. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. We show that consensus can always be achieved for marginally stable systems and discretized systems. Simple conditions for consensus-achieving controllers are obtained when the Laplacian eigenvalues are all real. For networks of continuous-time time-variant higher-order systems, we show that uniform consensus can always be achieved if the systems are quadratically stabilizable. In this case, we provide a simple condition to obtain a linear feedback control. For networks of discrete-time higher-order systems, we show that constant gains can be chosen such that consensus is achieved for a variety of network topologies.
First, we develop simple results for networks of time-invariant systems and networks of time-variant systems that are given in controllable canonical form. Second, we formulate the problem in terms of Linear Matrix Inequalities (LMIs). The condition found simplifies the design process and avoids the parallel solution of multiple LMIs. The result yields a modified Algebraic Riccati Equation (ARE) for which we present an equivalent LMI condition.
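The basic linear consensus protocol behind these results can be sketched on a small directed network; the graph, gain, and initial states below are illustrative, not from the thesis:

```python
import numpy as np

# Linear consensus via the graph Laplacian, integrated with forward Euler:
# x_{k+1} = x_k - eps * L @ x_k. The 3-node directed chain has a spanning
# tree rooted at node 0, so all states converge to the root's value.
L = np.array([[ 0.0,  0.0, 0.0],    # node 0: root, no in-edges
              [-1.0,  1.0, 0.0],    # node 1 listens to node 0
              [ 0.0, -1.0, 1.0]])   # node 2 listens to node 1
x = np.array([5.0, 1.0, -2.0])      # initial states
eps = 0.1                           # step size / coupling gain
for _ in range(400):
    x = x - eps * (L @ x)
print(np.round(x, 3))
```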

  5. Simple Tidal Prism Models Revisited

    NASA Astrophysics Data System (ADS)

    Luketina, D.

    1998-01-01

Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most textbooks on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which cannot.
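For reference, the textbook flushing-time estimate that such models produce can be sketched as follows; the volumes are illustrative:

```python
# Classic tidal prism flushing time for a well-mixed estuary:
# T_f = V * T_tide / P, with mean estuary volume V, tidal prism P,
# and semidiurnal tidal period T_tide. This is the simple form whose
# assumptions the paper re-examines.
def flushing_time_days(volume_m3, prism_m3, tidal_period_hours=12.42):
    return volume_m3 * tidal_period_hours / prism_m3 / 24.0

t = flushing_time_days(volume_m3=5.0e7, prism_m3=1.0e7)
print(round(t, 2))   # days
```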

6. Linear Legendrian curves in T^3

    NASA Astrophysics Data System (ADS)

    Ghiggini, Paolo

    2006-05-01

Using convex surfaces and Kanda's classification theorem, we classify Legendrian isotopy classes of linear Legendrian curves in all tight contact structures on T^3. Some of the knot types considered in this paper provide new examples of knot types that are not transversally simple.

  7. A simple route to improve rate performance of LiFePO4/reduced graphene oxide composite cathode by adding Mg2+ via mechanical mixing

    NASA Astrophysics Data System (ADS)

    Huang, Yuan; Liu, Hao; Gong, Li; Hou, Yanglong; Li, Quan

    2017-04-01

    Introducing Mg2+ into a LiFePO4/reduced graphene oxide composite via mechanical mixing and annealing leads to largely improved rate performance of the cathode (e.g. ∼78 mA h g-1 at 20 C with Mg2+ introduction vs. ∼37 mA h g-1 at 20 C without). X-ray photoelectron spectroscopy reveals that enhanced reduction of Fe2+ to Fe0 occurs in the simultaneous presence of Mg2+ and reduced graphene oxide, which is beneficial for the rate capability of the cathode. This fabrication process thus provides a simple and effective means to improve the rate performance of the LiFePO4/reduced graphene oxide composite cathode.

  8. An in-Situ Chemical Analyzer for the Determination of Trace Ammonia in Natural Waters

    NASA Astrophysics Data System (ADS)

    Amornthammarong, N.; Ortner, P. B.; Hendee, J. C.

    2014-12-01

    In recent decades, chemists have devoted considerable effort to automating classical wet chemistry. The instruments manufactured for analysis of large numbers of samples can be categorized into two main groups: batch and continuous flow analyzers. Our technique, the autonomous batch analyzer (ABA), combines the advantages of previously described batch analysis and continuous flow analysis. With its simpler design, the ABA is robust, flexible, inexpensive, and requires minimal maintenance. It achieves complete mixing of sample with reagents using a syringe and a simple mixing chamber. The system can autonomously produce a calibration curve by auto-diluting a single stock standard solution. In addition, it incorporates a pre-filtering subsystem enabling measurements in turbid, sediment-laden waters. Over the typical range for ammonia in marine waters (0-10 µM), the response is linear (r2 = 0.9930) with a limit of detection (S/N ratio > 3) of 10 nM. The working range for marine waters is 0.05-10 µM. Repeatability is 0.3% (n = 10) at an ammonia level of 2 μM. Results from automated operation in 15 min cycles over 16 days showed good overall precision (RSD = 3%, n = 660). The system was field tested at three shallow South Florida sites, including a tidal pond and the Indian River Lagoon, FL. Diurnal cycles and possibly a tidal influence were expressed in the observed concentration variability.
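    The calibration figures quoted (linearity as r2, detection limit from the blank noise) follow from a standard least-squares treatment, sketched below; the concentrations, signals, and blank replicates are hypothetical, not the analyzer's data.

```python
import numpy as np

# Hypothetical calibration: standard concentrations (uM) vs. response.
conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])          # uM
signal = np.array([2.1, 52.0, 101.8, 203.5, 507.0, 1012.4])

slope, intercept = np.polyfit(conc, signal, 1)

# Coefficient of determination r^2 of the linear fit.
pred = slope * conc + intercept
ss_res = np.sum((signal - pred) ** 2)
ss_tot = np.sum((signal - signal.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Limit of detection from replicate blanks: LOD = 3 * sd_blank / slope.
blanks = np.array([2.0, 2.3, 1.9, 2.2, 2.1])
lod_uM = 3.0 * blanks.std(ddof=1) / slope

print(f"slope={slope:.2f}, r2={r2:.4f}, LOD={lod_uM * 1000:.0f} nM")
```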

  9. A simple molecular mechanics integrator in mixed rigid body and dihedral angle space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitalis, Andreas, E-mail: a.vitalis@bioc.uzh.ch; Pappu, Rohit V.

    2014-07-21

    We propose a numerical scheme to integrate equations of motion in a mixed space of rigid-body and dihedral angle coordinates. The focus of the presentation is biomolecular systems and the framework is applicable to polymers with tree-like topology. By approximating the effective mass matrix as diagonal and lumping all bias torques into the time dependencies of the diagonal elements, we take advantage of the formal decoupling of individual equations of motion. We impose energy conservation independently for every degree of freedom and this is used to derive a numerical integration scheme. The cost of all auxiliary operations is linear in the number of atoms. By coupling the scheme to one of two popular thermostats, we extend the method to sample constant temperature ensembles. We demonstrate that the integrator of choice yields satisfactory stability and is free of mass-metric tensor artifacts, which is expected by construction of the algorithm. Two fundamentally different systems, viz., liquid water and an α-helical peptide in a continuum solvent, are used to establish the applicability of our method to a wide range of problems. The resultant constant temperature ensembles are shown to be thermodynamically accurate. The latter relies on detailed, quantitative comparisons to data from reference sampling schemes operating on exactly the same sets of degrees of freedom.

  10. Investigating organic molecules responsible for auxin-like activity of humic acid fraction extracted from vermicompost.

    PubMed

    Scaglia, Barbara; Nunes, Ramom Rachide; Rezende, Maria Olímpia Oliveira; Tambone, Fulvia; Adani, Fabrizio

    2016-08-15

    This work studied the auxin-like activity of humic acids (HA) obtained from vermicomposts produced using leather wastes plus cattle dung at different maturation stages (fresh, stable and mature). Bioassays were performed by testing HA concentrations in the range of 100-6000mgcarbonL(-1). (13)C CPMAS-NMR and GC-MS instrumental methods were used to assess the effect of biological processes and starting organic mixtures on HA composition. Not all HAs showed IAA-like activity and in general, IAA-like activity increased with the length of the vermicomposting process. The presence of leather wastes was not necessary to produce the auxin-like activity of HA, since HA extracted from a mix of cattle manure and sawdust, where no leather waste was added, showed IAA-like activity as well. CPMAS (13)CNMR revealed that HAs were similar independently of the mix used and that the humification process involved the increasing concentration of pre-existing alkali soluble fractions in the biomass. GC/MS allowed the identification of the molecules involved in IAA-like effects: carboxylic acids and amino acids. The concentration of active molecules, rather than their simple presence in HA, determined the bio-stimulating effect, and a good linear regression between auxin-like activity and active stimulating molecules concentration was found (R(2)=-0.85; p<0.01, n=6). Copyright © 2016 Elsevier B.V. All rights reserved.

  11. A continuum damage model for delaminations in laminated composites

    NASA Astrophysics Data System (ADS)

    Zou, Z.; Reid, S. R.; Li, S.

    2003-02-01

    Delamination, a typical mode of interfacial damage in laminated composites, has been considered in the context of continuum damage mechanics in this paper. Interfaces where delaminations could occur are introduced between the constituent layers. A simple but appropriate continuum damage representation is proposed. A single scalar damage parameter is employed and the degradation of the interface stiffness is established. Use has been made of the concept of a damage surface to derive the damage evolution law. The damage surface is constructed so that it combines the conventional stress-based and fracture-mechanics-based failure criteria which take account of mode interaction in mixed-mode delamination problems. The damage surface shrinks as damage develops and leads to a softening interfacial constitutive law. By adjusting the shrinkage rate of the damage surface, various interfacial constitutive laws found in the literature can be reproduced. An incremental interfacial constitutive law is also derived for use in damage analysis of laminated composites, which is a non-linear problem in nature. Numerical predictions for problems involving a DCB specimen under pure mode I delamination and mixed-mode delamination in a split beam are in good agreement with available experimental data or analytical solutions. The model has also been applied to the prediction of the failure strength of overlap ply-blocking specimens. The results have been compared with available experimental and alternative theoretical ones and discussed fully.

  12. The MHOST finite element program: 3-D inelastic analysis methods for hot section components. Volume 1: Theoretical manual

    NASA Technical Reports Server (NTRS)

    Nakazawa, Shohei

    1991-01-01

    Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and the solution algorithms, derived from the mixed three-field Hu-Washizu principle, are discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic, and buckling analyses. The global-local analysis procedure, referred to as subelement refinement, is developed in the framework of the mixed iterative solution and presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.

  13. A neural network approach to job-shop scheduling.

    PubMed

    Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E

    1991-01-01

    A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, e.g., job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on a quadratic energy cost function, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, i.e., the traveling salesman problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of the quality of the solution and the network complexity.

  14. On the propagation of particulate gravity currents in circular and semi-circular channels partially filled with homogeneous or stratified ambient fluid

    NASA Astrophysics Data System (ADS)

    Zemach, T.; Chiapponi, L.; Petrolo, D.; Ungarish, M.; Longo, S.; Di Federico, V.

    2017-10-01

    We present a combined theoretical-experimental investigation of particle-driven gravity currents advancing in circular cross section channels in the high-Reynolds number Boussinesq regime; the ambient fluid is either homogeneous or linearly stratified. The predictions of the theoretical model are compared with experiments performed in lock-release configuration; experiments were performed with conditions of both full-depth and partial-depth locks. Two different particles were used for the turbidity current, and the full range 0 ≤S ≤1 of the stratification parameter was explored (S = 0 corresponds to the homogeneous case and S = 1 when the density of the ambient fluid and of the current are equal at the bottom). In addition, a few saline gravity currents were tested for comparison. The results show good agreement for the full-depth configuration, with the initial depth of the current in the lock being equal to the depth of the ambient fluid. The agreement is weaker for the partial-depth cases but is improved by the introduction of a simple adjustment coefficient for the Froude number at the front of the current and by accounting for dissipation. The general parameter dependencies and behaviour of the current, although influenced by many factors (e.g., mixing and internal waves), are well predicted by the relatively simple model.

  15. Improving Students’ Science Process Skills through Simple Computer Simulations on Linear Motion Conceptions

    NASA Astrophysics Data System (ADS)

    Siahaan, P.; Suryani, A.; Kaniawati, I.; Suhendi, E.; Samsudin, A.

    2017-02-01

    The purpose of this research is to identify the development of students’ science process skills (SPS) on linear motion concept by utilizing simple computer simulation. In order to simplify the learning process, the concept is able to be divided into three sub-concepts: 1) the definition of motion, 2) the uniform linear motion and 3) the uniformly accelerated motion. This research was administered via pre-experimental method with one group pretest-posttest design. The respondents which were involved in this research were 23 students of seventh grade in one of junior high schools in Bandung City. The improving process of students’ science process skill is examined based on normalized gain analysis from pretest and posttest scores for all sub-concepts. The result of this research shows that students’ science process skills are dramatically improved by 47% (moderate) on observation skill; 43% (moderate) on summarizing skill, 70% (high) on prediction skill, 44% (moderate) on communication skill and 49% (moderate) on classification skill. These results clarify that the utilizing simple computer simulations in physics learning is be able to improve overall science skills at moderate level.
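    The "moderate"/"high" labels used above conventionally come from Hake's normalized gain, g = (post - pre) / (100 - pre); a minimal sketch with hypothetical pre/post scores follows.

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain <g> = (post - pre) / (100 - pre),
    the metric commonly behind low/moderate/high gain labels."""
    if pre_pct >= 100:
        raise ValueError("pre-test score already at ceiling")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

def gain_category(g):
    # Conventional thresholds: low < 0.3 <= moderate < 0.7 <= high
    return "high" if g >= 0.7 else "moderate" if g >= 0.3 else "low"

g = normalized_gain(40.0, 82.0)   # hypothetical pre/post percentages
print(round(g, 2), gain_category(g))
```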

  16. Direct localization of poles of a meromorphic function from measurements on an incomplete boundary

    NASA Astrophysics Data System (ADS)

    Nara, Takaaki; Ando, Shigeru

    2010-01-01

    This paper proposes an algebraic method to reconstruct the positions of multiple poles in a meromorphic function field from measurements on an arbitrary simple arc in it. A novel issue is the exactness of the algorithm depending on whether the arc is open or closed, and whether it encloses or does not enclose the poles. We first obtain a differential equation that can equivalently determine the meromorphic function field. From it, we derive linear equations that relate the elementary symmetric polynomials of the pole positions to weighted integrals of the field along the simple arc and end-point terms of the arc when it is an open one. Eliminating the end-point terms based on an appropriate choice of weighting functions and a combination of the linear equations, we obtain a simple system of linear equations for solving the elementary symmetric polynomials. We also show that our algorithm can be applied to a 2D electric impedance tomography problem. The effects of the proximity of the poles, the number of measurements and noise on the localization accuracy are numerically examined.

  17. A Simple Attitude Control of Quadrotor Helicopter Based on Ziegler-Nichols Rules for Tuning PD Parameters

    PubMed Central

    He, ZeFang

    2014-01-01

    An attitude control strategy based on Ziegler-Nichols rules for tuning the PD (proportional-derivative) parameters of quadrotor helicopters is presented to address the tendency of quadrotors to become unstable. This problem is caused by the narrow definition domain of the attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller, with PD parameters tuned by Ziegler-Nichols rules, acting on the quadrotor's decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization term which converts the nonlinear system into a linear one. The simulation results show that the attitude controller proposed in this paper is highly robust, and its control effect is better than that of two other nonlinear controllers whose nonlinear parts are the same as in the proposed controller. Their linear parts are a PID (proportional-integral-derivative) controller with parameters tuned by Ziegler-Nichols rules and a PD controller with parameters tuned by GA (genetic algorithms). Moreover, this attitude controller is simple and easy to implement. PMID:25614879
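    The classical Ziegler-Nichols closed-loop (ultimate-cycle) PD rule can be sketched as below; the ultimate gain and period are hypothetical values for one attitude axis, not figures from the paper.

```python
def zn_pd_gains(Ku, Tu):
    """Ziegler-Nichols ultimate-cycle tuning for a PD controller:
    Kp = 0.8*Ku, Td = Tu/8, hence Kd = Kp*Td.
    Ku = ultimate gain, Tu = oscillation period at Ku."""
    Kp = 0.8 * Ku
    Td = Tu / 8.0
    return Kp, Kp * Td

# Hypothetical ultimate-gain experiment on one attitude axis:
Ku, Tu = 12.0, 0.8          # ultimate gain, period in seconds
Kp, Kd = zn_pd_gains(Ku, Tu)
print(f"Kp={Kp:.2f}, Kd={Kd:.2f}")
```

In the paper's scheme these gains would act on the feedback-linearized (decoupled, linear) attitude dynamics, not on the raw nonlinear plant.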

  18. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.

  19. Analysis of Iron in Lawn Fertilizer: A Sampling Study

    ERIC Educational Resources Information Center

    Jeannot, Michael A.

    2006-01-01

    An experiment is described which uses a real-world sample of lawn fertilizer in a simple exercise to illustrate problems associated with the sampling step of a chemical analysis. A mixed-particle fertilizer containing discrete particles of iron oxide (magnetite, Fe[subscript 3]O[subscript 4]) mixed with other particles provides an excellent…

  20. MODELING NON-PRECIPITATING CUMULUS CLOUDS AS FLOW-THROUGH-REACTOR TRANSFORMER AND VENTING TRANSPORTER OF MIXED LAYER POLLUTANTS

    EPA Science Inventory

    A simple diagnostic model of cumulus convective clouds is developed and used in a sensitivity study to examine the extent to which the rate of change of mixed and cloud layer pollutant concentration is influenced by vertical transport and chemical transformation processes occurri...

  1. The Quantitative Determination of Food Dyes in Powdered Drink Mixes: A High School or General Science Experiment

    ERIC Educational Resources Information Center

    Sigmann, Samuella B.; Wheeler, Dale E.

    2004-01-01

    The investigations focus on the development of a simple spectrophotometric method to quantitatively determine the FD&C color additives present in powdered drink mixes. Samples containing single dyes or binary mixtures of dyes can be analyzed using this method.
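    Quantifying a binary dye mixture spectrophotometrically rests on the Beer-Lambert law and the additivity of absorbances: measuring at two wavelengths gives a 2x2 linear system for the two concentrations. A sketch with illustrative (not tabulated) molar absorptivities:

```python
import numpy as np

# Beer-Lambert: A(lambda) = eps(lambda) * b * c for each dye, and
# absorbances add. With a 1 cm cell, two wavelengths give a 2x2
# linear system. Absorptivity values below are made up for the demo.
eps = np.array([[21000.0,   2300.0],    # at lambda1: [dye 1, dye 2]
                [ 1800.0, 130000.0]])   # at lambda2: [dye 1, dye 2]

A_mix = np.array([0.430, 0.780])        # measured mixture absorbances

conc = np.linalg.solve(eps, A_mix)      # mol/L of each dye
print(conc * 1e6)                       # micromolar
```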

  2. Comparison of linear, skewed-linear, and proportional hazard models for the analysis of lambing interval in Ripollesa ewes.

    PubMed

    Casellas, J; Bach, R

    2012-06-01

    Lambing interval is a relevant reproductive indicator for sheep populations under continuous mating systems, although there is a shortage of selection programs accounting for this trait in the sheep industry. Both the historical assumption of small genetic background and its unorthodox distribution pattern have limited its implementation as a breeding objective. In this manuscript, statistical performances of 3 alternative parametrizations [i.e., symmetric Gaussian mixed linear (GML) model, skew-Gaussian mixed linear (SGML) model, and piecewise Weibull proportional hazard (PWPH) model] have been compared to elucidate the preferred methodology to handle lambing interval data. More specifically, flock-by-flock analyses were performed on 31,986 lambing interval records (257.3 ± 0.2 d) from 6 purebred Ripollesa flocks. Model performances were compared in terms of deviance information criterion (DIC) and Bayes factor (BF). For all flocks, PWPH models were clearly preferred; they generated a reduction of 1,900 or more DIC units and provided BF estimates larger than 100 (i.e., PWPH models against linear models). These differences were reduced when comparing PWPH models with different number of change points for the baseline hazard function. In 4 flocks, only 2 change points were required to minimize the DIC, whereas 4 and 6 change points were needed for the 2 remaining flocks. These differences demonstrated a remarkable degree of heterogeneity across sheep flocks that must be properly accounted for in genetic evaluation models to avoid statistical biases and suboptimal genetic trends. Within this context, all 6 Ripollesa flocks revealed substantial genetic background for lambing interval with heritabilities ranging between 0.13 and 0.19. This study provides the first evidence of the suitability of PWPH models for lambing interval analysis, clearly discarding previous parametrizations focused on mixed linear models.

  3. Bimodule structure of the mixed tensor product over U_q(sl(2|1)) and quantum walled Brauer algebra

    NASA Astrophysics Data System (ADS)

    Bulgakova, D. V.; Kiselev, A. M.; Tipunin, I. Yu.

    2018-03-01

    We study the mixed tensor product 3^{⊗m} ⊗ 3̄^{⊗n} of the three-dimensional fundamental representations of the Hopf algebra U_q(sl(2|1)), whenever q is not a root of unity. Formulas for the decomposition of tensor products of any simple and projective U_q(sl(2|1))-module with the generating modules 3 and 3̄ are obtained. The centralizer of U_q(sl(2|1)) on the mixed tensor product is calculated. It is shown to be the quotient X_{m,n} of the quantum walled Brauer algebra qwB_{m,n}. The structure of projective modules over X_{m,n} is written down explicitly. It is known that the walled Brauer algebras form an infinite tower. We have calculated the corresponding restriction functors on simple and projective modules over X_{m,n}. This result forms a crucial step in the decomposition of the mixed tensor product as a bimodule over X_{m,n} ⊠ U_q(sl(2|1)). We give an explicit bimodule structure for all m, n.

  4. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    PubMed

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept of the GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are presented and the method is applied to the KIRBY21 test-retest dataset.

  5. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    PubMed

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, commonly assumed symmetric distributions for model errors are substituted by asymmetric distribution to account for skewness. Further, informative missing data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  6. Atomistic Structure and Dynamics of the Solvation Shell Formed by Organic Carbonates around Lithium Ions via Infrared Spectroscopies

    NASA Astrophysics Data System (ADS)

    Kuroda, Daniel; Fufler, Kristen

    Lithium-ion batteries have become ubiquitous in the portable energy storage industry, but efficiency issues still remain. Currently, most technological and scientific efforts are focused on the electrodes, with little attention paid to the electrolyte. For example, simple fundamental questions about the composition of the lithium ion solvation shell in commercially used electrolytes have not been answered. Using a combination of linear and non-linear IR spectroscopies and theoretical calculations, we have carried out a thorough investigation of the solvation structure and dynamics of the lithium ion in various linear and cyclic carbonates at common battery electrolyte concentrations. Our studies show that carbonates coordinate the lithium ion tetrahedrally. They also reveal that linear and cyclic carbonates have contrasting dynamics, with cyclic carbonates presenting the most ordered structure. Finally, our experiments demonstrate that simple structural modifications in the linear carbonates significantly impact the microscopic interactions of the system. The stark differences in solvation structure and dynamics among different carbonates reveal previously unknown details about the molecular-level picture of these systems.

  7. Calibration of Response Data Using MIRT Models with Simple and Mixed Structures

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2012-01-01

    It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…

  8. A Simple Inquiry-Based Lab for Teaching Osmosis

    ERIC Educational Resources Information Center

    Taylor, John R.

    2014-01-01

    This simple inquiry-based lab was designed to teach the principle of osmosis while also providing an experience for students to use the skills and practices commonly found in science. Students first design their own experiment using very basic equipment and supplies, which generally results in mixed, but mostly poor, outcomes. Classroom "talk…

  9. Differentiating Tumor Progression from Pseudoprogression in Patients with Glioblastomas Using Diffusion Tensor Imaging and Dynamic Susceptibility Contrast MRI.

    PubMed

    Wang, S; Martinez-Lage, M; Sakai, Y; Chawla, S; Kim, S G; Alonso-Basanta, M; Lustig, R A; Brem, S; Mohan, S; Wolf, R L; Desai, A; Poptani, H

    2016-01-01

    Early assessment of treatment response is critical in patients with glioblastomas. A combination of DTI and DSC perfusion imaging parameters was evaluated to distinguish glioblastomas with true progression from mixed response and pseudoprogression. Forty-one patients with glioblastomas exhibiting enhancing lesions within 6 months after completion of chemoradiation therapy were retrospectively studied. All patients underwent surgery after MR imaging and were histologically classified as having true progression (>75% tumor), mixed response (25%-75% tumor), or pseudoprogression (<25% tumor). Mean diffusivity, fractional anisotropy, linear anisotropy coefficient, planar anisotropy coefficient, spheric anisotropy coefficient, and maximum relative cerebral blood volume values were measured from the enhancing tissue. A multivariate logistic regression analysis was used to determine the best model for classification of true progression from mixed response or pseudoprogression. Significantly elevated maximum relative cerebral blood volume, fractional anisotropy, linear anisotropy coefficient, and planar anisotropy coefficient and decreased spheric anisotropy coefficient were observed in true progression compared with pseudoprogression (P < .05). There were also significant differences in maximum relative cerebral blood volume, fractional anisotropy, planar anisotropy coefficient, and spheric anisotropy coefficient measurements between mixed response and true progression groups. The best model to distinguish true progression from non-true progression (pseudoprogression and mixed) consisted of fractional anisotropy, linear anisotropy coefficient, and maximum relative cerebral blood volume, resulting in an area under the curve of 0.905. This model also differentiated true progression from mixed response with an area under the curve of 0.901. A combination of fractional anisotropy and maximum relative cerebral blood volume differentiated pseudoprogression from nonpseudoprogression (true progression and mixed) with an area under the curve of 0.807. DTI and DSC perfusion imaging can improve accuracy in assessing treatment response and may aid in individualized treatment of patients with glioblastomas. © 2016 by American Journal of Neuroradiology.

  10. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  11. Sensitivity of the ocean overturning circulation to wind and mixing: theoretical scalings and global ocean models

    NASA Astrophysics Data System (ADS)

    Nikurashin, Maxim; Gunn, Andrew

    2017-04-01

    The meridional overturning circulation (MOC) is a planetary-scale oceanic flow which is of direct importance to the climate system: it transports heat meridionally and regulates the exchange of CO2 with the atmosphere. The MOC is forced by wind and heat and freshwater fluxes at the surface and turbulent mixing in the ocean interior. A number of conceptual theories for the sensitivity of the MOC to changes in forcing have recently been developed and tested with idealized numerical models. However, the skill of the simple conceptual theories to describe the MOC simulated with higher complexity global models remains largely unknown. In this study, we present a systematic comparison of theoretical and modelled sensitivity of the MOC and associated deep ocean stratification to vertical mixing and southern hemisphere westerlies. The results show that theories that simplify the ocean into a single-basin, zonally-symmetric box are generally in a good agreement with a realistic, global ocean circulation model. Some disagreement occurs in the abyssal ocean, where complex bottom topography is not taken into account by simple theories. Distinct regimes, where the MOC has a different sensitivity to wind or mixing, as predicted by simple theories, are also clearly shown by the global ocean model. The sensitivity of the Indo-Pacific, Atlantic, and global basins is analysed separately to validate the conceptual understanding of the upper and lower overturning cells in the theory.

  12. A green vehicle routing problem with customer satisfaction criteria

    NASA Astrophysics Data System (ADS)

    Afshar-Bakeshloo, M.; Mehrabi, A.; Safari, H.; Maleki, M.; Jolai, F.

    2016-12-01

    This paper develops an MILP model, named the Satisfactory-Green Vehicle Routing Problem (S-GVRP). It consists of routing a heterogeneous fleet of vehicles in order to serve a set of customers within predefined time windows. In this model, in addition to the traditional objective of the VRP, both pollution and customers' satisfaction are taken into account. The model provides an effective dashboard for decision-makers that determines appropriate routes, the best mixed fleet, and the speed and idle time of vehicles. Additionally, new factors evaluate the greenness of each decision based on three criteria. The model applies piecewise linear functions (PLFs) to linearize a nonlinear fuzzy interval for incorporating customers' satisfaction into otherwise linear objectives. We present a mixed integer linear programming formulation for the S-GVRP. The model enriches managerial insights by providing trade-offs between customers' satisfaction, total costs and emission levels. Finally, a numerical study shows the applicability of the model.

  13. INCORPORATING CONCENTRATION DEPENDENCE IN STABLE ISOTOPE MIXING MODELS

    EPA Science Inventory

    Stable isotopes are frequently used to quantify the contributions of multiple sources to a mixture; e.g., C and N isotopic signatures can be used to determine the fraction of three food sources in a consumer's diet. The standard dual isotope, three source linear mixing model ass...
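    The standard dual-isotope, three-source linear mixing model described in this record reduces to a small linear system: two isotope mass balances plus the constraint that the fractions sum to one. A minimal sketch follows; all delta values are illustrative, not data from the record.

    ```python
    # Hypothetical three-source, dual-isotope (C, N) linear mixing model:
    # solve for the fraction of each food source in a consumer's diet.

    def solve3(A, b):
        """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
        n = 3
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    # Source signatures (d13C, d15N) for three hypothetical food sources:
    sources = [(-26.0, 4.0), (-20.0, 8.0), (-12.0, 12.0)]
    mix = (-20.0, 7.6)  # consumer tissue signature (illustrative)

    A = [[1.0, 1.0, 1.0],              # fractions sum to 1
         [s[0] for s in sources],      # d13C mass balance
         [s[1] for s in sources]]      # d15N mass balance
    b = [1.0, mix[0], mix[1]]

    fractions = solve3(A, b)
    print([round(f, 3) for f in fractions])  # → [0.4, 0.3, 0.3]
    ```

    The concentration-dependent extension the record refers to would weight each source's contribution by its elemental concentration rather than assuming equal contributions per unit of biomass.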

  14. Superradiance Effects in the Linear and Nonlinear Optical Response of Quantum Dot Molecules

    NASA Astrophysics Data System (ADS)

    Sitek, A.; Machnikowski, P.

    2008-11-01

    We calculate the linear optical response from a single quantum dot molecule and the nonlinear, four-wave-mixing response from an inhomogeneously broadened ensemble of such molecules. We show that both optical signals are affected by the coupling-dependent superradiance effect and by optical interference between the two polarizations. As a result, the linear and nonlinear responses are not identical.

  15. Multi-temperature state-dependent equivalent circuit discharge model for lithium-sulfur batteries

    NASA Astrophysics Data System (ADS)

    Propp, Karsten; Marinescu, Monica; Auger, Daniel J.; O'Neill, Laura; Fotouhi, Abbas; Somasundaram, Karthik; Offer, Gregory J.; Minton, Geraint; Longo, Stefano; Wild, Mark; Knap, Vaclav

    2016-10-01

    Lithium-sulfur (Li-S) batteries are described extensively in the literature, but existing computational models aimed at scientific understanding are too complex for use in applications such as battery management. Computationally simple models are vital for exploitation. This paper proposes a non-linear state-of-charge dependent Li-S equivalent circuit network (ECN) model for a Li-S cell under discharge. Li-S batteries are fundamentally different to Li-ion batteries, and require chemistry-specific models. A new Li-S model is obtained using a 'behavioural' interpretation of the ECN model; as Li-S exhibits a 'steep' open-circuit voltage (OCV) profile at high states-of-charge, identification methods are designed to take into account OCV changes during current pulses. The prediction-error minimization technique is used. The model is parameterized from laboratory experiments using a mixed-size current pulse profile at four temperatures from 10 °C to 50 °C, giving linearized ECN parameters for a range of states-of-charge, currents and temperatures. These are used to create a nonlinear polynomial-based battery model suitable for use in a battery management system. When the model is used to predict the behaviour of a validation data set representing an automotive NEDC driving cycle, the terminal voltage predictions are judged accurate with a root mean square error of 32 mV.
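    The behavioural ECN approach above can be illustrated with a minimal one-RC-pair discharge simulation. All parameter values below (the OCV curve, R0, R1, C1, capacity) are illustrative placeholders, not the paper's identified Li-S values; a real model would make them functions of state-of-charge, current and temperature, as described in the abstract.

    ```python
    # Minimal one-RC-pair equivalent-circuit-network (ECN) discharge sketch.
    # Parameters are invented for illustration only.

    def ocv(soc):
        """Illustrative open-circuit voltage curve, steep at high state-of-charge."""
        return 2.1 + 0.15 * soc + 0.25 * max(0.0, soc - 0.8) / 0.2

    def simulate_discharge(capacity_ah=3.4, current_a=1.7,
                           r0=0.03, r1=0.02, c1=2000.0, dt=1.0):
        soc, v_rc = 1.0, 0.0
        voltages = []
        while soc > 0.0:
            # RC branch: dv_rc/dt = (i - v_rc / R1) / C1, forward-Euler step
            v_rc += dt * (current_a - v_rc / r1) / c1
            # Terminal voltage: OCV minus ohmic and polarisation drops
            voltages.append(ocv(soc) - current_a * r0 - v_rc)
            soc -= current_a * dt / (capacity_ah * 3600.0)
        return voltages

    v = simulate_discharge()
    print(len(v), round(v[0], 3), round(v[-1], 3))
    ```

    A battery-management implementation would replace the constant `r0`, `r1`, `c1` with the identified polynomial functions of state-of-charge and temperature.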

  16. Correlation of Normal Gravity Mixed Convection Blowoff Limits with Microgravity Forced Flow Blowoff Limits

    NASA Technical Reports Server (NTRS)

    Marcum, Jeremy W.; Olson, Sandra L.; Ferkul, Paul V.

    2016-01-01

    The axisymmetric rod geometry in upward axial stagnation flow provides a simple way to measure normal gravity blowoff limits to compare with microgravity Burning and Suppression of Solids - II (BASS-II) results recently obtained aboard the International Space Station. This testing utilized the same BASS-II concurrent rod geometry, but with the addition of normal gravity buoyant flow. Cast polymethylmethacrylate (PMMA) rods of diameters ranging from 0.635 cm to 3.81 cm were burned at oxygen concentrations ranging from 14 to 18% by volume. The forced flow velocity where blowoff occurred was determined for each rod size and oxygen concentration. These blowoff limits compare favorably with the BASS-II results when the buoyant stretch is included and the flow is corrected by considering the blockage factor of the fuel. From these results, the normal gravity blowoff boundary for this axisymmetric rod geometry is determined to be linear, with oxygen concentration directly proportional to flow speed. We describe a new normal gravity 'upward flame spread test' method which extrapolates the linear blowoff boundary to the zero stretch limit in order to resolve microgravity flammability limits - something current methods cannot do. This new test method can improve spacecraft fire safety for future exploration missions by providing a tractable way to obtain good estimates of material flammability in low gravity.

  17. Decomposition of a Mixed-Valence [2Fe-2S] Cluster to Linear Tetra-Ferric and Ferrous Clusters

    PubMed Central

    Saouma, Caroline T.; Kaminsky, Werner; Mayer, James M.

    2012-01-01

    Despite the ease of preparing di-ferric [2Fe-2S] clusters, preparing stable mixed-valence analogues remains a challenge, as these clusters have limited thermal stability. Herein we identify two decomposition products of the mixed-valence thiosalicylate-ligated [2Fe-2S] cluster, [Fe2S2(SArCOO)2]3− ((SArCOO)2− = thiosalicylate). PMID:23976815

  18. Heat kernel for the elliptic system of linear elasticity with boundary conditions

    NASA Astrophysics Data System (ADS)

    Taylor, Justin; Kim, Seick; Brown, Russell

    2014-10-01

    We consider the elliptic system of linear elasticity with bounded measurable coefficients in a domain where the second Korn inequality holds. We construct the heat kernel of the system subject to Dirichlet, Neumann, or mixed boundary conditions under the assumption that weak solutions of the elliptic system are Hölder continuous in the interior. Moreover, we show that if weak solutions of the mixed problem are Hölder continuous up to the boundary, then the corresponding heat kernel has a Gaussian bound. In particular, if the domain is a two-dimensional Lipschitz domain satisfying a corkscrew or non-tangential accessibility condition on the set where we specify the Dirichlet boundary condition, then we show that the heat kernel has a Gaussian bound. As an application, we construct Green's function for the elliptic mixed problem in such a domain.

  19. Identifying Glacial Meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, L. C.; Heywood, K. J.; Jenkins, A.; Kaiser, J.

    2016-02-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (Price et al., 1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.

  20. Identifying glacial meltwater in the Amundsen Sea, Antarctica

    NASA Astrophysics Data System (ADS)

    Biddle, Louise; Heywood, Karen; Jenkins, Adrian; Kaiser, Jan

    2016-04-01

    Pine Island Glacier, located in the Amundsen Sea, is losing mass rapidly due to relatively warm ocean waters melting its ice shelf from below. The resulting increase in meltwater production may be the root of the freshening in the Ross Sea over the last 30 years. Tracing the meltwater travelling away from the ice sheets is important in order to identify the regions most affected by the increased input of this water type. We use water mass characteristics (temperature, salinity, O2 concentration) derived from 105 CTD casts during the Ocean2ice cruise on RRS James Clark Ross in January-March 2014 to calculate meltwater fractions north of Pine Island Glacier. The data show maximum meltwater fractions at the ice front of up to 2.4 % and a plume of meltwater travelling away from the ice front along the 1027.7 kg m-3 isopycnal. We investigate the reliability of these results and attach uncertainties to the measurements made to ascertain the most reliable method of meltwater calculation in the Amundsen Sea. Processes such as atmospheric interaction and biological activity also affect the calculated apparent meltwater fractions. We analyse their effects on the reliability of the calculated meltwater fractions across the region using a bulk mixed layer model based on the one-dimensional Price-Weller-Pinkel model (1986). The model includes sea ice, dissolved oxygen concentrations and a simple respiration model, forced by NCEP climatology and an initial linear mixing profile between Winter Water (WW) and Circumpolar Deep Water (CDW). The model mimics the seasonal cycle of mixed layer warming and freshening and simulates how increases in sea ice formation and the influx of slightly cooler Lower CDW impact on the apparent meltwater fractions. These processes could result in biased meltwater signatures across the eastern Amundsen Sea.
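    Meltwater fractions like those quoted above are commonly obtained from a linear endmember mixing calculation over water-mass properties. The sketch below uses potential temperature and salinity with three endmembers (Winter Water, Circumpolar Deep Water, meltwater); the endmember values are illustrative, not the cruise's, and the meltwater temperature is an "effective" value that folds in the latent heat of melting.

    ```python
    # Three-endmember water-mass decomposition from theta and salinity.
    # Endmember properties below are rough, illustrative Amundsen Sea values.

    ENDMEMBERS = {            # (potential temperature / degC, salinity)
        "WW":  (-1.8, 34.0),  # Winter Water
        "CDW": ( 1.2, 34.7),  # Circumpolar Deep Water
        "MW":  (-90.8, 0.0),  # effective glacial meltwater endmember
    }

    def meltwater_fraction(theta, sal):
        (t_w, s_w), (t_c, s_c), (t_m, s_m) = (ENDMEMBERS[k] for k in ("WW", "CDW", "MW"))
        # Eliminate f_WW = 1 - f_CDW - f_MW, leaving a 2x2 system in (f_CDW, f_MW):
        a11, a12, b1 = t_c - t_w, t_m - t_w, theta - t_w
        a21, a22, b2 = s_c - s_w, s_m - s_w, sal - s_w
        det = a11 * a22 - a12 * a21
        f_cdw = (b1 * a22 - a12 * b2) / det
        f_mw = (a11 * b2 - b1 * a21) / det
        return f_mw, f_cdw, 1.0 - f_cdw - f_mw

    f_mw, f_cdw, f_ww = meltwater_fraction(theta=0.0, sal=34.4)
    print(f"meltwater: {100 * f_mw:.2f} %")
    ```

    Adding a third tracer such as dissolved O2, as in the study, over-determines the system and lets the residuals quantify how biological activity and air-sea exchange bias the apparent meltwater fraction.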

  1. In-lake carbon dioxide concentration patterns in four distinct phases in relation to ice cover dynamics

    NASA Astrophysics Data System (ADS)

    Denfeld, B. A.; Wallin, M.; Sahlee, E.; Sobek, S.; Kokic, J.; Chmiel, H.; Weyhenmeyer, G. A.

    2014-12-01

    Global carbon dioxide (CO2) emission estimates from inland waters include emissions at ice melt that are based on simple assumptions rather than evidence. To account for CO2 accumulation below ice and potential emissions into the atmosphere at ice melt, we combined continuous CO2 concentration measurements with spatial CO2 sampling in an ice-covered small boreal lake. From early ice cover to ice melt, our continuous surface water CO2 concentration measurements at 2 m depth showed a temporal development in four distinct phases: In early winter, CO2 accumulated continuously below ice, most likely due to biological in-lake and catchment inputs. Thereafter, in late winter, CO2 concentrations remained rather constant below ice, as catchment inputs were minimized and vertical mixing of hypolimnetic water was cut off. As ice melt began, surface water CO2 concentrations changed rapidly, showing two distinct peaks, the first reflecting horizontal mixing of CO2 from surface and catchment waters, the second reflecting deep water mixing. We found that 83% of the CO2 accumulated in the water during ice cover left the lake at ice melt, corresponding to one third of the total CO2 storage. Our results imply that CO2 emissions at ice melt must be accurately integrated into annual CO2 emission estimates from inland waters. If up-scaling approaches assume that CO2 accumulates linearly under ice and that all CO2 accumulated during the ice-cover period leaves the lake at ice melt, present estimates may overestimate CO2 emissions from small ice-covered lakes. Likewise, neglecting CO2 spring outbursts will result in an underestimation of CO2 emissions from small ice-covered lakes.

  2. Insights into plant water uptake from xylem-water isotope measurements in two tropical catchments with contrasting moisture conditions

    USGS Publications Warehouse

    Evaristo, Jaivime; McDonnell, Jeffrey J.; Scholl, Martha A.; Bruijnzeel, L. Adrian; Chun, Kwok P.

    2016-01-01

    Water transpired by trees has long been assumed to be sourced from the same subsurface water stocks that contribute to groundwater recharge and streamflow. However, recent investigations using dual water stable isotopes have shown an apparent ecohydrological separation between tree-transpired water and stream water. Here we present evidence for such ecohydrological separation in two tropical environments in Puerto Rico where precipitation seasonality is relatively low and where precipitation is positively correlated with primary productivity. We determined the stable isotope signature of xylem water of 30 mahogany (Swietenia spp.) trees sampled during two periods with contrasting moisture status. Our results suggest that the separation between transpiration water and groundwater recharge/streamflow water might be related less to the temporal phasing of hydrologic inputs and primary productivity, and more to the fundamental processes that drive evaporative isotopic enrichment of residual soil water within the soil matrix. The lack of an evaporative signature of both groundwater and streams in the study area suggests that these water balance components have a water source that is transported quickly to deeper subsurface storage compared to waters that trees use. A Bayesian mixing model used to partition source water proportions of xylem water showed that groundwater contribution was greater for valley-bottom, riparian trees than for ridge-top trees. Groundwater contribution was also greater at the xeric site than at the mesic–hydric site. These model results (1) underline the utility of a simple linear mixing model, implemented in a Bayesian inference framework, in quantifying source water contributions at sites with contrasting physiographic characteristics, and (2) highlight the informed judgement that should be made when interpreting mixing model results, which is particularly important when surveying groundwater use patterns by vegetation from regional to global scales.

  3. User's manual for interfacing a leading edge, vortex rollup program with two linear panel methods

    NASA Technical Reports Server (NTRS)

    Desilva, B. M. E.; Medan, R. T.

    1979-01-01

    Sufficient instructions are provided for interfacing the Mangler-Smith, leading edge vortex rollup program with a vortex lattice (POTFAN) method and an advanced higher order, singularity linear analysis for computing the vortex effects for simple canard wing combinations.

  4. Continuous Quantitative Measurements on a Linear Air Track

    ERIC Educational Resources Information Center

    Vogel, Eric

    1973-01-01

    Describes the construction and operational procedures of a spark-timing apparatus which is designed to record the back and forth motion of one or two carts on linear air tracks. Applications to measurements of velocity, acceleration, simple harmonic motion, and collision problems are illustrated. (CC)

  5. Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions

    DTIC Science & Technology

    2007-09-01

    C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January). Experimental Modal Analysis, A Simple Non...variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a...THEORY The general problem statement for a non-linear constrained optimization problem is: to minimize f(x) (the objective function) subject to

  6. A step-by-step guide to non-linear regression analysis of experimental data using a Microsoft Excel spreadsheet.

    PubMed

    Brown, A M

    2001-06-01

    The objective of this present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and are not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function in the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
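    The iterative least-squares loop that SOLVER performs can be mimicked in a few lines of plain Python. The sketch below uses a derivative-free pattern search (a deliberately simple stand-in for SOLVER's optimizer, not the algorithm Excel actually uses) to minimize the sum of squared residuals for a user-supplied function; the saturating-exponential example and its data are synthetic.

    ```python
    import math

    # Minimal iterative least-squares fitting of y = f(x; a, b), in the spirit
    # of using SOLVER on a user-input function. The optimizer here is a simple
    # pattern search, chosen for transparency rather than speed.

    def sse(params, xs, ys, f):
        """Sum of squared residuals: the quantity being minimized."""
        return sum((y - f(x, *params)) ** 2 for x, y in zip(xs, ys))

    def fit(xs, ys, f, start, step=0.5, tol=1e-7):
        params, best = list(start), sse(start, xs, ys, f)
        while step > tol:
            improved = False
            for i in range(len(params)):
                for delta in (step, -step):
                    trial = params[:]
                    trial[i] += delta
                    s = sse(trial, xs, ys, f)
                    if s < best:
                        params, best, improved = trial, s, True
            if not improved:
                step /= 2.0      # no neighbour improves: refine the step size
        return params, best

    # Example: fit a saturating exponential y = a * (1 - exp(-b * x)).
    f = lambda x, a, b: a * (1.0 - math.exp(-b * x))
    xs = [0.5 * i for i in range(1, 21)]
    ys = [f(x, 2.0, 0.5) for x in xs]       # noise-free synthetic data

    (a, b), err = fit(xs, ys, f, start=(1.0, 1.0))
    print(round(a, 3), round(b, 3))
    ```

    With real, noisy data the fitted parameters would not be recovered exactly, and the residual `err` provides the goodness-of-fit measure discussed in the paper.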

  7. A study of the limitations of linear theory methods as applied to sonic boom calculations

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.

    1990-01-01

    Current sonic boom minimization theories have been reviewed to emphasize the capabilities and flexibilities of the methods. Flexibility is important because it is necessary for the designer to meet optimized area constraints while reducing the impact on vehicle aerodynamic performance. Preliminary comparisons of sonic booms predicted for two Mach 3 concepts illustrate the benefits of shaping. Finally, for very simple bodies of revolution, sonic boom predictions were made using two methods - a modified linear theory method and a nonlinear method - for signature shapes which were both farfield N-waves and midfield waves. Preliminary analysis on these simple bodies verified that current modified linear theory prediction methods become inadequate for predicting midfield signatures for Mach numbers above 3. The importance of impulse in the sonic boom disturbance, and the importance of three-dimensional effects which could not be simulated with the bodies of revolution, will determine the validity of current modified linear theory methods in predicting midfield signatures at lower Mach numbers.

  8. An Alternative Derivation of the Energy Levels of the "Particle on a Ring" System

    NASA Astrophysics Data System (ADS)

    Vincent, Alan

    1996-10-01

    All acceptable wave functions must be continuous mathematical functions. This criterion limits the acceptable functions for a particle in a linear 1-dimensional box to sine functions. If, however, the linear box is bent round into a ring, acceptable wave functions are those which are continuous at the 'join'. On this model some acceptable linear functions become unacceptable for the ring and some unacceptable cosine functions become acceptable. This approach can be used to produce a straightforward derivation of the energy levels and wave functions of the particle on a ring. These simple wave mechanical systems can be used as models of linear and cyclic delocalised systems such as conjugated hydrocarbons or the benzene ring. The promotion energy of an electron can then be used to calculate the wavelength of absorption of uv light. The simple model gives results of the correct order of magnitude and shows that, as the chain length increases, the uv maximum moves to longer wavelengths, as found experimentally.
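    The promotion-energy calculation sketched above is easy to reproduce numerically. The free-electron-on-a-ring levels are E_n = n²ħ²/(2mr²) with n = 0, ±1, ±2, …; the benzene-like radius and electron count below are illustrative choices, and the crude shell-filling rule assumes closed shells.

    ```python
    # Free-electron-on-a-ring estimate of the lowest absorption wavelength of a
    # cyclic conjugated system. Electron repulsion is ignored; radius and
    # electron count are illustrative.

    HBAR = 1.054571817e-34   # J s
    M_E = 9.1093837015e-31   # kg
    H = 6.62607015e-34       # J s
    C = 2.99792458e8         # m/s

    def ring_level(n, radius):
        """E_n = n^2 hbar^2 / (2 m r^2); each level with n != 0 is doubly degenerate."""
        return (n ** 2) * HBAR ** 2 / (2.0 * M_E * radius ** 2)

    def absorption_wavelength(n_electrons, radius):
        # Crude closed-shell filling: 2 electrons in n = 0, then 4 per |n| shell.
        homo = (n_electrons - 2 + 3) // 4     # highest occupied |n|
        lumo = homo + 1
        de = ring_level(lumo, radius) - ring_level(homo, radius)
        return H * C / de

    # Benzene-like ring: 6 pi electrons, r ~ 1.39 Angstrom
    lam = absorption_wavelength(6, 1.39e-10)
    print(f"{lam * 1e9:.0f} nm")
    ```

    The result lands in the ultraviolet at roughly the same order of magnitude as benzene's observed bands, consistent with the abstract's claim, and increasing the ring size shifts the predicted maximum to longer wavelengths.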

  9. Improving Prediction Accuracy for WSN Data Reduction by Applying Multivariate Spatio-Temporal Correlation

    PubMed Central

    Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman

    2011-01-01

    This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
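    The comparison between simple (time-only) and multivariate regression can be sketched with ordinary least squares on synthetic sensor data; the variable names, coefficients and noise levels below are invented for illustration, not taken from the paper's WSN datasets.

    ```python
    import random

    # Simple vs. multiple linear regression for predicting a sensed value from
    # other correlated sensor inputs, in the spirit of the data-reduction scheme.

    def solve(A, b):
        """Gaussian elimination with partial pivoting for a small n x n system."""
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    def ols(X, y):
        """Ordinary least squares via the normal equations X^T X beta = X^T y."""
        n, p = len(X), len(X[0])
        XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
        Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
        return solve(XtX, Xty)

    def rmse(X, y, beta):
        errs = [(sum(b * v for b, v in zip(beta, row)) - t) ** 2 for row, t in zip(X, y)]
        return (sum(errs) / len(errs)) ** 0.5

    random.seed(42)
    t = list(range(100))                                   # time steps
    temp = [20 + 0.05 * ti + random.gauss(0, 0.5) for ti in t]
    light = [300 + 2 * ti + random.gauss(0, 10) for ti in t]
    humid = [80 - 0.8 * (tmp - 20) + 0.005 * (li - 300) + random.gauss(0, 0.2)
             for tmp, li in zip(temp, light)]              # depends on both sensors

    X_simple = [[1.0, ti] for ti in t]                     # humidity ~ time only
    X_multi = [[1.0, tmp, li] for tmp, li in zip(temp, light)]

    b_s = ols(X_simple, humid)
    b_m = ols(X_multi, humid)
    print(rmse(X_simple, humid, b_s), rmse(X_multi, humid, b_m))
    ```

    Because the target really depends on the other sensed inputs rather than on time alone, the multivariate fit attains the lower prediction error, mirroring the paper's conclusion.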

  10. Phase properties of elastic waves in systems constituted of adsorbed diatomic molecules on the (001) surface of a simple cubic crystal

    NASA Astrophysics Data System (ADS)

    Deymier, P. A.; Runge, K.

    2018-03-01

    A Green's function-based numerical method is developed to calculate the phase of scattered elastic waves in a harmonic model of diatomic molecules adsorbed on the (001) surface of a simple cubic crystal. The phase properties of scattered waves depend on the configuration of the molecules. The configurations of adsorbed molecules on the crystal surface such as parallel chain-like arrays coupled via kinks are used to demonstrate not only linear but also non-linear dependency of the phase on the number of kinks along the chains. Non-linear behavior arises for scattered waves with frequencies in the vicinity of a diatomic molecule resonance. In the non-linear regime, the variation in phase with the number of kinks is formulated mathematically as unitary matrix operations leading to an analogy between phase-based elastic unitary operations and quantum gates. The advantage of elastic based unitary operations is that they are easily realizable physically and measurable.

  11. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    PubMed

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
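    Because the model's objective and constraints are linear, the optimal allocation for the divisible case can be computed greedily: fund interventions in decreasing order of health benefit per dollar until the budget or each program's capacity runs out. The sketch below mirrors the abstract's program names, but every cost, benefit and capacity figure is invented for illustration.

    ```python
    # Toy linear investment model: greedy allocation by benefit per dollar,
    # which is optimal when costs and benefits are linear and funds divisible.
    # All numbers are illustrative placeholders.

    def allocate(budget, programs):
        """programs: list of (name, unit_cost, unit_benefit, max_units)."""
        plan = {}
        for name, cost, benefit, cap in sorted(
                programs, key=lambda p: p[2] / p[1], reverse=True):
            units = min(cap, budget // cost)     # fund as much as budget allows
            if units > 0:
                plan[name] = units
                budget -= units * cost
        return plan, budget

    programs = [
        # (name, cost per person-year ($), QALYs gained per person-year, capacity)
        ("CBE",  50,    0.010, 100000),   # community-based education
        ("ART",  9000,  0.500, 8000),     # antiretroviral therapy scale-up
        ("PrEP", 10000, 0.100, 5000),     # pre-exposure prophylaxis
    ]

    plan, leftover = allocate(80_000_000, programs)
    print(plan, leftover)
    ```

    With these invented figures the greedy order happens to reproduce the paper's qualitative finding: fund CBE first, then ART, with only minimal PrEP. Diseconomies of scale or subadditive benefits would make the problem nonlinear and break the simple greedy argument.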

  12. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions on genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with an excellent fit (R2 > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high accuracy of selection (0.86 and 0.89) associated with the high heritability of the mean (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and good prospects of success in the indirect selection of hybrids with high yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize breeding trials.

  13. Climatic impact of Amazon deforestation - a mechanistic model study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ning Zeng; Dickinson, R.E.; Xubin Zeng

    1996-04-01

    Recent general circulation model (GCM) experiments suggest a drastic change in the regional climate, especially the hydrological cycle, after hypothesized Amazon basinwide deforestation. To facilitate the theoretical understanding of such a change, we develop an intermediate-level model for tropical climatology, including atmosphere-land-ocean interaction. The model consists of linearized steady-state primitive equations with simplified thermodynamics. A simple hydrological cycle is also included. Special attention has been paid to land-surface processes. The model generally simulates tropical climatology and the ENSO anomaly better than many previous simple models. The climatic impact of Amazon deforestation is studied in the context of this model. Model results show a much weakened Atlantic Walker-Hadley circulation as a result of a strong positive feedback loop between the atmospheric circulation system and the hydrological cycle. The regional climate is highly sensitive to albedo change and sensitive to evapotranspiration change. The pure dynamical effect of surface roughness length on convergence is small, but the surface flow anomaly displays intriguing features. Analysis of the thermodynamic equation reveals that the balance between convective heating, adiabatic cooling, and radiation largely determines the deforestation response. Studies of the consequences of hypothetical continuous deforestation suggest that the replacement of forest by desert may be able to sustain a dry climate. Scaling analysis motivated by our modeling efforts also helps to interpret the common results of many GCM simulations. When a simple mixed-layer ocean model is coupled with the atmospheric model, the results suggest a 1°C decrease in the SST gradient across the equatorial Atlantic Ocean in response to Amazon deforestation; the magnitude depends on the coupling strength. 66 refs., 16 figs., 4 tabs.

  14. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and of soil units and sample points for single-level modeling, respectively. Additionally, three variance functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heteroscedasticity, and three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. The nested two-level model considering both heteroscedasticity, with CPP, and spatiotemporal correlation, with ARMA(1,1), showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R2 = 0.9593). Variations between soil units and sample points that may affect the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.

  15. Theoretical study of mixing in liquid clouds – Part 1: Classical concepts

    DOE PAGES

    Korolev, Alexei; Khain, Alex; Pinsky, Mark; ...

    2016-07-28

    The present study considers the final stages of in-cloud mixing in the framework of the classical concepts of homogeneous and extreme inhomogeneous mixing. Simple analytical relationships between basic microphysical parameters were obtained for homogeneous and extreme inhomogeneous mixing based on adiabatic considerations. It was demonstrated that during homogeneous mixing the functional relationships between the moments of the droplet size distribution hold only during the primary stage of mixing. Subsequent random mixing between already mixed parcels and undiluted cloud parcels breaks these relationships. However, during extreme inhomogeneous mixing the functional relationships between the microphysical parameters hold for both primary and subsequent mixing. The obtained relationships can be used to identify the type of mixing from in situ observations. The effectiveness of the developed method was demonstrated using in situ data collected in convective clouds. It was found that for this specific set of in situ measurements the interaction between cloudy and entrained environments was dominated by extreme inhomogeneous mixing.

  16. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentration of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]).
Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
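As a rough illustration of the broken-line linear (BLL) ascending fit described above, here is a minimal sketch using SciPy's `curve_fit` rather than SAS NLMIXED. It omits the random effects and heteroskedastic specification, and the ratios and G:F values are hypothetical, constructed to lie on an exact BLL curve with a 16.5% breakpoint.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line_linear(x, plateau, slope, brk):
    """Ascending broken-line: rises toward the plateau, flat past the breakpoint."""
    return plateau + slope * np.minimum(x - brk, 0.0)

# Hypothetical SID Trp:Lys ratios (%) and G:F responses on an exact BLL curve.
x = np.array([14.0, 15.0, 16.0, 16.5, 17.0, 18.0, 19.0])
y = broken_line_linear(x, 0.675, 0.030, 16.5)

# Reasonable initial values matter for nonlinear fits; the abstract's
# grid-search suggestion would try several p0 vectors and keep the best fit.
popt, _ = curve_fit(broken_line_linear, x, y, p0=[0.65, 0.02, 16.0])
plateau, slope, brk = popt
```

The estimated `brk` is the analogue of the paper's breakpoint for the nutrient requirement.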

  17. Self-Care for Nurse Leaders in Acute Care Environment Reduces Perceived Stress: A Mixed-Methods Pilot Study Merits Further Investigation.

    PubMed

    Dyess, Susan Mac Leod; Prestia, Angela S; Marquit, Doren-Elyse; Newman, David

    2018-03-01

    Acute care practice settings are stressful. Nurse leaders face the stressful demands of numerous competing priorities. Some nurse leaders experience unmanageable stress, yet success requires self-care. This article presents a repeated-measures intervention study using mixed methods to investigate a simple self-care meditation practice for nurse leaders. Themes and subthemes emerged in association with the three data collection points: at baseline (pretest), after 6 weeks, and after 12 weeks (posttest) following introduction of the simple self-care meditation practice. An analysis of variance yielded a statistically significant drop in perceived stress at 6 weeks and again at 12 weeks. Future research is merited.

  18. The role of hot spot mix in the low-foot and high-foot implosions on the NIF

    NASA Astrophysics Data System (ADS)

    Ma, T.; Patel, P. K.; Izumi, N.; Springer, P. T.; Key, M. H.; Atherton, L. J.; Barrios, M. A.; Benedetti, L. R.; Bionta, R.; Bond, E.; Bradley, D. K.; Caggiano, J.; Callahan, D. A.; Casey, D. T.; Celliers, P. M.; Cerjan, C. J.; Church, J. A.; Clark, D. S.; Dewald, E. L.; Dittrich, T. R.; Dixit, S. N.; Döppner, T.; Dylla-Spears, R.; Edgell, D. H.; Epstein, R.; Field, J.; Fittinghoff, D. N.; Frenje, J. A.; Gatu Johnson, M.; Glenn, S.; Glenzer, S. H.; Grim, G.; Guler, N.; Haan, S. W.; Hammel, B. A.; Hatarik, R.; Herrmann, H. W.; Hicks, D.; Hinkel, D. E.; Berzak Hopkins, L. F.; Hsing, W. W.; Hurricane, O. A.; Jones, O. S.; Kauffman, R.; Khan, S. F.; Kilkenny, J. D.; Kline, J. L.; Kozioziemski, B.; Kritcher, A.; Kyrala, G. A.; Landen, O. L.; Lindl, J. D.; Le Pape, S.; MacGowan, B. J.; Mackinnon, A. J.; MacPhee, A. G.; Meezan, N. B.; Merrill, F. E.; Moody, J. D.; Moses, E. I.; Nagel, S. R.; Nikroo, A.; Pak, A.; Parham, T.; Park, H.-S.; Ralph, J. E.; Regan, S. P.; Remington, B. A.; Robey, H. F.; Rosen, M. D.; Rygg, J. R.; Ross, J. S.; Salmonson, J. D.; Sater, J.; Sayre, D.; Schneider, M. B.; Shaughnessy, D.; Sio, H.; Spears, B. K.; Smalyuk, V.; Suter, L. J.; Tommasini, R.; Town, R. P. J.; Volegov, P. L.; Wan, A.; Weber, S. V.; Widmann, K.; Wilde, C. H.; Yeamans, C.; Edwards, M. J.

    2017-05-01

    Hydrodynamic mix of the ablator into the DT fuel layer and hot spot can be a critical performance limitation in inertial confinement fusion implosions. This mix results in increased radiation loss, cooling of the hot spot, and reduced neutron yield. To quantify the level of mix, we have developed a simple model that infers the level of contamination using the ratio of the measured x-ray emission to the neutron yield. The principal source for the performance limitation of the "low-foot" class of implosions appears to have been mix. Lower convergence "high-foot" implosions are found to be less susceptible to mix, allowing velocities of >380 km/s to be achieved.

  19. A Practical Model for Forecasting New Freshman Enrollment during the Application Period.

    ERIC Educational Resources Information Center

    Paulsen, Michael B.

    1989-01-01

    A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)
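The forecasting step the abstract describes can be sketched in a few lines; the application counts and the 0.4 yield rate below are hypothetical, and NumPy's `polyfit` stands in for the personal-computer regression package.

```python
import numpy as np

# Hypothetical history: applications received by March 1 vs. final fall enrollment.
apps_march = np.array([1200.0, 1350.0, 1100.0, 1500.0, 1400.0])
enrolled = np.array([480.0, 540.0, 440.0, 600.0, 560.0])

# Simple linear regression: enrollment = slope * applications + intercept.
slope, intercept = np.polyfit(apps_march, enrolled, 1)

# Updated forecast once this year's March application count is known.
forecast = slope * 1450.0 + intercept
print(round(forecast))  # 580
```

Refitting each month with the latest application count gives the updated monthly forecasts the abstract mentions.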

  20. Elimination of trait blocks from multiple trait mixed model equations with singular (Co)variance parameter matrices

    USDA-ARS?s Scientific Manuscript database

    Transformations to multiple trait mixed model equations (MME) which are intended to improve computational efficiency in best linear unbiased prediction (BLUP) and restricted maximum likelihood (REML) are described. It is shown that traits that are expected or estimated to have zero residual variance...

  1. A Bayesian Semiparametric Latent Variable Model for Mixed Responses

    ERIC Educational Resources Information Center

    Fahrmeir, Ludwig; Raach, Alexander

    2007-01-01

    In this paper we introduce a latent variable model (LVM) for mixed ordinal and continuous responses, where covariate effects on the continuous latent variables are modelled through a flexible semiparametric Gaussian regression model. We extend existing LVMs with the usual linear covariate effects by including nonparametric components for nonlinear…

  2. D.b.h./crown diameter relationships in mixed Appalachian hardwood stands

    Treesearch

    Neil I. Lamson

    1987-01-01

    Linear regression formulae for predicting crown diameter as a function of stem diameter are presented for nine species found in 50- to 80-year-old mixed hardwood stands in north-central West Virginia. Generally, crown diameter was closely related to tolerance; more tolerant species had larger crowns.

  3. A simple eccentric stirred tank mini-bioreactor: mixing characterization and mammalian cell culture experiments.

    PubMed

    Bulnes-Abundis, David; Carrillo-Cocom, Leydi M; Aráiz-Hernández, Diana; García-Ulloa, Alfonso; Granados-Pastor, Marisa; Sánchez-Arreola, Pamela B; Murugappan, Gayathree; Alvarez, Mario M

    2013-04-01

    In industrial practice, stirred tank bioreactors are the most common mammalian cell culture platform. However, research and screening protocols at the laboratory scale (i.e., 5-100 mL) rely primarily on Petri dishes, culture bottles, or Erlenmeyer flasks. There is a clear need for simple (easy to assemble, easy to use, easy to clean) cell culture mini-bioreactors for lab-scale and/or screening applications. Here, we study the mixing performance and culture adequacy of a 30 mL eccentric stirred tank mini-bioreactor. A detailed mixing characterization of the proposed bioreactor is presented. Laser-induced fluorescence (LIF) experiments and computational fluid dynamics (CFD) computations are used to identify the operational conditions required for adequate mixing. Mammalian cell culture experiments were conducted with two different cell models. The specific growth rate and the maximum cell density of Chinese hamster ovary (CHO) cell cultures grown in the mini-bioreactor were comparable to those observed for 6-well culture plates, Erlenmeyer flasks, and 1 L fully instrumented bioreactors. Human hematopoietic stem cells were successfully expanded tenfold in suspension conditions using the eccentric mini-bioreactor system. Our results demonstrate good mixing performance and suggest the practicality and adequacy of the proposed mini-bioreactor. Copyright © 2012 Wiley Periodicals, Inc.

  4. Replica exchange and expanded ensemble simulations as Gibbs sampling: simple improvements for enhanced mixing.

    PubMed

    Chodera, John D; Shirts, Michael R

    2011-11-21

    The widespread popularity of replica exchange and expanded ensemble algorithms for simulating complex molecular systems in chemistry and biophysics has generated much interest in discovering new ways to enhance the phase space mixing of these protocols in order to improve sampling of uncorrelated configurations. Here, we demonstrate how both of these classes of algorithms can be considered as special cases of Gibbs sampling within a Markov chain Monte Carlo framework. Gibbs sampling is a well-studied scheme in the field of statistical inference in which different random variables are alternately updated from conditional distributions. While the update of the conformational degrees of freedom by Metropolis Monte Carlo or molecular dynamics unavoidably generates correlated samples, we show how judicious updating of the thermodynamic state indices--corresponding to thermodynamic parameters such as temperature or alchemical coupling variables--can substantially increase mixing while still sampling from the desired distributions. We show how state update methods in common use can lead to suboptimal mixing, and present some simple, inexpensive alternatives that can increase mixing of the overall Markov chain, reducing simulation times necessary to obtain estimates of the desired precision. These improved schemes are demonstrated for several common applications, including an alchemical expanded ensemble simulation, parallel tempering, and multidimensional replica exchange umbrella sampling.
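The state-index update described above can be sketched for a one-dimensional expanded ensemble with a harmonic potential, where the partition functions are known in closed form. This is an illustrative toy under stated assumptions (potential, weights, and step sizes are mine), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
betas = np.array([1.0, 0.5, 0.25])        # inverse temperatures of the ensemble

def U(x):
    """Harmonic potential, so each temperature has a Gaussian marginal."""
    return 0.5 * x ** 2

# Exact log partition functions; in practice these weights must be estimated.
logZ = 0.5 * np.log(2.0 * np.pi / betas)

x, k = 0.0, 0
samples = []
for _ in range(20000):
    # (1) Metropolis update of the configuration at the current temperature.
    xp = x + rng.normal(0.0, 1.0)
    if rng.random() < np.exp(min(0.0, -betas[k] * (U(xp) - U(x)))):
        x = xp
    # (2) Gibbs update of the thermodynamic state index from its full
    #     conditional, p(k | x) proportional to exp(-beta_k U(x) - log Z_k).
    logw = -betas * U(x) - logZ
    w = np.exp(logw - logw.max())
    k = rng.choice(len(betas), p=w / w.sum())
    samples.append((k, x))
```

Because step (2) samples the state index exactly from its conditional rather than proposing neighbor swaps, the chain mixes over temperatures at every sweep, which is the kind of improvement the abstract advocates.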

  5. A millisecond micromixer via single-bubble-based acoustic streaming.

    PubMed

    Ahmed, Daniel; Mao, Xiaole; Shi, Jinjie; Juluri, Bala Krishna; Huang, Tony Jun

    2009-09-21

    We present ultra-fast homogeneous mixing inside a microfluidic channel via single-bubble-based acoustic streaming. The device operates by trapping an air bubble within a "horse-shoe" structure located between two laminar flows inside a microchannel. Acoustic waves excite the trapped air bubble at its resonance frequency, resulting in acoustic streaming, which disrupts the laminar flows and triggers the two fluids to mix. Due to this technique's simple design, excellent mixing performance, and fast mixing speed (a few milliseconds), our single-bubble-based acoustic micromixer may prove useful for many biochemical studies and applications.

  6. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme.

    PubMed

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probabilities of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is slightly higher. However, even here a straightforward recipe using so-called reserve factors is provided for incorporating the branches into the cyclic scheme, enabling a simple treatment of such cases as well.
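The arrow scheme itself is not reproduced in the abstract, but for comparison, the steady-state occupation probabilities and cyclic flux of a branch-free three-state scheme can be obtained numerically by solving the master equation directly; all rate constants below are arbitrary illustrative values.

```python
import numpy as np

# Rate constants k[i, j]: transition rate from state i to state j in a
# three-state cycle 0 <-> 1 <-> 2 <-> 0 (arbitrary illustrative values).
k = np.array([[0.0, 5.0, 1.0],
              [2.0, 0.0, 4.0],
              [3.0, 1.0, 0.0]])

# Master-equation generator: dP/dt = Q @ P, with Q[j, i] = k[i, j] for i != j.
Q = k.T - np.diag(k.sum(axis=1))

# Steady state: Q @ P = 0 subject to sum(P) = 1.
A = np.vstack([Q, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
P, *_ = np.linalg.lstsq(A, b, rcond=None)

# At steady state the net flux is the same through every edge of the cycle.
flux_01 = k[0, 1] * P[0] - k[1, 0] * P[1]
flux_12 = k[1, 2] * P[1] - k[2, 1] * P[2]
```

The analytical recipe in the paper yields the same quantities in closed form; the numerical solve is a convenient cross-check for any specific set of rates.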

  7. A simple recipe for setting up the flux equations of cyclic and linear reaction schemes of ion transport with a high number of states: The arrow scheme

    PubMed Central

    Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra

    2016-01-01

    The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probabilities of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is slightly higher. However, even here a straightforward recipe using so-called reserve factors is provided for incorporating the branches into the cyclic scheme, enabling a simple treatment of such cases as well. PMID:26646356

  8. A simple method for estimation of coagulation efficiency in mixed aerosols. [environmental pollution control

    NASA Technical Reports Server (NTRS)

    Dimmick, R. L.; Boyd, A.; Wolochow, H.

    1975-01-01

    Aerosols of KBr and AgNO3 were mixed, exposed to light in a glass tube and collected in the dark. About 15% of the collected material was reduced to silver upon development. Thus, two aerosols of particles that react to form a photo-reducible compound can be used to measure coagulation efficiency.

  9. An Automated Statistical Process Control Study of Inline Mixing Using Spectrophotometric Detection

    ERIC Educational Resources Information Center

    Dickey, Michael D.; Stewart, Michael D.; Willson, C. Grant

    2006-01-01

    An experiment is described, which is designed for a junior-level chemical engineering "fundamentals of measurements and data analysis" course, where students are introduced to the concept of statistical process control (SPC) through a simple inline mixing experiment. The students learn how to create and analyze control charts in an effort to…

  10. Linking diatom deposition in a deep lake with the spring temperature gradient (Tiefer See, NE Germany)

    NASA Astrophysics Data System (ADS)

    Kienel, Ulrike; Kirillin, Georgiy; Brademann, Brian; Plessen, Birgit; Brauer, Achim

    2015-04-01

    Monitoring of deep Lake Tiefer See showed a much larger deposition of diatoms following ice-out and rapid spring stratification in mid-April 2013 than following the gradual warming and stratification in mid-April 2012. Diatom deposition in 2013 versus 2012 amounted to an estimated 2.0 versus 0.15 g of silica per square meter and day. The striking difference was the two-orders-of-magnitude larger number of Stephanodiscus sp. in 2013, which was only a minor component in 2012. The monitored weather and lake conditions suggest the 2013 spring bloom was boosted by a quick succession of ice breakup, spring turnover, and stratification, leading to nutrient recycling and rapidly improved light conditions. The comparatively longer mixing in spring 2012, calculated using the lake-temperature model FLake, caused population losses that impeded bloom development. To verify this inverse relation between diatom deposition and spring mixing duration, we use the subannually laminated recent sediment record of Lake Tiefer See (AD 1924-2008), the instrumental series from the meteorological station in Schwerin, and model simulations of the spring mixing. The mixing duration was calculated as the period between the dates when a water temperature of 4°C and a mixing depth of 6 m were reached, for the period 1951-2008. To cover the full sediment record, a simple estimate of the mixing period was calculated from mean temperatures, i.e., the duration from the first 5°C day to the first run of days ≥5°C. The annual diatom deposition was calculated as the annual average µXRF counts of Si in the sediment record (AD 1924-2008), based on negligible amounts of detrital Si, low deposition of inorganic matter during winter, and a striking balance between IM deposition and the Si deposition calculated from the deposited diatom frustules.
We find support for the linear, inverse relation of diatom silica deposition with the duration of spring mixing: the modeled mixing duration explains 25% of the variability, and the temperature-based estimate explains 20%. The explained variability increases to 49% and 53%, respectively, when the period after AD 1980 is removed from the data set. The lack of diatom response during this period is likely related to the dominant influence of nutrients from intensive manuring and drainage in the catchment on algal development at that time.

  11. Mixed-mode oscillations and chaos in a prey-predator system with dormancy of predators.

    PubMed

    Kuwamura, Masataka; Chiba, Hayato

    2009-12-01

    It is shown that the dormancy of predators induces mixed-mode oscillations and chaos in the population dynamics of a prey-predator system under certain conditions. The mixed-mode oscillations and chaos are shown to bifurcate from a coexisting equilibrium by means of the theory of fast-slow systems. These results may help to find experimental conditions under which one can demonstrate chaotic population dynamics in a simple phytoplankton-zooplankton (-resting eggs) community in a microcosm with a short duration.

  12. One-pot in situ mixed film formation by azo coupling and diazonium salt electrografting.

    PubMed

    Esnault, Charles; Delorme, Nicolas; Louarn, Guy; Pilard, Jean-François

    2013-06-24

    So simple: The in situ synthesis of an aryldiazonium salt and an azo-aryldiazonium salt by azo coupling from sulfanilic acid and aniline is reported. Formation of a mixed organic layer is monitored by cyclic voltammetry and atomic force microscopy. A compact mixed layer is obtained with a global roughness of 0.4 nm and 10-15 % vertical extension in the range 1.5-6 nm. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Mixing of ultrasonic Lamb waves in thin plates with quadratic nonlinearity.

    PubMed

    Li, Feilong; Zhao, Youxuan; Cao, Peng; Hu, Ning

    2018-07-01

    This paper investigates the propagation of Lamb waves in thin plates with quadratic nonlinearity by a one-way mixing method using numerical simulations. It is shown that an A0-mode wave can be generated by a pair of S0- and A0-mode waves only when the mixing condition is satisfied, and the mixed wave signals are capable of locating the damage zone. Additionally, the acoustic nonlinearity parameter increases linearly with the quadratic nonlinearity but monotonically with the size of the mixing zone. Furthermore, because of frequency deviation, the waveform of the mixed wave changes significantly from a regular diamond shape to toneburst trains. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. A simple theory of motor protein kinetics and energetics. II.

    PubMed

    Qian, H

    2000-01-10

    A three-state stochastic model of motor proteins [Qian, Biophys. Chem. 67 (1997) pp. 263-267] is further developed to illustrate the relationship between the external load on an individual motor protein in aqueous solution at various ATP concentrations and its steady-state velocity. A wide variety of dynamic motor behaviors is obtained from this simple model. For the particular case of free-load translocation being the most unfavorable step within the hydrolysis cycle, the load-velocity curve is quasi-linear, V/Vmax = (c^(F/Fmax) - c)/(1 - c), in contrast to the hyperbolic relationship proposed by A.V. Hill for macroscopic muscle. Significant deviation from linearity is expected when the velocity is less than 10% of its maximal (free-load) value, a situation under which the processivity of the motor diminishes and experimental observations are less certain. We then investigate the dependence of the load-velocity curve on ATP (ADP) concentration. It is shown that the free-load Vmax exhibits Michaelis-Menten-like behavior, and the isometric Fmax increases linearly with ln([ATP]/[ADP]). However, the quasi-linear region is independent of the ATP concentration, yielding an apparently ATP-independent maximal force below the true isometric force. Finally, the heat production as a function of ATP concentration and external load is calculated. In simple terms and solved with elementary algebra, the present model provides an integrated picture of the biochemical kinetics and mechanical energetics of motor proteins.
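The quasi-linear load-velocity relation quoted above can be evaluated directly; a minimal sketch, with arbitrary Fmax and c values.

```python
def relative_velocity(F, Fmax, c):
    """Quasi-linear load-velocity relation V/Vmax = (c**(F/Fmax) - c) / (1 - c)."""
    return (c ** (F / Fmax) - c) / (1.0 - c)

# Sanity checks: free load gives V = Vmax, isometric load gives V = 0.
print(relative_velocity(0.0, 5.0, 0.1))  # 1.0
print(relative_velocity(5.0, 5.0, 0.1))  # 0.0
```

Plotting this for intermediate loads shows the nearly linear decline the abstract contrasts with Hill's hyperbola.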

  15. Amplitude Frequency Response Measurement: A Simple Technique

    ERIC Educational Resources Information Center

    Satish, L.; Vora, S. C.

    2010-01-01

    A simple method is described to combine a modern function generator and a digital oscilloscope to configure a setup that can directly measure the amplitude frequency response of a system. This is achieved by synchronously triggering both instruments, with the function generator operated in the "Linear-Sweep" frequency mode, while the oscilloscope…

  16. Radio Propagation Prediction Software for Complex Mixed Path Physical Channels

    DTIC Science & Technology

    2006-08-14

    4.4.6. Applied Linear Regression Analysis in the Frequency Range 1-50 MHz; 4.4.7. Projected Scaling to... In order to construct a comprehensive numerical algorithm capable of

  17. Mixed Linear/Square-Root Encoded Single Slope Ramp Provides a Fast, Low Noise Analog to Digital Converter with Very High Linearity for Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Cunningham, Thomas J. (Inventor); Newton, Kenneth W. (Inventor)

    2014-01-01

    An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.

  18. Iterative methods for mixed finite element equations

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.; Nagtegaal, J. C.; Zienkiewicz, O. C.

    1985-01-01

    Iterative strategies for the solution of the indefinite systems of equations arising from the mixed finite element method are investigated in this paper with application to linear and nonlinear problems in solid and structural mechanics. The augmented Hu-Washizu form is derived, which is then utilized to construct a family of iterative algorithms using the displacement method as the preconditioner. Two types of iterative algorithms are implemented: constant-metric iterations, which do not involve updating the preconditioner, and variable-metric iterations, in which the inverse of the preconditioning matrix is updated. A series of numerical experiments is conducted to evaluate the numerical performance with application to linear and nonlinear model problems.
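A minimal sketch of the idea of reusing a "displacement" solve as the preconditioner, here as a classical Uzawa iteration on a small random saddle-point system. This is a generic illustration, not the paper's augmented Hu-Washizu algorithm; all matrices are synthetic.

```python
import numpy as np

# Indefinite saddle-point system from a mixed formulation:
#   [A  B^T] [u] = [f]
#   [B  0  ] [p]   [g]
rng = np.random.default_rng(4)
n, m = 8, 3
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)               # SPD "displacement" block
B = rng.normal(size=(m, n))
f, g = rng.normal(size=n), rng.normal(size=m)

# Uzawa iteration: each sweep reuses the displacement solve (the
# preconditioner) and updates the constraint variable p.
S = B @ np.linalg.solve(A, B.T)           # small Schur complement, m x m
lam = np.linalg.eigvalsh(S)
tau = 2.0 / (lam[0] + lam[-1])            # optimal constant step size
p = np.zeros(m)
for _ in range(2000):
    u = np.linalg.solve(A, f - B.T @ p)   # displacement solve
    p = p + tau * (B @ u - g)             # constraint-residual update
```

A constant `tau` corresponds to the constant-metric variant; updating the step or metric from iterate to iterate corresponds to the variable-metric variant.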

  19. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    PubMed

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.

  20. Heavy neutrino mixing and single production at linear collider

    NASA Astrophysics Data System (ADS)

    Gluza, J.; Maalampi, J.; Raidal, M.; Zrałek, M.

    1997-02-01

    We study the single production of heavy neutrinos via the processes e- e+ -> νN and e- γ -> W- N at future linear colliders. As a basis for our considerations we take a wide class of models, both with vanishing and non-vanishing left-handed Majorana neutrino mass matrix mL. We perform a model-independent analysis of the existing experimental data and find connections between the characteristics of heavy neutrinos (masses, mixings, CP eigenvalues) and the mL parameters. We show that, given the present experimental constraints, heavy neutrino masses almost up to the collision energy can be tested in future experiments.

  1. Identifying pleiotropic genes in genome-wide association studies from related subjects using the linear mixed model and Fisher combination function.

    PubMed

    Yang, James J; Williams, L Keoki; Buu, Anne

    2017-08-24

    A multivariate genome-wide association test is proposed for analyzing data on multivariate quantitative phenotypes collected from related subjects. The proposed method is a two-step approach. The first step models the association between the genotype and marginal phenotype using a linear mixed model. The second step uses the correlation between residuals of the linear mixed model to estimate the null distribution of the Fisher combination test statistic. The simulation results show that the proposed method controls the type I error rate and is more powerful than the marginal tests across different population structures (admixed or non-admixed) and relatedness (related or independent). The statistical analysis on the database of the Study of Addiction: Genetics and Environment (SAGE) demonstrates that applying the multivariate association test may facilitate identification of the pleiotropic genes contributing to the risk for alcohol dependence commonly expressed by four correlated phenotypes. This study proposes a multivariate method for identifying pleiotropic genes while adjusting for cryptic relatedness and population structure between subjects. The two-step approach is not only powerful but also computationally efficient even when the number of subjects and the number of phenotypes are both very large.
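The second step, calibrating the null of the Fisher combination statistic using residual correlation, can be sketched with a Brown-style scaled-chi-square approximation. Assumptions: the paper may estimate the null differently, and the covariance polynomial below is the commonly quoted fit for non-negative correlations.

```python
import numpy as np
from scipy import stats

def brown_fisher_pvalue(pvals, resid_corr):
    """Fisher combination T = -2 * sum(log p_i), with a scaled-chi-square
    (Brown-style) null accounting for correlation between the tests."""
    pvals = np.asarray(pvals, dtype=float)
    k = len(pvals)
    T = -2.0 * np.sum(np.log(pvals))
    # Approximate cov(-2 log p_i, -2 log p_j) from the residual correlation
    # using the commonly quoted polynomial fit (an assumption here).
    r = np.asarray(resid_corr)[np.triu_indices(k, 1)]
    cov = 3.263 * r + 0.710 * r ** 2 + 0.027 * r ** 3
    mean, var = 2.0 * k, 4.0 * k + 2.0 * np.sum(cov)
    scale, dof = var / (2.0 * mean), 2.0 * mean ** 2 / var
    return stats.chi2.sf(T / scale, dof)
```

With uncorrelated residuals this reduces to the ordinary Fisher chi-square test on 2k degrees of freedom; with positive correlation the combined p-value is appropriately less optimistic.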

  2. Small area estimation for semicontinuous data.

    PubMed

    Chandra, Hukum; Chambers, Ray

    2016-03-01

    Survey data often contain measurements for variables that are semicontinuous in nature, i.e. they either take a single fixed value (we assume this is zero) or they have a continuous, often skewed, distribution on the positive real line. Standard methods for small area estimation (SAE) based on the use of linear mixed models can be inefficient for such variables. We discuss SAE techniques for semicontinuous variables under a two part random effects model that allows for the presence of excess zeros as well as the skewed nature of the nonzero values of the response variable. In particular, we first model the excess zeros via a generalized linear mixed model fitted to the probability of a nonzero, i.e. strictly positive, value being observed, and then model the response, given that it is strictly positive, using a linear mixed model fitted on the logarithmic scale. Empirical results suggest that the proposed method leads to efficient small area estimates for semicontinuous data of this type. We also propose a parametric bootstrap method to estimate the MSE of the proposed small area estimator. These bootstrap estimates of the MSE are compared to the true MSE in a simulation study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
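The two-part idea can be sketched without the small-area random effects: a probability model for a nonzero response combined with a log-scale model for the positive part. The data and parameter values below are synthetic; the real method fits mixed models for both parts.

```python
import numpy as np

# Synthetic semicontinuous responses: a point mass at zero plus
# skewed (lognormal) positive values.
rng = np.random.default_rng(2)
n = 1000
is_pos = rng.random(n) < 0.3
y = np.where(is_pos, rng.lognormal(mean=1.0, sigma=0.5, size=n), 0.0)

# Part 1: probability of a strictly positive response.
p_hat = (y > 0).mean()

# Part 2: linear model on the log scale for the positive part.
logy = np.log(y[y > 0])
mu, s2 = logy.mean(), logy.var(ddof=1)

# Combined estimate of E[y], using the lognormal back-transform.
mean_hat = p_hat * np.exp(mu + s2 / 2.0)
```

Splitting the model this way handles the excess zeros and the skewness separately, which is exactly why a single linear mixed model on the raw response is inefficient here.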

  3. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    PubMed

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models for analyzing such complex longitudinal data are based on mean regression, which fails to provide efficient estimates in the presence of outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying the benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distributions. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  4. Spark formation as a moving boundary process

    NASA Astrophysics Data System (ADS)

    Ebert, Ute

    2006-03-01

    The growth process of spark channels has recently become accessible through complementary methods. First, I will review experiments with nanosecond photographic resolution and with fast and well-defined power supplies that appropriately resolve the dynamics of electric breakdown [1]. Second, I will discuss the elementary physical processes as well as present computations of spark growth and branching with adaptive grid refinement [2]. These computations resolve three well-separated scales of the process that emerge dynamically. Third, this scale separation motivates a hierarchy of models on different length scales. In particular, I will discuss a moving boundary approximation for the ionization fronts that generate the conducting channel. The resulting moving boundary problem shows strong similarities with classical viscous fingering. For viscous fingering, it is known that the simplest model forms unphysical cusps within finite time, which are suppressed by a regularizing condition on the moving boundary. For ionization fronts, we derive a new condition on the moving boundary of mixed Dirichlet-Neumann type (φ = ε ∂_n φ) that indeed regularizes all structures investigated so far. In particular, we present compact analytical solutions with regularization, both for uniformly translating shapes and for their linear perturbations [3]. These solutions are so simple that they may acquire a paradigmatic role in the future. Within linear perturbation theory, they explicitly show the convective stabilization of a curved front, while planar fronts are linearly unstable against perturbations of arbitrary wavelength. [1] T.M.P. Briels, E.M. van Veldhuizen, U. Ebert, TU Eindhoven. [2] C. Montijn, J. Wackers, W. Hundsdorfer, U. Ebert, CWI Amsterdam. [3] B. Meulenbroek, U. Ebert, L. Schäfer, Phys. Rev. Lett. 95, 195004 (2005).

  5. Dynamical relationship between wind speed magnitude and meridional temperature contrast: Application to an interannual oscillation in Venusian middle atmosphere GCM

    NASA Astrophysics Data System (ADS)

    Yamamoto, Masaru; Takahashi, Masaaki

    2018-03-01

    We derive simple dynamical relationships between wind speed magnitude and meridional temperature contrast. The relationship explains scatter plot distributions of time series of three variables (maximum zonal wind speed UMAX, meridional wind speed VMAX, and equator-pole temperature contrast dTMAX), which are obtained from a Venus general circulation model with equatorial Kelvin-wave forcing. Along with VMAX and dTMAX, UMAX likely increases with the phase velocity and amplitude of a forced wave. In the scatter diagram of UMAX versus dTMAX, points are plotted along a linear equation obtained from a thermal-wind relationship in the cloud layer. In the scatter diagram of VMAX versus UMAX, the apparent slope is somewhat steeper in the high UMAX regime than in the low UMAX regime. The scatter plot distributions are qualitatively consistent with a quadratic equation obtained from a diagnostic equation of the stream function above the cloud top. The plotted points in the scatter diagrams form a linear cluster for weak wave forcing, whereas they form a small cluster for strong wave forcing. An interannual oscillation of the general circulation forming the linear cluster in the scatter diagram is apparent in the experiment of weak 5.5-day wave forcing. Although a pair of equatorial Kelvin and high-latitude Rossby waves with the same period (Kelvin-Rossby wave) produces equatorward heat and momentum fluxes in the region below 60 km, the equatorial wave does not contribute to the long-period oscillation. The interannual fluctuation of the high-latitude jet core leading to the time variation of UMAX is produced by growth and decay of a polar mixed Rossby-gravity wave with a 14-day period.

  6. Resolving Mixed Algal Species in Hyperspectral Images

    PubMed Central

    Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.

    2014-01-01

    We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral responses to volumetric changes in single algal cultures and in combinations of algal mixtures with known ratios were tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on abundances that produced the lowest root mean square error. Percent prediction error was computed as the difference between actual percent volumetric content and abundances at minimum RMS error. The best prediction errors were 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
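
    A sketch of the constrained unmixing step described above, assuming two hypothetical endmember spectra: non-negativity is enforced by NNLS, and the sum-to-one constraint by a heavily weighted auxiliary row.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(endmembers, spectrum, weight=1e3):
    """Abundance estimation by constrained least squares.

    Non-negativity comes from NNLS; the sum-to-one constraint is
    enforced (softly) by appending a heavily weighted row of ones.
    endmembers: (bands, n_components); spectrum: (bands,).
    """
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(spectrum, weight)
    abundances, rms = nnls(A, b)
    return abundances, rms

# Hypothetical two-endmember example: mix 70% / 30% of two spectra.
rng = np.random.default_rng(1)
e1 = rng.uniform(0.1, 1.0, 50)   # "pure culture 1" spectrum (assumed)
e2 = rng.uniform(0.1, 1.0, 50)   # "pure culture 2" spectrum (assumed)
mixed = 0.7 * e1 + 0.3 * e2
a, _ = unmix(np.column_stack([e1, e2]), mixed)
```

    For real data one would select, per measured spectrum, the abundance vector that minimizes the RMS reconstruction error, as the authors do.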

  7. Scaling Laws of Nonlinear Rayleigh-Taylor and Richtmyer-Meshkov Instabilities in Two and Three Dimensions (IFSA 1999)

    NASA Astrophysics Data System (ADS)

    Shvarts, D.; Oron, D.; Kartoon, D.; Rikanati, A.; Sadot, O.; Srebro, Y.; Yedvab, Y.; Ofer, D.; Levin, A.; Sarid, E.; Ben-Dor, G.; Erez, L.; Erez, G.; Yosef-Hai, A.; Alon, U.; Arazi, L.

    2016-10-01

    The late-time nonlinear evolution of the Rayleigh-Taylor (RT) and Richtmyer-Meshkov (RM) instabilities for random initial perturbations is investigated using a statistical mechanics model based on single-mode and bubble-competition physics at all Atwood numbers (A) and full numerical simulations in two and three dimensions. It is shown that the RT mixing zone bubble and spike fronts evolve as h ~ α·A·g·t^2 with different values of α for the bubble and spike fronts. The RM mixing zone fronts evolve as h ~ t^θ with different values of θ for bubbles and spikes. A similar analysis yields linear growth in time of the Kelvin-Helmholtz mixing zone. The dependence of the RT and RM scaling parameters on A and the dimensionality will be discussed. The 3D predictions are found to be in good agreement with recent Linear Electric Motor (LEM) experiments.
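
    The front-growth laws quoted above can be evaluated directly; the coefficient values below are placeholders for illustration, not the paper's fitted values:

```python
import numpy as np

def h_rt(t, A, alpha, g=9.8):
    """Rayleigh-Taylor mixing-zone front: h ~ alpha * A * g * t^2."""
    return alpha * A * g * t**2

def h_rm(t, theta, c=1.0):
    """Richtmyer-Meshkov mixing-zone front: h ~ c * t^theta."""
    return c * t**theta

t = np.linspace(0.0, 1.0, 11)
bubble = h_rt(t, A=0.5, alpha=0.05)  # bubble front (alpha assumed)
spike = h_rt(t, A=0.5, alpha=0.07)   # spike front (larger alpha assumed)
rm_front = h_rm(t, theta=0.4)        # RM power law (theta assumed)
```

    The distinct alpha (and theta) values for bubbles versus spikes are the quantities whose A- and dimensionality-dependence the paper discusses.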

  8. Linear mixing model applied to AVHRR LAC data

    NASA Technical Reports Server (NTRS)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the Western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region was used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
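
    A minimal sketch of generating fraction images by per-pixel linear unmixing, with made-up endmember spectra standing in for the vegetation, soil and shade signatures (unconstrained least squares here, for simplicity):

```python
import numpy as np

def fraction_images(cube, endmembers):
    """Per-pixel linear unmixing of a multispectral cube.

    cube: (rows, cols, bands); endmembers: (bands, n_components),
    e.g. one column each for vegetation, soil and shade. Returns
    fraction images of shape (rows, cols, n_components).
    """
    r, c, b = cube.shape
    pixels = cube.reshape(-1, b).T               # (bands, pixels)
    fracs, *_ = np.linalg.lstsq(endmembers, pixels, rcond=None)
    return fracs.T.reshape(r, c, -1)

# Hypothetical 3-band endmember matrix (columns: veg, soil, shade).
E = np.array([[0.05, 0.30, 0.02],
              [0.45, 0.35, 0.03],
              [0.30, 0.25, 0.04]])
true = np.array([0.6, 0.3, 0.1])     # known fractions for a test pixel
cube = np.tile(E @ true, (4, 4, 1))  # uniform 4x4 synthetic scene
F = fraction_images(cube, E)
```

    In practice the endmember spectra would be estimated from higher-resolution reference data, as the authors do with Landsat TM.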

  9. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  10. Transportable Maps Software. Volume I.

    DTIC Science & Technology

    1982-07-01

    being collected at the beginning or end of the routine. This allows the interaction to be followed sequentially through its steps by anyone reading the...flow is either simple sequential , simple conditional (the equivalent of ’if-then-else’), simple iteration (’DO-loop’), or the non-linear recursion...input raster images to be in the form of sequential binary files with a SEGMENTED record type. The advantage of this form is that large logical records

  11. Simple, explicitly time-dependent, and regular solutions of the linearized vacuum Einstein equations in Bondi-Sachs coordinates

    NASA Astrophysics Data System (ADS)

    Mädler, Thomas

    2013-05-01

    Perturbations of the linearized vacuum Einstein equations in the Bondi-Sachs formulation of general relativity can be derived from a single master function with spin weight two, which is related to the Weyl scalar Ψ0, and which is determined by a simple wave equation. By utilizing a standard spin representation of tensors on a sphere and two different approaches to solve the master equation, we are able to determine two simple and explicitly time-dependent solutions. Both solutions, of which one is asymptotically flat, comply with the regularity conditions at the vertex of the null cone. For the asymptotically flat solution we calculate the corresponding linearized perturbations, describing all multipoles of spin-2 waves that propagate on a Minkowskian background spacetime. We also analyze the asymptotic behavior of this solution at null infinity using a Penrose compactification and calculate the Weyl scalar Ψ4. Because of its simplicity, the asymptotically flat solution presented here is ideally suited for test bed calculations in the Bondi-Sachs formulation of numerical relativity. It may be considered as a sibling of the Bergmann-Sachs or Teukolsky-Rinne solutions, on spacelike hypersurfaces, for a metric adapted to null hypersurfaces.

  12. Dynamical heterogeneities and mechanical non-linearities: Modeling the onset of plasticity in polymer in the glass transition.

    PubMed

    Masurel, R J; Gelineau, P; Lequeux, F; Cantournet, S; Montes, H

    2017-12-27

    In this paper we focus on the role of dynamical heterogeneities in the non-linear response of polymers in the glass transition domain. We start from a simple coarse-grained model that assumes a random distribution of the initial local relaxation times and that quantitatively describes the linear viscoelasticity of a polymer in the glass transition regime. We extend this model to non-linear mechanics assuming a local Eyring stress dependence of the relaxation times. Implementing the model in a finite element mechanics code, we derive the mechanical properties and the local mechanical fields at the beginning of the non-linear regime. The model predicts a narrowing of the distribution of relaxation times and the storage of a part of the mechanical energy (internal stress) transferred to the material during stretching in this temperature range. We show that the stress field is not spatially correlated under and after loading and follows a Gaussian distribution. In addition, the strain field exhibits shear bands, but the strain distribution is narrow. Hence, most of the mechanical quantities can be calculated analytically, in a very good approximation, with the simple assumption that the strain rate is constant.

  13. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
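
    The core point, that naive standard errors understate uncertainty when observations are clustered within subjects, can be demonstrated with a short simulation; the variance components below are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_per = 20, 50

# Clustered data: subject-level random intercepts (sd = 2) plus
# observation-level noise (sd = 1), giving a high intra-class
# correlation, ICC = 4 / (4 + 1) = 0.8.
subj_effect = rng.normal(0, 2, n_subjects)
y = subj_effect[:, None] + rng.normal(0, 1, (n_subjects, n_per))

# Naive SE of the grand mean treats all 1000 observations as independent.
naive_se = y.std(ddof=1) / np.sqrt(y.size)

# Cluster-aware SE: subjects, not observations, are the effective unit.
cluster_means = y.mean(axis=1)
cluster_se = cluster_means.std(ddof=1) / np.sqrt(n_subjects)
```

    A full mixed-model fit (e.g. the SPSS MIXED procedure discussed here) estimates the variance components rather than assuming them, but the direction of the bias is the same: the naive standard error is far too small.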

  14. Linear aerospike engine study. [for reusable launch vehicles

    NASA Technical Reports Server (NTRS)

    Diem, H. G.; Kirby, F. M.

    1977-01-01

    Parametric data on split-combustor linear engine propulsion systems are presented for use in mixed-mode single-stage-to-orbit (SSTO) vehicle studies. Preliminary design data for two selected engine systems are included. The split combustor was investigated for mixed-mode operations with oxygen/hydrogen propellants used in the inner combustor in Mode 2, and in conjunction with either oxygen/RP-1, oxygen/RJ-5, O2/CH4, or O2/H2 propellants in the outer combustor for Mode 1. Both gas generator and staged combustion power cycles were analyzed for providing power to the turbopumps of the inner and outer combustors. Numerous cooling circuits and cooling fluids (propellants) were analyzed and hydrogen was selected as the preferred coolant for both combustors and the linear aerospike nozzle. The maximum operating chamber pressure was determined to be limited by the availability of hydrogen coolant pressure drop in the coolant circuit.

  15. Rapid magnetic microfluidic mixer utilizing AC electromagnetic field.

    PubMed

    Wen, Chih-Yung; Yeh, Cheng-Peng; Tsai, Chien-Hsiung; Fu, Lung-Ming

    2009-12-01

    This paper presents a novel simple micromixer based on stable water suspensions of magnetic nanoparticles (i.e. ferrofluids). The micromixer chip is built using standard microfabrication and simple soft lithography, and the design can be incorporated as a subsystem into any chemical microreactor or a miniaturized biological sensor. An electromagnet driven by an AC power source is used to induce transient interactive flows between a ferrofluid and Rhodamine B. The alternating magnetic field causes the ferrofluid to expand significantly and uniformly toward the Rhodamine B, producing a great number of extremely fine fingering structures on the interface in the upstream and downstream regions of the microchannel. These pronounced fingering patterns, which have not been observed with other active mixing methods utilizing only magnetic force, increase the mixing interfacial length dramatically. Along with the dominant diffusion effects occurring around the circumferential regions of the fine finger structures, the mixing efficiency increases significantly. The miscible fingering instabilities are observed and applied in microfluidics for the first time. This work was carried out with a view to developing functionalized ferrofluids that can be used as sensitive pathogen detectors, and the present experimental results demonstrate that the proposed micromixer has excellent mixing capabilities. The mixing efficiency can be as high as 95% within 2.0 s and a distance of 3.0 mm from the inlet of the mixing channel, when the applied peak magnetic field is higher than 29.2 Oe and the frequency ranges from 45 to 300 Hz.

  16. The role of hot spot mix in the low-foot and high-foot implosions on the NIF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, T.; Patel, P. K.; Izumi, N.

    Hydrodynamic mix of the ablator into the DT fuel layer and hot spot can be a critical performance limitation in inertial confinement fusion implosions. This mix results in increased radiation loss, cooling of the hot spot, and reduced neutron yield. To quantify the level of mix, we have developed a simple model that infers the level of contamination using the ratio of the measured x-ray emission to the neutron yield. The principal source for the performance limitation of the “low-foot” class of implosions appears to have been mix. As a result, lower convergence “high-foot” implosions are found to be less susceptible to mix, allowing velocities of >380 km/s to be achieved.

  17. The role of hot spot mix in the low-foot and high-foot implosions on the NIF

    DOE PAGES

    Ma, T.; Patel, P. K.; Izumi, N.; ...

    2017-05-18

    Hydrodynamic mix of the ablator into the DT fuel layer and hot spot can be a critical performance limitation in inertial confinement fusion implosions. This mix results in increased radiation loss, cooling of the hot spot, and reduced neutron yield. To quantify the level of mix, we have developed a simple model that infers the level of contamination using the ratio of the measured x-ray emission to the neutron yield. The principal source for the performance limitation of the “low-foot” class of implosions appears to have been mix. As a result, lower convergence “high-foot” implosions are found to be less susceptible to mix, allowing velocities of >380 km/s to be achieved.

  18. Application of the urban mixing-depth concept to air pollution problems

    Treesearch

    Peter W. Summers

    1977-01-01

    A simple urban mixing-depth model is used to develop an indicator of downtown pollution concentrations based on emission strength, rural temperature lapse rate, wind speed, city heat input, and city size. It is shown that the mean annual downtown suspended particulate levels in Canadian cities are proportional to the fifth root of the population. The implications of...

  19. A Colorful Mixing Experiment in a Stirred Tank Using Non-Newtonian Blue Maize Flour Suspensions

    ERIC Educational Resources Information Center

    Trujillo-de Santiago, Grissel; Rojas-de Gante, Cecilia; García-Lara, Silverio; Ballescá-Estrada, Adriana; Álvarez, Mario Moisés

    2014-01-01

    A simple experiment designed to study mixing of a material of complex rheology in a stirred tank is described. Non-Newtonian suspensions of blue maize flour that naturally contain anthocyanins have been chosen as a model fluid. These anthocyanins act as a native, wide spectrum pH indicator exhibiting greenish colors in alkaline environments, blue…

  20. A Simple Theory to Predict Small Changes in Volume and Refractivity During Mixing of a Two-Component Liquid System.

    ERIC Educational Resources Information Center

    Aminabhavi, Tejraj M.

    1983-01-01

    Discusses a set of relations (addressing changes in volume and refractivity) for use in the study of binary systems. Suggests including such an experiment in undergraduate physical chemistry courses (measuring density/refractive index of pure compounds and their mixtures) to predict even small changes occurring during mixing process. (Author/JN)

  1. Magnetically Actuated Cilia for Microfluidic Manipulation

    NASA Astrophysics Data System (ADS)

    Hanasoge, Srinivas; Owen, Drew; Ballard, Matt; Hesketh, Peter J.; Alexeev, Alexander; Woodruff School of Mechanical Engineering Collaboration; Petit InstituteBioengineering; Biosciences Collaboration

    2015-11-01

    We demonstrate magnetic micro-cilia based microfluidic mixing and capture techniques. For this, we use simple, easy-to-fabricate, high-aspect-ratio cilia, which are actuated magnetically. These micro-features are fabricated by evaporating NiFe alloy at room temperature onto patterned photoresist. The evaporated alloy curls upwards when the seed layer is removed to release the cilia, thus making a free-standing `C'-shaped magnetic microstructure. This is actuated using an external electromagnet or a rotating magnet. The artificial cilia can be actuated at up to 20 Hz. We demonstrate the active mixing these cilia can produce in the microchannel. Also, we demonstrate the capture of target species in a sample using these fast oscillating cilia. The surface of the cilia is functionalized with streptavidin, which binds biotin-labelled fluorescent microspheres to mimic the capture of bacteria. We show very high capture efficiencies with these methods. These simple-to-fabricate micro-cilia can easily be incorporated into many microfluidic systems which require high mixing and capture efficiencies.

  2. A novel algorithm for laser self-mixing sensors used with the Kalman filter to measure displacement

    NASA Astrophysics Data System (ADS)

    Sun, Hui; Liu, Ji-Gou

    2018-07-01

    This paper proposes a simple and effective method for estimating the feedback level factor C in a self-mixing interferometric sensor. It is used with a Kalman filter to retrieve the displacement. Without the complicated and onerous calculation process of the general C estimation method, a final equation is obtained. Thus, the estimation of C only involves a few simple calculations. It successfully retrieves the sinusoidal and aleatory displacement by means of simulated self-mixing signals in both weak and moderate feedback regimes. To deal with the errors resulting from noise and estimate bias of C and to further improve the retrieval precision, a Kalman filter is employed following the general phase unwrapping method. The simulation and experiment results show that the retrieved displacement using the C obtained with the proposed method is comparable to the joint estimation of C and α. Besides, the Kalman filter can significantly decrease measurement errors, especially the error caused by incorrectly locating the peak and valley positions of the signal.
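
    Independent of the C-estimation step, the Kalman-filter smoothing stage can be sketched with a minimal scalar random-walk filter; the signal scale, noise level and sinusoidal displacement below are assumptions for illustration, not the paper's experimental values:

```python
import numpy as np

def kalman_smooth(z, q, r):
    """Minimal scalar Kalman filter with a random-walk state model.

    z: noisy displacement samples; q: process noise variance;
    r: measurement noise variance.
    """
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                # predict
        k = p / (p + r)          # Kalman gain
        x = x + k * (zi - x)     # update with measurement zi
        p = (1 - k) * p
        out[i] = x
    return out

rng = np.random.default_rng(7)
t = np.linspace(0, 1, 2000)
true = 2e-6 * np.sin(2 * np.pi * 5 * t)     # sinusoidal displacement (m)
noisy = true + rng.normal(0, 2e-7, t.size)  # additive measurement noise
est = kalman_smooth(noisy, q=1e-14, r=(2e-7) ** 2)
```

    The filtered trace has a visibly lower error than the raw measurement, which is the role the Kalman filter plays after the phase-unwrapping step in the paper.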

  3. On the analysis of photo-electron spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, C.-Z., E-mail: gao@irsamc.ups-tlse.fr; CNRS, LPT; Dinh, P.M.

    2015-09-15

    We analyze Photo-Electron Spectra (PES) for a variety of excitation mechanisms from a simple mono-frequency laser pulse to involved combinations of pulses as used, e.g., in attosecond experiments. In the case of simple pulses, the peaks in PES reflect the occupied single-particle levels in combination with the given laser frequency. This usual, simple rule may badly fail in the case of excitation pulses with mixed frequencies and if resonant modes of the system are significantly excited. We thus develop an extension of the usual rule to cover all possible excitation scenarios, including mixed frequencies in the attosecond regime. We find that the spectral distributions of dipole, monopole and quadrupole power for the given excitation, taken together and properly shifted by the single-particle energies, provide a pertinent picture of the PES in all situations. This leads to the derivation of a generalized relation allowing one to understand photo-electron yields even in complex experimental setups.

  4. A simple reaction-rate model for turbulent diffusion flames

    NASA Technical Reports Server (NTRS)

    Bangert, L. H.

    1975-01-01

    A simple reaction rate model is proposed for turbulent diffusion flames in which the reaction rate is proportional to the turbulence mixing rate. The reaction rate is also dependent on the mean mass fraction and the mean square fluctuation of mass fraction of each reactant. Calculations are compared with experimental data and are generally successful in predicting the measured quantities.

  5. Chromium isotope variation along a contaminated groundwater plume: a coupled Cr(VI)- reduction, advective mixing perspective

    NASA Astrophysics Data System (ADS)

    Bullen, T.; Izbicki, J.

    2007-12-01

    Chromium (Cr) is a common contaminant in groundwater, used in electroplating, leather tanning, wood preservation, and as an anti-corrosion agent. Cr occurs in two oxidation states in groundwater: Cr(VI) is highly soluble and mobile, and is a carcinogen; Cr(III) is generally insoluble, immobile and less toxic than Cr(VI). Reduction of Cr(VI) to Cr(III) is thus a central issue in approaches to Cr(VI) contaminant remediation in aquifers. Aqueous Cr(VI) occurs mainly as the chromate (CrO4^2-) and bichromate (HCrO4^-) oxyanions, while Cr(III) is mainly "hexaquo" Cr(H2O)6^3+. Cr has four naturally-occurring stable isotopes: 50Cr, 52Cr, 53Cr and 54Cr. When Cr(VI) is reduced to Cr(III), the strong Cr-O bond must be broken, resulting in isotopic selection. Ellis et al. (2002) demonstrated that for reduction of Cr(VI) on magnetite and in natural sediment slurries, the change of isotopic composition of the remnant Cr(VI) pool was described by a Rayleigh fractionation model having fractionation factor ɛCr(VI)-Cr(III) = 3.4‰. We attempted to use Cr isotopes as a monitor of Cr(VI) reduction at a field site in Hinkley, California (USA) where groundwater contaminated with Cr(VI) has been under assessment for remediation. Groundwater containing up to 5 ppm Cr(VI) has migrated down-gradient from the contamination source through the fluvial to alluvial sediments to form a well-defined plume. Uncontaminated groundwater in the aquifer immediately adjacent to the plume has naturally-occurring Cr(VI) of 4 ppb or less (CH2M-Hill). In early 2006, colleagues from CH2M-Hill collected 17 samples of groundwater from within and adjacent to the plume. On a plot of δ53Cr vs. log Cr(VI), the data array is strikingly linear and differs markedly from the trend predicted for reduction of Cr(VI) in the contaminated water. There appear to be two groups of data: four samples with δ53Cr >+2‰ and Cr(VI) <4 ppb, and 13 samples with δ53Cr <+2‰ and Cr(VI) >15 ppb. 
Simple mixing lines between the groundwater samples having <4 ppb Cr(VI), taken to be representative of regional groundwater, and the contaminated water do not pass through the remainder of the data, discounting a simple advective mixing scenario. We hypothesize a more likely scenario that involves both Cr(VI) reduction and advective mixing. As the plume initially expands downgradient, Cr(VI) in water at the leading edge encounters reductant in the aquifer resulting in limited Cr(VI) reduction. As a result of reduction, δ53Cr of Cr(VI) remaining in solution at the leading edge increases along the "reduction" trend from 0 to ~+2‰. Inevitable mixing of this water at the leading edge with regional groundwater results in a suitable mixing end-member to combine with Cr(VI) within the plume in order to explain the bulk of the remaining data. Neither Cr(VI) reduction nor advective mixing of plume and regional groundwaters can explain the data on their own, implying an interplay of at least these two processes during plume evolution. Ellis, A.S., Johnson, T.M. and Bullen, T.D. 2002, Science, 295, 2060-2062.
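
    The two competing processes can be written down in a few lines: a Rayleigh model for δ53Cr of the remnant Cr(VI) pool (ε = 3.4‰, from Ellis et al. 2002) and concentration-weighted two-endmember mixing. The endmember values below are illustrative, not the Hinkley data:

```python
import numpy as np

EPS = 3.4  # permil; fractionation factor from Ellis et al. (2002)

def delta_remaining(f, delta0=0.0, eps=EPS):
    """Rayleigh model: d53Cr of the remnant Cr(VI) pool when a
    fraction f (0 < f <= 1) of the initial Cr(VI) remains."""
    return delta0 - eps * np.log(f)

def mix(c1, d1, c2, d2, x):
    """Concentration-weighted two-endmember mixing; x is the mass
    fraction of endmember 1. Returns (concentration, d53Cr)."""
    c = x * c1 + (1 - x) * c2
    d = (x * c1 * d1 + (1 - x) * c2 * d2) / c
    return c, d

half = delta_remaining(0.5)  # enrichment after 50% reduction
```

    Because mixing curves are concentration-weighted while Rayleigh enrichment is logarithmic in the remaining fraction, the two processes trace distinct trajectories on a δ53Cr versus log Cr(VI) plot, which is what allows the abstract to rule out simple advective mixing alone.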

  6. Bayesian analysis of volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Ho, Chih-Hsiang

    1990-10-01

    The simple Poisson model generally gives a good fit to many volcanoes for volcanic eruption forecasting. Nonetheless, empirical evidence suggests that volcanic activity in successive equal time-periods tends to be more variable than a simple Poisson with constant eruptive rate. An alternative model is therefore examined in which the eruptive rate (λ) for a given volcano or cluster(s) of volcanoes is described by a gamma distribution (prior) rather than treated as a constant value as in the assumptions of a simple Poisson model. Bayesian analysis is performed to link the two distributions together to give the aggregate behavior of the volcanic activity. When the Poisson process is expanded to accommodate a gamma mixing distribution on λ, a consequence of this mixed (or compound) Poisson model is that the frequency distribution of eruptions in any given time-period of equal length follows the negative binomial distribution (NBD). Applications of the proposed model and comparisons between the generalized model and simple Poisson model are discussed based on the historical eruptive count data of volcanoes Mauna Loa (Hawaii) and Etna (Italy). Several relevant facts lead to the conclusion that the generalized model is preferable for practical use both in space and time.
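
    The gamma-mixed Poisson result is easy to check by simulation: with rate λ ~ Gamma(shape k, scale θ), counts are marginally negative binomial with mean kθ and variance kθ(1 + θ). The shape and scale values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
k, theta = 2.0, 3.0     # gamma shape and scale (illustrative values)
n = 200_000

# Compound model: draw an eruptive rate per period, then a Poisson
# count at that rate. Marginally this is negative binomial.
lam = rng.gamma(k, theta, n)
counts = rng.poisson(lam)

mean_th = k * theta             # theoretical mean: 6
var_th = k * theta * (1 + theta)  # theoretical variance: 24
```

    The simulated variance greatly exceeds the mean, reproducing the overdispersion relative to a constant-rate Poisson that motivates the NBD in the abstract.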

  7. Who mixes with whom among men who have sex with men? Implications for modelling the HIV epidemic in southern India

    PubMed Central

    Mitchell, K.M.; Foss, A.M.; Prudden, H.J.; Mukandavire, Z.; Pickles, M.; Williams, J.R.; Johnson, H.C.; Ramesh, B.M.; Washington, R.; Isac, S.; Rajaram, S.; Phillips, A.E.; Bradley, J.; Alary, M.; Moses, S.; Lowndes, C.M.; Watts, C.H.; Boily, M.-C.; Vickerman, P.

    2014-01-01

    In India, the identity of men who have sex with men (MSM) is closely related to the role taken in anal sex (insertive, receptive or both), but little is known about sexual mixing between identity groups. Both role segregation (taking only the insertive or receptive role) and the extent of assortative (within-group) mixing are known to affect HIV epidemic size in other settings and populations. This study explores how different possible mixing scenarios, consistent with behavioural data collected in Bangalore, south India, affect both the HIV epidemic, and the impact of a targeted intervention. Deterministic models describing HIV transmission between three MSM identity groups (mostly insertive Panthis/Bisexuals, mostly receptive Kothis/Hijras and versatile Double Deckers), were parameterised with behavioural data from Bangalore. We extended previous models of MSM role segregation to allow each of the identity groups to have both insertive and receptive acts, in differing ratios, in line with field data. The models were used to explore four different mixing scenarios ranging from assortative (maximising within-group mixing) to disassortative (minimising within-group mixing). A simple model was used to obtain insights into the relationship between the degree of within-group mixing, R0 and equilibrium HIV prevalence under different mixing scenarios. A more complex, extended version of the model was used to compare the predicted HIV prevalence trends and impact of an HIV intervention when fitted to data from Bangalore. With the simple model, mixing scenarios with increased amounts of assortative (within-group) mixing tended to give rise to a higher R0 and increased the likelihood that an epidemic would occur. When the complex model was fit to HIV prevalence data, large differences in the level of assortative mixing were seen between the fits identified using different mixing scenarios, but little difference was projected in future HIV prevalence trends. 
An oral pre-exposure prophylaxis (PrEP) intervention was modelled, targeted at the different identity groups. For intervention strategies targeting the receptive or receptive and versatile MSM together, the overall impact was very similar for different mixing patterns. However, for PrEP scenarios targeting insertive or versatile MSM alone, the overall impact varied considerably for different mixing scenarios; more impact was achieved with greater levels of disassortative mixing. PMID:24727187
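
    The qualitative finding that more assortative (within-group) mixing tends to raise R0 can be reproduced with a toy next-generation matrix; the group contact rates and the mixing parameterization below are assumptions, not the Bangalore parameters:

```python
import numpy as np

def r0(contact_rates, eps):
    """Spectral radius of a toy next-generation matrix.

    eps = 1: fully assortative (within-group transmission only);
    eps = 0: proportionate (random) mixing between groups.
    """
    c = np.asarray(contact_rates, dtype=float)
    prop = np.outer(c, c) / c.sum()  # proportionate-mixing NGM
    assort = np.diag(c)              # fully within-group NGM
    ngm = eps * assort + (1 - eps) * prop
    return max(abs(np.linalg.eigvals(ngm)))

c = [0.8, 1.2, 2.0]     # three identity groups, hypothetical rates
r_prop = r0(c, 0.0)     # proportionate mixing
r_assort = r0(c, 1.0)   # fully assortative mixing
```

    With heterogeneous contact rates, assortative mixing concentrates transmission in the most active group, so the spectral radius (and hence R0) is larger than under proportionate mixing, consistent with the simple-model result reported above.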

  8. Linear models for sound from supersonic reacting mixing layers

    NASA Astrophysics Data System (ADS)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, on which we focus. On keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, as in nonlinear calculations; here this is achieved by solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  9. Mixed species radioiodine air sampling readout and dose assessment system

    DOEpatents

    Distenfeld, Carl H.; Klemish, Jr., Joseph R.

    1978-01-01

    This invention provides a simple, reliable, inexpensive and portable means and method for determining the thyroid dose rate of mixed airborne species of solid and gaseous radioiodine without requiring highly skilled personnel, such as health physicists or electronics technicians. To this end, this invention provides a means and method for sampling a gas from a source of a mixed species of solid and gaseous radioiodine for collection of the mixed species and readout and assessment of the emissions therefrom by cylindrically, concentrically and annularly molding the respective species around a cylindrical passage for receiving a conventional probe-type Geiger-Mueller radiation detector.

  10. A simple derivation for amplitude and time period of charged particles in an electrostatic bathtub potential

    NASA Astrophysics Data System (ADS)

    Prathap Reddy, K.

    2016-11-01

    An ‘electrostatic bathtub potential’ is defined and analytical expressions for the time period and amplitude of charged particles in this potential are obtained and compared with simulations. These kinds of potentials are encountered in linear electrostatic ion traps, where the potential along the axis appears like a bathtub. Ion traps are used in basic physics research and mass spectrometry to store ions; these stored ions make oscillatory motion within the confined volume of the trap. Usually these traps are designed and studied using ion optical software, but in this work the bathtub potential is reproduced by making two simple modifications to the harmonic oscillator potential. The addition of a linear ‘k 1|x|’ potential makes the simple harmonic potential curve steeper with a sharper turn at the origin, while the introduction of a finite-length zero potential region at the centre reproduces the flat region of the bathtub curve. This whole exercise of modelling a practical experimental situation in terms of a well-known simple physics problem may generate interest among readers.
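
    The abstract's comparison of analytical expressions with simulations can be mimicked numerically: a leapfrog integrator in the bathtub potential (flat centre of half-width L, harmonic walls steepened by a linear k1|x| term) recovers the simple-harmonic period when k1 = L = 0. All parameter values below are illustrative:

```python
import numpy as np

def force(x, k=1.0, k1=0.0, L=0.0):
    """Force in the 'bathtub' potential: zero for |x| <= L, then
    harmonic (k) plus linear (k1) walls. k1 = L = 0 recovers SHM."""
    if abs(x) <= L:
        return 0.0
    s = np.sign(x)
    u = abs(x) - L
    return -s * (k * u + k1)

def period(x0, m=1.0, dt=1e-4, **kw):
    """Oscillation period via leapfrog integration: twice the time
    between two successive zero crossings of x."""
    x, v, t = x0, 0.0, 0.0
    crossings = []
    for _ in range(2_000_000):
        v += 0.5 * dt * force(x, **kw) / m   # half kick
        x_new = x + dt * v                   # drift
        if x > 0 >= x_new or x < 0 <= x_new:
            crossings.append(t)
            if len(crossings) == 2:
                return 2 * (crossings[1] - crossings[0])
        x = x_new
        v += 0.5 * dt * force(x, **kw) / m   # half kick
        t += dt
    raise RuntimeError("no full period found")

T_shm = period(1.0)                  # harmonic limit: ~2*pi
T_bath = period(1.0, k1=0.5, L=0.2)  # steeper walls + flat bottom
```

    Starting from rest at the release amplitude, the integrator gives the period directly, which is what would be compared against the paper's analytical expressions.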

  11. Linearization of the Bradford protein assay.

    PubMed

    Ernst, Orna; Zor, Tsaffrir

    2010-04-12

    Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
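The ratio-based linearization can be sketched in code (illustrative only: the standards and absorbance readings below are invented for the example, not data from the paper; a real calibration would use measured BSA standards):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration standards: protein (ug), A590 and A450 readings.
protein_ug = [0.0, 1.0, 2.0, 4.0, 8.0]
a590 = [0.40, 0.57, 0.72, 0.99, 1.40]
a450 = [0.80, 0.76, 0.72, 0.66, 0.56]

# The A590/A450 ratio is the quantity reported to be linear in concentration.
ratios = [x / y for x, y in zip(a590, a450)]
slope, intercept = fit_line(protein_ug, ratios)

def quantify(a590_sample, a450_sample):
    """Invert the linear calibration to get protein amount (ug)."""
    return (a590_sample / a450_sample - intercept) / slope
```

Fitting the ratio rather than A590 alone is the whole trick: the single-wavelength curve saturates, while the ratio stays linear over the working range.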

  12. Quantum monodromy and quantum phase transitions in floppy molecules

    NASA Astrophysics Data System (ADS)

    Larese, Danielle

    2012-10-01

    A simple algebraic Hamiltonian has been used to explore the vibrational and rotational spectra of the skeletal bending modes of HCNO, BrCNO, NCNCS, and other "floppy" (quasi-linear or quasi-bent) molecules. These molecules have large-amplitude, low-energy bending modes and champagne-bottle potential surfaces, making them good candidates for observing quantum phase transitions (QPT). We describe the geometric phase transitions from bent to linear in these and other non-rigid molecules, quantitatively analyzing the spectroscopic signatures of ground state QPT, excited state QPT, and quantum monodromy. The algebraic framework is ideal for this work because of its small calculational effort yet robust results. Although these methods have historically found success with tri- and four-atomic molecules, we now address five-atomic and simple branched molecules such as CH3NCO and GeH3NCO. Extraction of potential functions is completed for several molecules, resulting in predictions of barriers to linearity and equilibrium bond angles.

  13. Ball-morph: definition, implementation, and comparative evaluation.

    PubMed

    Whited, Brian; Rossignac, Jaroslaw Jarek

    2011-06-01

    We define b-compatibility for planar curves and propose three ball morphing techniques between pairs of b-compatible curves. Ball-morphs use the automatic ball-map correspondence, proposed by Chazal et al., from which we derive different vertex trajectories (linear, circular, and parabolic). All three morphs are symmetric, meeting both curves at the same angle, which is a right angle for the circular and parabolic variants. We provide simple constructions for these ball-morphs and compare them to each other and to other simple morphs (linear-interpolation, closest-projection, curvature-interpolation, Laplace-blending, and heat-propagation) using six cost measures (travel-distance, distortion, stretch, local acceleration, average squared mean curvature, and maximum squared mean curvature). The results depend heavily on the input curves. Nevertheless, we found that the linear ball-morph consistently has the shortest travel-distance and the circular ball-morph the least distortion.
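The simplest of the comparison morphs, linear interpolation of corresponding vertices, can be sketched as follows (a baseline illustration only, not the ball-map correspondence itself; the vertex correspondence between the two curves is assumed to be given):

```python
def linear_morph(curve_a, curve_b, t):
    """Linear-interpolation morph: each vertex of curve_a travels on a
    straight segment toward its corresponding vertex on curve_b.
    curve_a, curve_b: lists of (x, y) vertices, already in correspondence.
    t in [0, 1] is the morph parameter (0 = curve_a, 1 = curve_b)."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(curve_a, curve_b)]
```

The ball-morphs differ from this baseline precisely in how the correspondence and the trajectories (linear, circular, parabolic) are chosen, which is what the six cost measures evaluate.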

  14. A cooperation and competition based simple cell receptive field model and study of feed-forward linear and nonlinear contributions to orientation selectivity.

    PubMed

    Bhaumik, Basabi; Mathur, Mona

    2003-01-01

    We present a model for the development of orientation selectivity in layer IV simple cells. Receptive field (RF) development in the model is determined by axonal growth and retraction in the geniculocortical pathway, guided by diffusive cooperation and resource-limited competition. The simulated cortical RFs resemble experimental RFs. The receptive field model is incorporated into a three-layer visual pathway model consisting of retina, LGN, and cortex. We have studied the effect of activity-dependent synaptic scaling on the orientation tuning of cortical cells. The mean HWHH (half-width at half the height of the maximum response) in simulated cortical cells is 58 degrees when only the linear excitatory contribution from the LGN is considered. We observe a mean improvement of 22.8 degrees in tuning due to non-linear spiking mechanisms that include the effects of threshold voltage and the synaptic scaling factor.

  15. Optimal Facility Location Tool for Logistics Battle Command (LBC)

    DTIC Science & Technology

    2015-08-01

    Appendix B. VBA Code … Appendix C. … "…should city planners have located emergency service facilities so that all households (the demand) had equal access to coverage?" … programming language called Visual Basic for Applications (VBA). CPLEX is a commercial solver for linear, integer, and mixed integer linear programming problems
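A facility-location model of the kind this tool solves can be sketched in miniature (a hypothetical brute-force p-median illustration, not the report's actual VBA/CPLEX formulation, which uses integer programming to scale to real instances):

```python
from itertools import combinations

def best_facilities(dist, p):
    """Brute-force p-median: choose p facility sites minimizing the total
    distance from each demand point to its nearest open facility.
    dist[i][j] = distance from demand point i to candidate site j.
    Returns (minimum total distance, tuple of chosen site indices)."""
    n_sites = len(dist[0])
    best = None
    for sites in combinations(range(n_sites), p):
        # Each demand point is served by its closest open site.
        cost = sum(min(row[j] for j in sites) for row in dist)
        if best is None or cost < best[0]:
            best = (cost, sites)
    return best
```

Enumerating all site subsets is exponential, which is why a solver such as CPLEX is used for the integer-programming version in practice.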

  16. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies

    ERIC Educational Resources Information Center

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-01-01

    Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate for analyzing such data, which usually consist of nonnegative, skew-distributed variables. Therefore, we recommend a statistical methodology specific to duration data. Method: We propose a…

  17. An Aptitude-Strategy Interaction in Linear Syllogistic Reasoning. Technical Report No. 15.

    ERIC Educational Resources Information Center

    Sternberg, Robert J.; Weil, Evelyn M.

    An aptitude-strategy interaction in linear syllogistic reasoning was tested on 144 undergraduate and graduate students of both sexes. It was hypothesized that the efficiency of each of four alternative strategies--control, visual, algorithmic, and mixed--would depend upon the subjects' pattern of verbal and spatial abilities. Two tests of verbal…

  18. An analysis of a large dataset on immigrant integration in Spain. The Statistical Mechanics perspective on Social Action

    NASA Astrophysics Data System (ADS)

    Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia

    2014-02-01

    How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and shown to explain both growth behaviors. A linear theory that ignores the possibility of interaction effects would instead underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.

  19. Linear thermal circulator based on Coriolis forces.

    PubMed

    Li, Huanan; Kottos, Tsampikos

    2015-02-01

    We show that the presence of a Coriolis force in a rotating linear lattice imposes nonreciprocal propagation of the phononic heat carriers. Using this effect, we propose the concept of a Coriolis linear thermal circulator, which can control the circulation of a heat current. A simple model of three coupled harmonic masses on a rotating platform permits us to demonstrate giant circulating rectification effects for moderate angular velocities of the platform.

  20. [Analysis on the trend of long-term change of blood pressure in hypertensive patients treated with benazepril].

    PubMed

    Lu, Jun; Li, Li-Ming; He, Ping-Ping; Cao, Wei-Hua; Zhan, Si-Yan; Hu, Yong-Hua

    2004-06-01

    To introduce the application of the mixed linear model to the analysis of secular trends in blood pressure under antihypertensive treatment, a community-based postmarketing surveillance of benazepril was conducted in 1831 essential hypertensive patients (aged 35 to 88 years) in Shanghai. Blood pressure data were collected every 3 months and analyzed with a mixed linear model to describe the secular trend of blood pressure and its age- and gender-specific changes. The trends of systolic blood pressure (SBP) and diastolic blood pressure (DBP) were found to fit curvilinear models. A piecewise model was fit for pulse pressure (PP), i.e., a curvilinear model in the first 9 months and a linear model after 9 months of medication. Both the decline in blood pressure and its velocity gradually slowed. There was significant variation in the curve parameters of intercept, slope, and acceleration. Blood pressure in patients with higher initial levels declined persistently over the 3-year treatment, whereas blood pressure in patients with relatively low initial levels stabilized after dropping to some degree. Elderly patients showed high SBP but low DBP, and hence higher PP. The velocity and size of blood pressure reduction increased with the initial blood pressure level. The mixed linear model is flexible and robust when applied to the analysis of longitudinal data, even with missing values, and makes maximum use of the available information.
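The key feature of a mixed linear model, a subject-level random intercept that separates between-patient from within-patient variability, can be illustrated with the classical one-way random-effects ANOVA estimator (a minimal, balanced-design sketch; the authors' actual longitudinal model with curvilinear trends is considerably richer):

```python
def variance_components(groups):
    """Variance-component estimates for the random-intercept model
    y_ij = mu + b_i + e_ij, assuming a balanced design (every subject
    has the same number of repeated measurements).
    groups: list of lists, one inner list of measurements per subject.
    Returns (within-subject variance, between-subject variance)."""
    k = len(groups)            # number of subjects
    n = len(groups[0])         # measurements per subject
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    ssw = sum((y - m) ** 2 for g, m in zip(groups, means) for y in g)
    ssb = n * sum((m - grand) ** 2 for m in means)
    msw = ssw / (k * (n - 1))          # within-subject mean square
    msb = ssb / (k - 1)                # between-subject mean square
    return msw, max((msb - msw) / n, 0.0)
```

A nonzero between-subject component is exactly the intra-class correlation that simple linear models ignore and mixed models account for.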
