Sample records for model including non-zero

  1. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero-inflated models. This research compares random effects, zero-inflated, and zero-inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for ranking sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once the random effects are included in the zero-inflated models, the probability of being in the zero state is drastically reduced, and the zero-inflated models degenerate to their non-zero-inflated counterparts. Also, by fixing the random effects over time, the fit of the models and the precision of the crash frequency estimates are significantly increased. It was found that the rankings of the fixed-over-time random effects models are very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data

    PubMed Central

    Xu, Lizhen; Paterson, Andrew D.; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on the structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimation. We also evaluate the abilities of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well-controlled type I errors, higher power, better goodness-of-fit measures, and are more accurate and efficient in parameter estimation. In addition, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero-inflated data and implement it in a gut microbiome study of > 400 independent subjects. PMID:26148172
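The excess-zeros feature that these methods target is easy to reproduce in simulation. The sketch below is illustrative code, not from the paper; the mixing weight `pi`, rate `lam`, and sample size are arbitrary choices. It draws from a zero-inflated Poisson and compares the observed zero fraction against what a plain Poisson would allow:

```python
import math
import random

random.seed(0)

def sample_zip(pi, lam, n):
    """Draw n counts from a zero-inflated Poisson: a structural zero with
    probability pi, otherwise an ordinary Poisson(lam) count."""
    counts = []
    for _ in range(n):
        if random.random() < pi:
            counts.append(0)  # structural zero
            continue
        # Knuth's Poisson sampler (adequate for small lam)
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= L:
                break
            k += 1
        counts.append(k)
    return counts

counts = sample_zip(pi=0.3, lam=2.0, n=20000)
observed_zero_frac = counts.count(0) / len(counts)
plain_poisson_zero = math.exp(-2.0)          # zero probability of Poisson(2.0)
zip_zero_prob = 0.3 + 0.7 * math.exp(-2.0)   # zero probability of the mixture
```

The observed zero fraction lands near `zip_zero_prob`, roughly three times what a plain Poisson(2.0) permits; that surplus of zeros is exactly what zero-inflated and hurdle models are built to absorb.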

  3. Assessment and Selection of Competing Models for Zero-Inflated Microbiome Data.

    PubMed

    Xu, Lizhen; Paterson, Andrew D; Turpin, Williams; Xu, Wei

    2015-01-01

    Typical data in a microbiome study consist of the operational taxonomic unit (OTU) counts that have the characteristic of excess zeros, which are often ignored by investigators. In this paper, we compare the performance of different competing methods to model data with zero-inflated features through extensive simulations and application to a microbiome study. These methods include standard parametric and non-parametric models, hurdle models, and zero-inflated models. We examine varying degrees of zero inflation, with or without dispersion in the count component, as well as different magnitudes and directions of the covariate effect on the structural zeros and the count components. We focus on the assessment of type I error, power to detect the overall covariate effect, measures of model fit, and bias and effectiveness of parameter estimation. We also evaluate the abilities of model selection strategies using the Akaike information criterion (AIC) or the Vuong test to identify the correct model. The simulation studies show that hurdle and zero-inflated models have well-controlled type I errors, higher power, better goodness-of-fit measures, and are more accurate and efficient in parameter estimation. In addition, the hurdle models have similar goodness of fit and parameter estimation for the count component as their corresponding zero-inflated models. However, the estimation and interpretation of the parameters for the zero components differ, and hurdle models are more stable when structural zeros are absent. We then discuss the model selection strategy for zero-inflated data and implement it in a gut microbiome study of > 400 independent subjects.

  4. Zero modes of the non-relativistic self-dual Chern-Simons vortices on the Toda backgrounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Yongsung

    The two-dimensional self-dual equations are the governing equations of the static zero-energy vortex solutions for the non-relativistic, non-Abelian Chern-Simons models. The zero modes of the non-relativistic vortices are examined by index calculation for the self-dual equations. The index for the self-dual equations is zero for non-Abelian groups, but a non-zero index is obtained by the Toda Ansatz, which reduces the self-dual equations to the Toda equations. The number of zero modes for the non-relativistic Toda vortices is 2∑_{α,β}^{r} K_{αβ}Q^{β}, which is twice the total number of isolated zeros of the vortex functions. For the affine Toda system, there are additional adjoint zero modes which give a zero index for the SU(N) group.

  5. Predictors and moderators of outcomes of HIV/STD sex risk reduction interventions in substance abuse treatment programs: a pooled analysis of two randomized controlled trials.

    PubMed

    Crits-Christoph, Paul; Gallop, Robert; Sadicario, Jaclyn S; Markell, Hannah M; Calsyn, Donald A; Tang, Wan; He, Hua; Tu, Xin; Woody, George

    2014-01-16

    The objective of the current study was to examine predictors and moderators of response to two HIV sexual risk interventions of different content and duration for individuals in substance abuse treatment programs. Participants were recruited from community drug treatment programs participating in the National Institute on Drug Abuse Clinical Trials Network (CTN). Data were pooled from two parallel randomized controlled CTN studies (one with men and one with women), each examining the impact of a multi-session motivational and skills training program, in comparison to a single-session HIV education intervention, on the degree of reduction in unprotected sex from baseline to 3- and 6-month follow-ups. The findings were analyzed using a zero-inflated negative binomial (ZINB) model. Severity of drug use (p < .01), gender (p < .001), and age (p < .001) were significant main effect predictors of the number of unprotected sexual occasions (USOs) at follow-up in the non-zero portion of the ZINB model (men, younger participants, and those with greater severity of drug/alcohol abuse had more USOs). Monogamous relationship status (p < .001) and race/ethnicity (p < .001) were significant predictors of having at least one USO vs. none (monogamous individuals and African Americans were more likely to have at least one USO). Significant moderators of intervention effectiveness included recent sex under the influence of drugs/alcohol (p < .01 in the non-zero portion of the model), duration of abuse of primary drug (p < .05 in the non-zero portion of the model), and Hispanic ethnicity (p < .01 in the zero portion, p < .05 in the non-zero portion of the model). These predictor and moderator findings point to ways in which patients may be selected for the different HIV sexual risk reduction interventions and suggest potential avenues for further development of the interventions to increase their effectiveness within certain subgroups.
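The two-portion structure of a ZINB model can be made concrete with a small sketch. The coefficients below are invented for illustration only (they are not the study's estimates); the sign of the monogamy effect is merely chosen to mirror the reported direction, with monogamous participants less likely to report zero USOs:

```python
import math

# Hypothetical coefficients, for illustration only (not from the study):
#   zero portion:  logit(pi) = g0 + g1*monogamous   (pi = P(structural zero))
#   count portion: log(mu)   = b0 + b1*severity     (NB mean outside the zero state)
def zinb_summary(monogamous, severity, g0=-0.5, g1=-1.2, b0=0.4, b1=0.3, alpha=0.8):
    pi = 1.0 / (1.0 + math.exp(-(g0 + g1 * monogamous)))
    mu = math.exp(b0 + b1 * severity)
    r = 1.0 / alpha                      # NB size parameter
    nb_zero = (r / (r + mu)) ** r        # NB probability of a zero count
    p_zero = pi + (1.0 - pi) * nb_zero   # overall P(no unprotected occasions)
    mean = (1.0 - pi) * mu               # overall expected count
    return p_zero, mean

p0_mono, mean_mono = zinb_summary(monogamous=1, severity=2.0)
p0_non, mean_non = zinb_summary(monogamous=0, severity=2.0)
```

The point of the split is visible in the return values: a covariate can act on the probability of any USO at all (zero portion), on how many occur (count portion), or on both, which is why the study reports the two portions separately.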

  6. Quantum Quench Dynamics in the Transverse Field Ising Model at Non-zero Temperatures

    NASA Astrophysics Data System (ADS)

    Abeling, Nils; Kehrein, Stefan

    The recently discovered Dynamical Phase Transition denotes non-analytic behavior in the real time evolution of quantum systems in the thermodynamic limit and has been shown to occur in different systems at zero temperature [Heyl et al., Phys. Rev. Lett. 110, 135704 (2013)]. In this talk we present the extension of the analysis to non-zero temperature by studying a generalized form of the Loschmidt echo, the work distribution function, of a quantum quench in the transverse field Ising model. Although the quantitative behavior at non-zero temperatures still displays features derived from the zero temperature non-analyticities, it is shown that in this model dynamical phase transitions do not exist if T > 0. This is a consequence of the system being initialized in a thermal state. Moreover, we elucidate how the Tasaki-Crooks-Jarzynski relation can be exploited as a symmetry relation for a global quench or to obtain the change of the equilibrium free energy density. This work was supported through CRC SFB 1073 (Project B03) of the Deutsche Forschungsgemeinschaft (DFG).

  7. On the assumption of vanishing temperature fluctuations at the wall for heat transfer modeling

    NASA Technical Reports Server (NTRS)

    Sommer, T. P.; So, R. M. C.; Zhang, H. S.

    1993-01-01

    Boundary conditions for the fluctuating wall temperature are required for near-wall heat transfer modeling. However, their correct specification for arbitrary thermal boundary conditions is not clear. The conventional approach is to assume zero fluctuating wall temperature or zero gradient for the temperature variance at the wall. These are idealized specifications, and the latter condition could lead to an ill-posed problem for fully-developed pipe and channel flows. In this paper, the validity and extent of the zero fluctuating wall temperature condition for heat transfer calculations are examined. The approach taken is to assume a Taylor expansion in the wall normal coordinate for the fluctuating temperature that is general enough to account for both zero and non-zero values at the wall. Turbulent conductivity is calculated from the temperature variance and its dissipation rate. Heat transfer calculations assuming both zero and non-zero fluctuating wall temperature reveal that the zero fluctuating wall temperature assumption is in general valid. The effects of non-zero fluctuating wall temperature are limited to a very small region near the wall.

  8. Dielectric waveguide with transverse index variation that support a zero group velocity mode at a non-zero longitudinal wavevector

    DOEpatents

    Ibanescu, Mihai; Joannopoulos, John D.; Fink, Yoel; Johnson, Steven G.; Fan, Shanhui

    2005-06-21

    Optical components including a laser based on a dielectric waveguide extending along a waveguide axis and having a refractive index cross-section perpendicular to the waveguide axis, the refractive index cross-section supporting an electromagnetic mode having a zero group velocity for a non-zero wavevector along the waveguide axis.

  9. Illustration of a Multilevel Model for Meta-Analysis

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Camilli, Gregory; Vargas, Sadako; Vernon, R. Fox

    2007-01-01

    In this article, the authors present a multilevel (or hierarchical linear) model that illustrates issues in the application of the model to data from meta-analytic studies. In doing so, several issues are discussed that typically arise in the course of a meta-analysis. These include the presence of non-zero between-study variability, how multiple…

  10. Zero-sulfur diesel fuel from non-petroleum resources : the key to reducing U.S. oil imports.

    DOT National Transportation Integrated Search

    2012-09-01

    Zero-sulfur diesel fuel of the highest quality, the fuel used in this project, can be made by Fischer-Tropsch (FT) synthesis from many non-petroleum resources, including natural gas, which is increasingly abundant in the United States. Zero-sulfur FT...

  11. Zero-Valent Metal Emulsion for Reductive Dehalogenation of DNAPLs

    NASA Technical Reports Server (NTRS)

    Reinhart, Debra R. (Inventor); Clausen, Christian (Inventor); Geiger, Cherie L. (Inventor); Quinn, Jacqueline (Inventor); Brooks, Kathleen (Inventor)

    2006-01-01

    A zero-valent metal emulsion is used to dehalogenate solvents, such as pooled dense non-aqueous phase liquids (DNAPLs), including trichloroethylene (TCE). The zero-valent metal emulsion contains zero-valent metal particles, a surfactant, oil and water. The preferred zero-valent metal particles are nanoscale and microscale zero-valent iron particles.

  12. Zero-Valent Metal Emulsion for Reductive Dehalogenation of DNAPLS

    NASA Technical Reports Server (NTRS)

    Reinhart, Debra R. (Inventor); Clausen, Christian (Inventor); Geiger, Cherie L. (Inventor); Quinn, Jacqueline (Inventor); Brooks, Kathleen (Inventor)

    2003-01-01

    A zero-valent metal emulsion is used to dehalogenate solvents, such as pooled dense non-aqueous phase liquids (DNAPLs), including trichloroethylene (TCE). The zero-valent metal emulsion contains zero-valent metal particles, a surfactant, oil and water. The preferred zero-valent metal particles are nanoscale and microscale zero-valent iron particles.

  13. LETTER TO THE EDITOR: Bicomplexes and conservation laws in non-Abelian Toda models

    NASA Astrophysics Data System (ADS)

    Gueuvoghlanian, E. P.

    2001-08-01

    A bicomplex structure is associated with the Leznov-Saveliev equation of integrable models. The linear problem associated with the zero-curvature condition is derived in terms of the bicomplex linear equation. The explicit example of a non-Abelian conformal affine Toda model is discussed in detail and its conservation laws are derived from the zero-curvature representation of its equation of motion.

  14. A unified model of quarks and leptons with a universal texture zero

    NASA Astrophysics Data System (ADS)

    de Medeiros Varzielas, Ivo; Ross, Graham G.; Talbert, Jim

    2018-03-01

    We show that a universal texture zero in the (1,1) position of all fermionic mass matrices, including heavy right-handed Majorana neutrinos driving a type-I see-saw mechanism, can lead to a viable spectrum of mass, mixing and CP violation for both quarks and leptons, including (but not limited to) three important postdictions: the Cabibbo angle, the charged lepton masses, and the leptonic `reactor' angle. We model this texture zero with a non-Abelian discrete family symmetry that can easily be embedded in a grand unified framework, and discuss the details of the phenomenology after electroweak and family symmetry breaking. We provide an explicit numerical fit to the available data and obtain excellent agreement with the 18 observables in the charged fermion and neutrino sectors with just 9 free parameters. We further show that the vacua of our new scalar familon fields are readily aligned along desired directions in family space, and also demonstrate discrete gauge anomaly freedom at the relevant scale of our effective theory.

  15. Numerical simulation of transport processes in injection mold-filling during production of a cylindrical object under isothermal and non-isothermal conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, A.; Ghoshdastidar, P.S.

    1999-07-01

    In this paper, numerical simulation of injection mold-filling during the production of a cylindrical object under isothermal and non-isothermal conditions is presented. The material of the object is low-density polyethylene (LDPE), following a power-law viscosity model in the non-zero shear rate zone; where the shear rate becomes zero, the zero-shear viscosity value has been used. Three cases have been considered, namely (1) isothermal filling at constant injection pressure, (2) isothermal filling at constant flow rate, and (3) non-isothermal filling at constant flow rate. For case (3), the viscosity of LDPE is also a function of temperature. The material of the mold is steel. For the non-isothermal filling, the concept of a melt-mold thermal contact resistance coefficient has been incorporated in the model. The length and diameter of the body in all three cases have been taken as 0.254 m and 0.00508 m, respectively. The finite-difference method has been used to solve the governing differential equations for the processes. The results show excellent agreement with the corresponding analytical solutions for the first two cases, demonstrating the correctness of the numerical method. The simulation results for non-isothermal filling show physically realistic trends and lend insight into various important aspects of mold-filling, including the frozen skin layer.

  16. Fluctuations and instabilities of a holographic metal

    NASA Astrophysics Data System (ADS)

    Jokela, Niko; Järvinen, Matti; Lippert, Matthew

    2013-02-01

    We analyze the quasinormal modes of the D2-D8' model of 2+1-dimensional, strongly-coupled, charged fermions in a background magnetic field and at non-zero density. The model is known to include a quantum Hall phase with integer filling fraction. As expected, we find a hydrodynamical diffusion mode at small momentum and the nonzero-temperature holographic zero sound, which becomes massive above a critical magnetic field. We confirm the previously-known thermodynamic instability. In addition, we discover an instability at low temperature, large mass, and in a charge density and magnetic field range near the quantum Hall phase to an inhomogeneous striped phase.

  17. Representation of the Numerosity 'zero' in the Parietal Cortex of the Monkey.

    PubMed

    Okuyama, Sumito; Kuki, Toshinobu; Mushiake, Hajime

    2015-05-22

    Zero is a fundamental concept in mathematics and modern science. Empty sets are considered a precursor of the concept of numerosity zero and a part of a numerical continuum. How is numerosity zero (the absence of visual items) represented in the primate cortex? To address this question, we trained monkeys to perform numerical operations including numerosity zero. Here we show a group of neurons in the posterior parietal cortex of the monkey that are activated in response to numerosity 'zero'. 'Zero' neurons are classified into exclusive and continuous types; the exclusive type discretely encodes numerical absence and the continuous type encodes numerical absence as a part of a numerical continuum. "Numerosity-zero" neurons enhance behavioral discrimination of not only zero numerosity but also non-zero numerosities. Representation of numerosity zero in the parietal cortex may be a precursor of the non-verbal concept of zero in primates.

  18. Zero-sum politics, the Herbert thesis, and the Ryan White CARE Act: lessons learned from the local side of AIDS.

    PubMed

    Slack, J

    2001-01-01

    This study examines the dynamics of grass-roots decision-making processes involved in the implementation of the Ryan White CARE Act. The CARE Act, which provides social services to persons with HIV/AIDS, requires the participation of all relevant groups, including representatives of the HIV/AIDS and gay communities. Decision-making behavior is explored by applying a political (zero-sum) model and a bureaucratic (Herbert Thesis) model. Using qualitative research techniques, the Kern County (California) Consortium is examined as a case study. Findings shed light on the decision-making behavior of social service organizations characterized by intense advocacy and structured on the basis of volunteerism and non-hierarchical relationships. Findings affirm the bureaucratic behavior predicted by the Herbert Thesis and also discern factors that seem to trigger more conflictual zero-sum behavior.

  19. Mathematical model for the contribution of individual organs to non-zero y-intercepts in single and multi-compartment linear models of whole-body energy expenditure.

    PubMed

    Kaiyala, Karl J

    2014-01-01

    Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit 'local linearity.' Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying 'latent' allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses.
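The key algebraic step, that first-order linearization of an allometric curve produces a non-zero intercept whenever the exponent differs from 1, can be checked numerically. The constants below are illustrative values, not estimates from the paper:

```python
# Linearize an allometric EE curve EE = a * m**b about a reference mass m0.
# First-order Taylor expansion:
#   EE ≈ a*m0**b + a*b*m0**(b-1) * (m - m0)
#      = [a*m0**b * (1 - b)]  +  [a*b*m0**(b-1)] * m
# so the linear model inherits the intercept a*m0**b*(1 - b), which is
# non-zero for any exponent b != 1.
a, b, m0 = 200.0, 0.75, 70.0   # illustrative constants only

slope = a * b * m0 ** (b - 1.0)
intercept = a * m0 ** b * (1.0 - b)

def allometric(m):
    return a * m ** b

def linearized(m):
    return intercept + slope * m
```

Within a restricted mass range around `m0` the linear and allometric curves are nearly indistinguishable ("local linearity"), yet the fitted line carries a positive intercept, which is the phenomenon the paper models at the organ level.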

  20. Algebraic method for parameter identification of circuit models for batteries under non-zero initial condition

    NASA Astrophysics Data System (ADS)

    Devarakonda, Lalitha; Hu, Tingshu

    2014-12-01

    This paper presents an algebraic method for parameter identification of Thevenin's equivalent circuit models for batteries under non-zero initial condition. In traditional methods, it was assumed that all capacitor voltages have zero initial conditions at the beginning of each charging/discharging test. This would require a long rest time between two tests, leading to very lengthy tests for a charging/discharging cycle. In this paper, we propose an algebraic method which can extract the circuit parameters together with initial conditions. This would theoretically reduce the rest time to 0 and substantially accelerate the testing cycles.
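As a minimal sketch of the idea (a single RC branch rather than the paper's full Thevenin equivalent circuit), three equally spaced voltage samples are enough to recover the steady-state value, the time constant, and the non-zero initial transient algebraically, with no rest period assumed:

```python
import math

def identify_first_order(v0, v1, v2, dt):
    """Recover (v_inf, tau, c) of v(t) = v_inf + c*exp(-t/tau) from three
    samples at t, t+dt, t+2dt. Purely algebraic; no zero-initial-condition
    assumption is needed. (One-RC sketch, not the paper's full model.)"""
    r = (v2 - v1) / (v1 - v0)                          # equals exp(-dt/tau)
    tau = -dt / math.log(r)
    v_inf = (v0 * v2 - v1 ** 2) / (v0 + v2 - 2.0 * v1)  # steady-state value
    c = v0 - v_inf                                     # transient amplitude at first sample
    return v_inf, tau, c

# synthetic response with a non-zero initial transient (illustrative values)
v_inf_true, tau_true, c_true = 3.7, 50.0, -0.4
samples = [v_inf_true + c_true * math.exp(-t / tau_true) for t in (0.0, 10.0, 20.0)]
v_inf_est, tau_est, c_est = identify_first_order(*samples, dt=10.0)
```

Because the initial condition `c` is estimated rather than assumed to be zero, the identification can start immediately after the previous test, which is the source of the time savings the paper claims.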

  1. Pole-zero form fractional model identification in frequency domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mansouri, R.; Djamah, T.; Djennoune, S.

    2009-03-05

    This paper deals with system identification in the frequency domain using non-integer-order models given in pole-zero form. The usual identification techniques cannot be used in this case because the non-integer orders of differentiation make the problem strongly nonlinear. A general identification method based on the Levenberg-Marquardt algorithm is developed, allowing estimation of the (2n+2m+1) parameters of the model. Its application to modeling the "skin effect" of a squirrel-cage induction machine is then presented.
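A sketch of the model class, reduced to a single fractional pole-zero pair (the paper's general form has n poles and m zeros). The cost function below is the quantity a Levenberg-Marquardt iteration would drive toward zero; all numerical values are illustrative:

```python
def fractional_pz_response(w, K, wz, m, wp, n):
    """G(jw) = K * (1 + (jw/wz)**m) / (1 + (jw/wp)**n), with non-integer
    exponents m and n (principal branch of the complex power)."""
    s = 1j * w
    return K * (1.0 + (s / wz) ** m) / (1.0 + (s / wp) ** n)

def cost(params, freqs, data):
    """Sum of squared complex residuals between model and measured response."""
    return sum(abs(fractional_pz_response(w, *params) - g) ** 2
               for w, g in zip(freqs, data))

# synthetic "measurements" generated from known parameters
true = (2.0, 10.0, 0.5, 100.0, 0.7)      # K, wz, m, wp, n (illustrative)
freqs = [1.0, 5.0, 20.0, 80.0, 300.0]
data = [fractional_pz_response(w, *true) for w in freqs]
```

The nonlinearity the abstract mentions is visible here: the exponents `m` and `n` enter through complex powers, so the residuals are not linear in the parameters and an iterative scheme such as Levenberg-Marquardt is required.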

  2. Zero-state Markov switching count-data models: an empirical assessment.

    PubMed

    Malyshkina, Nataliya V; Mannering, Fred L

    2010-01-01

    In this study, a two-state Markov switching count-data model is proposed as an alternative to zero-inflated models to account for the preponderance of zeros sometimes observed in transportation count data, such as the number of accidents occurring on a roadway segment over some period of time. For this accident-frequency case, zero-inflated models assume the existence of two states: one of the states is a zero-accident count state, which has accident probabilities that are so low that they cannot be statistically distinguished from zero, and the other state is a normal-count state, in which counts can be non-negative integers generated by some counting process, for example a Poisson or negative binomial. While zero-inflated models have come under some criticism with regard to accident-frequency applications, one fact is undeniable: in many applications they provide a statistically superior fit to the data. The Markov switching approach we propose seeks to overcome some of the criticism associated with the zero-accident state of the zero-inflated model by allowing individual roadway segments to switch between zero and normal-count states over time. An important advantage of this Markov switching approach is that it allows for the direct statistical estimation of the specific roadway-segment state (i.e., zero-accident or normal-count state), whereas traditional zero-inflated models do not. To demonstrate the applicability of this approach, a two-state Markov switching negative binomial model (estimated with Bayesian inference) and standard zero-inflated negative binomial models are estimated using five-year accident frequencies on Indiana interstate highway segments. It is shown that the Markov switching model is a viable alternative and results in a superior statistical fit relative to the zero-inflated models.
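The switching mechanism is easy to simulate. The toy below uses a Poisson in place of the paper's negative binomial, and the transition probabilities are invented, not estimates from the Indiana data; the point is only that a single segment can move between the zero state and the count state over time, which a static zero-inflated model does not allow:

```python
import math
import random

random.seed(1)

# Toy two-state Markov switching counts for one roadway segment:
# state 0 = zero-accident state, state 1 = normal-count state.
p_stay = {0: 0.8, 1: 0.7}   # illustrative P(remain in current state)
lam = 3.0                    # mean count in the normal-count state

def poisson(lam):
    # Knuth's sampler, adequate for small lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_segment(T, state=0):
    states, counts = [], []
    for _ in range(T):
        states.append(state)
        counts.append(0 if state == 0 else poisson(lam))
        if random.random() > p_stay[state]:
            state = 1 - state   # the segment switches state over time
    return states, counts

states, counts = simulate_segment(5000)
```

Every observation made in state 0 is a zero, so the pooled counts show far more zeros than a single Poisson could produce, while the underlying state sequence remains estimable, which is the advantage claimed for the Markov switching formulation.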

  3. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practices. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations.
Moreover, the calculation does not even require additional model evaluations for the Sobol' method. A formal statistical test validates these parameter screening results. Based on the dummy parameter screening, 11 model parameters are identified as influential. Therefore, it can be denoted that the "dummy parameter approach" can facilitate the parameter screening process and provide guidance for GSA users to define a screening-threshold, with only limited additional resources. Key words: Parameter screening, Global sensitivity analysis, Dummy parameter, Variance-based method, Moment-independent method
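The screening rule can be illustrated with a toy model and a deliberately inert input. The sensitivity estimator below is a crude squared-correlation stand-in for the Sobol' and PAWN estimators used in the study; only the dummy-parameter logic is being illustrated: the dummy's index estimates the numerical-noise floor, and only inputs above that floor are declared influential.

```python
import random

random.seed(2)

def model(x1, x2, dummy):
    # the dummy input has no effect on the output by construction
    return 4.0 * x1 + 0.5 * x2

# Monte Carlo sample of inputs and outputs (toy model, uniform inputs)
N = 5000
X = [[random.random() for _ in range(3)] for _ in range(N)]
Y = [model(*x) for x in X]

def corr2(col):
    """Squared correlation of input column `col` with the output: a crude
    first-order sensitivity estimate."""
    xs = [x[col] for x in X]
    mx, my = sum(xs) / N, sum(Y) / N
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, Y))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in Y)
    return sxy * sxy / (sxx * syy)

S = [corr2(c) for c in range(3)]  # indices for x1, x2, dummy
threshold = S[2]                  # the dummy's index = numerical-noise floor
influential = [c for c in range(2) if S[c] > threshold]
```

The dummy's index is small but non-zero, exactly as the abstract describes, and both real inputs, including the weak `x2`, clear the data-driven threshold without any arbitrary cutoff having to be chosen.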

  4. Particle Filtering Methods for Incorporating Intelligence Updates

    DTIC Science & Technology

    2017-03-01

    methodology for incorporating intelligence updates into a stochastic model for target tracking. Due to the non-parametric assumptions of the PF...samples are taken with replacement from the remaining non-zero-weighted particles at each iteration. With this methodology, a zero-weighted particle is...incorporation of information updates. A common method for incorporating information updates is Kalman filtering. However, given the probable nonlinear and non
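The resampling step described in the excerpt can be sketched in a few lines. A multinomial scheme is assumed here for concreteness; the report's exact resampler is not specified in this snippet:

```python
import random

random.seed(3)

def resample(particles, weights):
    """Multinomial resampling with replacement. Zero-weighted particles can
    never be drawn, so they are eliminated from the ensemble, while the
    particle count stays fixed."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

particles = ['a', 'b', 'c', 'd']
weights = [0.5, 0.0, 0.3, 0.2]   # particle 'b' received zero weight
new_particles = resample(particles, weights)
```

After resampling, every surviving particle comes from the non-zero-weight set, which is the behavior the excerpt attributes to the PF update.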

  5. Representation of the Numerosity ‘zero’ in the Parietal Cortex of the Monkey

    PubMed Central

    Okuyama, Sumito; Kuki, Toshinobu; Mushiake, Hajime

    2015-01-01

    Zero is a fundamental concept in mathematics and modern science. Empty sets are considered a precursor of the concept of numerosity zero and a part of a numerical continuum. How is numerosity zero (the absence of visual items) represented in the primate cortex? To address this question, we trained monkeys to perform numerical operations including numerosity zero. Here we show a group of neurons in the posterior parietal cortex of the monkey that are activated in response to numerosity 'zero'. 'Zero' neurons are classified into exclusive and continuous types; the exclusive type discretely encodes numerical absence and the continuous type encodes numerical absence as a part of a numerical continuum. "Numerosity-zero" neurons enhance behavioral discrimination of not only zero numerosity but also non-zero numerosities. Representation of numerosity zero in the parietal cortex may be a precursor of the non-verbal concept of zero in primates. PMID:25989598

  6. Equilibrium Wall Model Implementation in a Nodal Finite Element Flow Solver JENRE for Large Eddy Simulations

    DTIC Science & Technology

    2017-11-13

    condition is applied to the inviscid and viscous fluxes on the wall to satisfy the surface physical condition, but a non-zero surface tangential...velocity profiles and turbulence quantities predicted by the current wall-model implementation agree well with available experimental data and...implementations. The volume and surface integrals based on the non-zero surface velocity in a cell adjacent to the wall show a good agreement with those

  7. Cooling in reduced period optical lattices: Non-zero Raman detuning

    NASA Astrophysics Data System (ADS)

    Malinovsky, V. S.; Berman, P. R.

    2006-08-01

    In a previous paper [Phys. Rev. A 72 (2005) 033415], it was shown that sub-Doppler cooling occurs in a standing-wave Raman scheme (SWRS) that can lead to reduced period optical lattices. These calculations are extended to allow for non-zero detuning of the Raman transitions. New physical phenomena are encountered, including cooling to non-zero velocities, combinations of Sisyphus and "corkscrew" polarization cooling, and somewhat unusual origins of the friction force. The calculations are carried out in a semi-classical approximation and a dressed state picture is introduced to aid in the interpretation of the results.

  8. Deformation of an Elastic Substrate Due to a Resting Sessile Droplet

    NASA Astrophysics Data System (ADS)

    Bardall, Aaron; Daniels, Karen; Shearer, Michael

    2017-11-01

    On a sufficiently soft substrate, a resting fluid droplet will cause significant deformation of the substrate. This deformation is driven by a combination of capillary forces at the contact line and the fluid pressure at the solid surface. These forces are balanced at the surface by the solid traction stress induced by the substrate deformation. Young's Law, which predicts the equilibrium contact angle of the droplet, also indicates an a priori radial force balance for rigid substrates, but not necessarily for soft substrates which deform under loading. It remains an open question whether the contact line transmits a non-zero force tangent to the substrate surface in addition to the conventional normal force. This talk will present a model for the static deformation of the substrate that includes a non-zero tangential contact line force as well as general interfacial energy conditions governing the angle of a two-dimensional droplet. We discuss extensions of this model to non-symmetric droplets and their effect on the static configuration of the droplet/substrate system. NSF #DMS-1517291.

  9. Mathematical Model for the Contribution of Individual Organs to Non-Zero Y-Intercepts in Single and Multi-Compartment Linear Models of Whole-Body Energy Expenditure

    PubMed Central

    Kaiyala, Karl J.

    2014-01-01

    Mathematical models for the dependence of energy expenditure (EE) on body mass and composition are essential tools in metabolic phenotyping. EE scales over broad ranges of body mass as a non-linear allometric function. When considered within restricted ranges of body mass, however, allometric EE curves exhibit ‘local linearity.’ Indeed, modern EE analysis makes extensive use of linear models. Such models typically involve one or two body mass compartments (e.g., fat free mass and fat mass). Importantly, linear EE models typically involve a non-zero (usually positive) y-intercept term of uncertain origin, a recurring theme in discussions of EE analysis and a source of confounding in traditional ratio-based EE normalization. Emerging linear model approaches quantify whole-body resting EE (REE) in terms of individual organ masses (e.g., liver, kidneys, heart, brain). Proponents of individual organ REE modeling hypothesize that multi-organ linear models may eliminate non-zero y-intercepts. This could have advantages in adjusting REE for body mass and composition. Studies reveal that individual organ REE is an allometric function of total body mass. I exploit first-order Taylor linearization of individual organ REEs to model the manner in which individual organs contribute to whole-body REE and to the non-zero y-intercept in linear REE models. The model predicts that REE analysis at the individual organ-tissue level will not eliminate intercept terms. I demonstrate that the parameters of a linear EE equation can be transformed into the parameters of the underlying ‘latent’ allometric equation. This permits estimates of the allometric scaling of EE in a diverse variety of physiological states that are not represented in the allometric EE literature but are well represented by published linear EE analyses. PMID:25068692
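The abstract's argument can be illustrated numerically. Below is a sketch with invented organ coefficients (the values of a_i and b_i are not from the paper): summing allometric organ terms REE_i = a_i * M^b_i and Taylor-expanding about a reference mass M0 yields a linear model whose y-intercept is non-zero whenever some exponent b_i differs from 1.

```python
# Hypothetical illustration: organ-level allometric REE summed to whole-body REE.
# REE_i(M) = a_i * M**b_i; a first-order Taylor expansion about a reference mass
# M0 gives a linear model REE ~ intercept + slope*M whose intercept is non-zero
# whenever any exponent b_i != 1. Coefficients below are invented.

organs = {"liver": (2.0, 0.87), "brain": (3.5, 0.76), "muscle": (0.5, 1.00)}
M0 = 70.0  # assumed reference body mass, kg

slope = sum(a * b * M0 ** (b - 1) for a, b in organs.values())
intercept = sum(a * M0 ** b * (1 - b) for a, b in organs.values())

def ree_exact(M):
    return sum(a * M ** b for a, b in organs.values())

def ree_linear(M):
    return intercept + slope * M

print(intercept)                       # non-zero, since b != 1 for some organs
print(ree_exact(M0), ree_linear(M0))   # the two agree exactly at M0
```

Over a restricted mass range the linear approximation stays close to the exact allometric sum ("local linearity"), while the intercept persists, consistent with the model's prediction.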

  10. Cell Model Of A Disordered Solid

    NASA Technical Reports Server (NTRS)

    Peng, Steven T. J.; Landel, Robert F.; Moacanin, Jovan; Simha, Robert; Papazoglou, Elizabeth

    1990-01-01

    Elastic properties predicted from first principles. Paper discusses generalization of cell theory of disordered (non-crystalline) solid to include anisotropic stresses. Study part of continuing effort to understand macroscopic stress-and-strain properties of solid materials in terms of microscopic physical phenomena. Emphasis on derivation, from first principles, of bulk, shear, and Young's moduli of glassy material at zero absolute temperature.

  11. Computational knee ligament modeling using experimentally determined zero-load lengths.

    PubMed

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to alterations of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models.
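For context, a common one-dimensional ligament force law from the multibody knee literature, a sketch under assumed parameters and not necessarily the exact element used in this study, shows why results are so sensitive to the zero-load length l0: the strain is measured relative to l0, and the force rises steeply through a non-linear "toe" region.

```python
# A common piecewise toe-region ligament law (sketch; parameters invented,
# damping omitted, not necessarily the element used by the authors).
# The zero-load length l0 sets the strain eps = (l - l0)/l0; the force is
# quadratic in the toe region and linear beyond it.

def ligament_force(l, l0, k, eps_l=0.03):
    """Tensile force of a one-dimensional ligament bundle."""
    eps = (l - l0) / l0
    if eps <= 0:
        return 0.0                            # slack: no compressive force
    if eps <= 2 * eps_l:
        return 0.25 * k * eps ** 2 / eps_l    # non-linear toe region
    return k * (eps - eps_l)                  # linear region

# A small change in l0 produces a large force change near the toe region,
# illustrating the sensitivity to zero-load length reported in the study.
f_ref = ligament_force(31.0, l0=30.0, k=2000.0)
f_short = ligament_force(31.0, l0=29.5, k=2000.0)
print(f_ref, f_short)
```

Here shortening l0 by under 2% more than doubles the bundle force at the same length, which is the qualitative sensitivity the abstract reports.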

  12. A modeling comparison of deep greenhouse gas emissions reduction scenarios by 2030 in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeh, Sonia; Yang, Christopher; Gibbs, Michael

    California aims to reduce greenhouse gas (GHG) emissions to 40% below 1990 levels by 2030. We compare six energy models that have played various roles in informing the state policymakers in setting climate policy goals and targets. These models adopt a range of modeling structures, including stock-turnover back-casting models, a least-cost optimization model, macroeconomic/macro-econometric models, and an electricity dispatch model. Results from these models provide useful insights in terms of the transformations in the energy system required, including efficiency improvements in cars, trucks, and buildings, electrification of end-uses, low- or zero-carbon electricity and fuels, aggressive adoptions of zero-emission vehicles (ZEVs), demand reduction, and large reductions of non-energy GHG emissions. Some of these studies also suggest that the direct economic costs can be fairly modest or even generate net savings, while the indirect macroeconomic benefits are large, as shifts in employment and capital investments could have higher economic returns than conventional energy expenditures. These models, however, often assume perfect markets, perfect competition, and zero transaction costs. They also do not provide specific policy guidance on how these transformative changes can be achieved. Greater emphasis on modeling uncertainty, consumer behaviors, heterogeneity of impacts, and spatial modeling would further enhance policymakers' ability to design more effective and targeted policies. Here, this paper presents an example of how policymakers, energy system modelers and stakeholders interact and work together to develop and evaluate long-term state climate policy targets. Lastly, even though this paper focuses on California, the process of dialogue and interactions, modeling results, and lessons learned can be generally adopted across different regions and scales.

  13. Zero-Valent Metallic Treatment System and Its Application for Removal and Remediation of Polychlorinated Biphenyls (Pcbs)

    NASA Technical Reports Server (NTRS)

    Clausen, Christian A. (Inventor); Geiger, Cherie L. (Inventor); Quinn, Jacqueline W. (Inventor); Brooks, Kathleen B. (Inventor)

    2012-01-01

    PCBs are removed from contaminated media using a treatment system including zero-valent metal particles and an organic hydrogen donating solvent. The treatment system may include a weak acid in order to eliminate the need for a coating of catalytic noble metal on the zero-valent metal particles. If catalyzed zero-valent metal particles are used, the treatment system may include an organic hydrogen donating solvent that is a non-water solvent. The treatment system may be provided as a "paste-like" system that is preferably applied to natural media and ex-situ structures to eliminate PCBs.

  14. THE LITTLEST HIGGS MODEL AND ONE-LOOP ELECTROWEAK PRECISION CONSTRAINTS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    CHEN, M.C.; DAWSON,S.

    2004-06-16

    We present in this talk the one-loop electroweak precision constraints in the Littlest Higgs model, including the logarithmically enhanced contributions from both fermion and scalar loops. We find the one-loop contributions are comparable to the tree level corrections in some regions of parameter space. A low cutoff scale is allowed for a non-zero triplet VEV. Constraints on various other parameters in the model are also discussed. The role of triplet scalars in constructing a consistent renormalization scheme is emphasized.

  15. Integrable Time-Dependent Quantum Hamiltonians

    NASA Astrophysics Data System (ADS)

    Sinitsyn, Nikolai A.; Yuzbashyan, Emil A.; Chernyak, Vladimir Y.; Patra, Aniket; Sun, Chen

    2018-05-01

    We formulate a set of conditions under which the nonstationary Schrödinger equation with a time-dependent Hamiltonian is exactly solvable analytically. The main requirement is the existence of a non-Abelian gauge field with zero curvature in the space of system parameters. Known solvable multistate Landau-Zener models satisfy these conditions. Our method provides a strategy to incorporate time dependence into various quantum integrable models while maintaining their integrability. We also validate some prior conjectures, including the solution of the driven generalized Tavis-Cummings model.
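The zero-curvature requirement can be written explicitly. Assuming a matrix-valued gauge field A_j(x) over the space of system parameters x (this is the generic gauge-theoretic statement; the abstract does not give the field's precise construction), flatness means the non-Abelian field strength vanishes for every pair of parameters:

```latex
F_{jk} \;=\; \partial_j A_k \;-\; \partial_k A_j \;+\; [A_j, A_k] \;=\; 0
\qquad \text{for all } j,k .
```

When such a flat connection exists, parallel transport between parameter values is path-independent, which is what allows time dependence to be threaded through the model without destroying integrability.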

  16. Zero-crossing statistics for non-Markovian time series.

    PubMed

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.
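A discrete-time toy version of this counting problem (my example, not the paper's model): for a stationary Gaussian AR(1) sequence with lag-one correlation rho, the per-step sign-change probability is exactly arccos(rho)/pi by the Gaussian orthant probability, a discrete analogue of the Rice-formula mean rate, and a short simulation recovers it.

```python
# Zero crossings of a stationary Gaussian AR(1) sequence. The exact per-step
# sign-change probability is arccos(rho)/pi (orthant probability); comparing
# against a simulation. Parameters are illustrative.
import math
import random

random.seed(1)
rho, n = 0.7, 200_000
sigma = math.sqrt(1 - rho ** 2)          # keeps the process at unit variance
x_prev, crossings = random.gauss(0, 1), 0
for _ in range(n):
    x = rho * x_prev + sigma * random.gauss(0, 1)
    crossings += (x_prev < 0) != (x < 0)  # count sign changes
    x_prev = x

empirical = crossings / n
exact = math.acos(rho) / math.pi
print(empirical, exact)                   # the two rates nearly agree
```

Only the mean count has this simple closed form; the higher-order cumulants treated in the paper require the independent interval approximation.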

  17. Zero-crossing statistics for non-Markovian time series

    NASA Astrophysics Data System (ADS)

    Nyberg, Markus; Lizana, Ludvig; Ambjörnsson, Tobias

    2018-03-01

    In applications spanning from image analysis and speech recognition to energy dissipation in turbulence and time-to-failure of fatigued materials, researchers and engineers want to calculate how often a stochastic observable crosses a specific level, such as zero. At first glance this problem looks simple, but it is in fact theoretically very challenging, and therefore few exact results exist. One exception is the celebrated Rice formula that gives the mean number of zero crossings in a fixed time interval of a zero-mean Gaussian stationary process. In this study we use the so-called independent interval approximation to go beyond Rice's result and derive analytic expressions for all higher-order zero-crossing cumulants and moments. Our results agree well with simulations for the non-Markovian autoregressive model.

  18. Lectures on Non-Abelian Bosonization

    NASA Astrophysics Data System (ADS)

    Tsvelik, A. M.

    The following sections are included: * Introduction * Kac-Moody algebra * Conformal embedding. Sugawara Hamiltonian * SU(N)×SU(M) model * From the fermionic to WZNW model * The perturbed SU_k(2) WZNW model * Correlation functions and Quasi Long Range order * Generalization from SU(2) to SU(N) * A model with Sp(2N) symmetry * Solution for the special case g_cdw = g_sc * Attraction in the orbital channel. Competing orders. Emergent integrability. Z_N parafermions. * Parafermion zero modes * Conclusions and Acknowledgements * Appendix A. TBA equations for the Sp_1(2N) model * Appendix B. Bosonization of Z_4 parafermions * References

  19. ModABa Model: Annual Flow Duration Curves Assessment in Ephemeral Basins

    NASA Astrophysics Data System (ADS)

    Pumo, Dario; Viola, Francesco; Noto, Leonardo V.

    2013-04-01

    A representation of the streamflow regime for a river basin is required for a variety of hydrological analyses and engineering applications, from water resource allocation and utilization to environmental flow management. The flow duration curve (FDC) represents a comprehensive signature of temporal runoff variability, often used to synthesize catchment rainfall-runoff responses. Several models aimed at the theoretical reconstruction of the FDC have recently been developed under different approaches, and a substantial body of scientific knowledge specific to this topic has already been acquired. In this work, a new model for the probabilistic characterization of daily streamflows in perennial and ephemeral catchments is introduced. The ModABa model (MODel for Annual flow duration curves assessment in intermittent BAsins) can be thought of as a wide mosaic whose tesserae are frameworks, models, and conceptual schemes separately developed in recent studies. These tesserae are placed and interconnected so as to serve a single final aim: reproducing the FDC of daily streamflows in a river basin. Two separate periods within the year are first identified: a non-zero period, typically characterized by significant streamflows, and a dry period, which, in ephemeral basins, is typically characterized by the absence of streamflow. The proportion of time the river is dry, which provides an estimate of the probability of zero flow, is determined empirically. An analysis of the non-zero period is then performed, with the streamflow disaggregated into a slow subsuperficial component and a fast superficial component. A recent analytical model is adopted to derive the non-zero FDC of the subsuperficial component; the latter is considered to be generated by the soil water excess over the field capacity in the permeable portion of the basin. 
The non-zero FDC of the fast streamflow component is derived directly from the precipitation duration curve through a simple filter model. The fast component of streamflow is considered to be formed by two contributions: the entire amount of rainfall falling onto the impervious portion of the basin, and the excess of rainfall over a fixed threshold (defining heavy rain events) falling onto the permeable portion. The two FDCs so obtained are then combined, providing a single non-zero FDC for the total streamflow. Finally, once the probability that the river is dry and the non-zero FDC are known, the annual FDC of the daily total streamflow is derived by applying the theory of total probability. The model is calibrated on a small catchment with ephemeral streamflows using a long record of daily precipitation, temperature, and streamflow measurements, and it is subsequently validated on the same basin using two different time periods. The high model performance obtained in both validation periods demonstrates that the model, once calibrated, can accurately reproduce the empirical FDC starting from easily derivable parameters, arising from a basic ecohydrological knowledge of the basin, and from commonly available climatic data such as daily precipitation and temperature. In this sense, the model proves to be a valid tool for streamflow prediction in ungauged basins.
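The total-probability construction at the heart of the model can be sketched numerically with synthetic flows (all numbers invented): estimate the zero-flow probability p0 empirically, build the non-zero FDC from the wet days only, and rescale.

```python
# Sketch of the total-probability step for an ephemeral stream: the annual FDC
# is the non-zero-flow FDC scaled by the probability of flow,
#   P(Q > q) = (1 - p0) * P(Q > q | Q > 0),
# where p0 is the empirical zero-flow fraction. Streamflows are synthetic.
import random

random.seed(0)
flows = [0.0] * 120 + [random.expovariate(0.5) for _ in range(245)]  # one year

p0 = sum(q == 0 for q in flows) / len(flows)           # dry-period probability
nonzero = sorted((q for q in flows if q > 0), reverse=True)

def exceedance(q):
    """P(Q > q) via the theory of total probability."""
    cond = sum(v > q for v in nonzero) / len(nonzero)  # non-zero FDC
    return (1 - p0) * cond

print(p0, exceedance(0.0))   # exceedance at zero equals the wet-period fraction
```

In the paper the non-zero FDC itself is analytical (slow plus fast components) rather than empirical; the sketch only shows how the dry-period probability enters.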

  20. Marginalized zero-altered models for longitudinal count data.

    PubMed

    Tabb, Loni Philip; Tchetgen, Eric J Tchetgen; Wellenius, Greg A; Coull, Brent A

    2016-10-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias.
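A sketch of the zero-inflated Poisson (ZIP) mass function that these models extend (generic notation, invented parameters): with structural-zero probability pi, the zeros get extra mass and the marginal mean becomes (1 - pi) * lam, which is why marginal and mixture parameterizations of the covariate effect differ.

```python
# Zero-inflated Poisson mass function (generic sketch, not the authors' exact
# marginalized model). P(Y=0) = pi + (1-pi)*exp(-lam); the marginal mean is
# (1-pi)*lam, so a plain Poisson fit to ZIP data misestimates the rate.
import math

def zip_pmf(y, lam, pi):
    base = (1 - pi) * math.exp(-lam) * lam ** y / math.factorial(y)
    return pi + base if y == 0 else base

lam, pi = 3.0, 0.4
total = sum(zip_pmf(y, lam, pi) for y in range(100))
mean = sum(y * zip_pmf(y, lam, pi) for y in range(100))
print(total, mean)    # total ≈ 1, mean ≈ (1 - pi) * lam = 1.8
```

The marginalized model of the abstract parameterizes log of this marginal mean directly as a function of covariates, which is what makes its coefficients interpretable as log relative rates.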

  1. Marginalized zero-altered models for longitudinal count data

    PubMed Central

    Tabb, Loni Philip; Tchetgen, Eric J. Tchetgen; Wellenius, Greg A.; Coull, Brent A.

    2015-01-01

    Count data often exhibit more zeros than predicted by common count distributions like the Poisson or negative binomial. In recent years, there has been considerable interest in methods for analyzing zero-inflated count data in longitudinal or other correlated data settings. A common approach has been to extend zero-inflated Poisson models to include random effects that account for correlation among observations. However, these models have been shown to have a few drawbacks, including interpretability of regression coefficients and numerical instability of fitting algorithms even when the data arise from the assumed model. To address these issues, we propose a model that parameterizes the marginal associations between the count outcome and the covariates as easily interpretable log relative rates, while including random effects to account for correlation among observations. One of the main advantages of this marginal model is that it allows a basis upon which we can directly compare the performance of standard methods that ignore zero inflation with that of a method that explicitly takes zero inflation into account. We present simulations of these various model formulations in terms of bias and variance estimation. Finally, we apply the proposed approach to analyze toxicological data of the effect of emissions on cardiac arrhythmias. PMID:27867423

  2. Boundary asymptotics for a non-neutral electrochemistry model with small Debye length

    NASA Astrophysics Data System (ADS)

    Lee, Chiun-Chang; Ryham, Rolf J.

    2018-04-01

    This article addresses the boundary asymptotics of the electrostatic potential in non-neutral electrochemistry models with small Debye length in bounded domains. Under standard physical assumptions motivated by non-electroneutral phenomena in oxidation-reduction reactions, we show that the electrostatic potential asymptotically blows up at boundary points with respect to the bulk reference potential as the scaled Debye length tends to zero. The analysis gives a lower bound for the blow-up rate with respect to the model parameters. Moreover, the maximum potential difference over any compact subset of the physical domain vanishes exponentially in the zero-Debye-length limit. The results mathematically confirm the physical description that electrolyte solutions are electrically neutral in the bulk and are strongly electrically non-neutral near charged surfaces.
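A representative dimensionless model of this class (an assumption for illustration; the paper's exact system may differ) is a Poisson-Boltzmann-type equation with scaled Debye length epsilon:

```latex
-\,\epsilon^{2}\,\Delta\phi \;=\; \sum_{i} z_i\, c_i^{0}\, e^{-z_i \phi}
\qquad \text{in } \Omega ,
```

with valences z_i and reference concentrations c_i^0. Non-neutrality enters through the boundary data, and as epsilon tends to zero the potential variation concentrates in thin boundary layers, matching the description of bulk neutrality with strong non-neutrality near charged surfaces.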

  3. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  4. Computational Knee Ligament Modeling Using Experimentally Determined Zero-Load Lengths

    PubMed Central

    Bloemker, Katherine H; Guess, Trent M; Maletsky, Lorin; Dodd, Kevin

    2012-01-01

    This study presents a subject-specific method of determining the zero-load lengths of the cruciate and collateral ligaments in computational knee modeling. Three cadaver knees were tested in a dynamic knee simulator. The cadaver knees also underwent manual envelope of motion testing to find their passive range of motion in order to determine the zero-load lengths for each ligament bundle. Computational multibody knee models were created for each knee and model kinematics were compared to experimental kinematics for a simulated walk cycle. One-dimensional non-linear spring damper elements were used to represent cruciate and collateral ligament bundles in the knee models. This study found that knee kinematics were highly sensitive to alterations of the zero-load length. The results also suggest optimal methods for defining each of the ligament bundle zero-load lengths, regardless of the subject. These results verify the importance of the zero-load length when modeling the knee joint and verify that manual envelope of motion measurements can be used to determine the passive range of motion of the knee joint. It is also believed that the method described here for determining zero-load length can be used for in vitro or in vivo subject-specific computational models. PMID:22523522

  5. Life Goals Matter to Happiness: A Revision of Set-Point Theory

    ERIC Educational Resources Information Center

    Headey, Bruce

    2008-01-01

    Using data from the long-running German Socio-Economic Panel Survey (SOEP), this paper provides evidence that life goals matter substantially to subjective well-being (SWB). Non-zero sum goals, which include commitment to family, friends and social and political involvement, promote life satisfaction. Zero sum goals, including commitment to career…

  6. Robustification and Optimization in Repetitive Control For Minimum Phase and Non-Minimum Phase Systems

    NASA Astrophysics Data System (ADS)

    Prasitmeeboon, Pitcha

    Repetitive control (RC) is a control method that specifically aims to converge to zero tracking error in control systems that execute a periodic command or are subject to periodic disturbances of known period. It uses the error from one period back to adjust the command in the present period. In theory, RC can completely eliminate periodic disturbance effects. RC has applications in many fields such as high-precision manufacturing in robotics, computer disk drives, and active vibration isolation in spacecraft. The first topic treated in this dissertation develops several simple RC design methods that are somewhat analogous to PID controller design in classical control. From the early days of digital control, emulation methods were developed based on a Forward Rule, a Backward Rule, Tustin's Formula, a modification using prewarping, and a pole-zero mapping method. These allowed one to convert a candidate controller design to discrete time in a simple way. We investigate to what extent they can be used to simplify RC design. A particular design is developed from modification of the pole-zero mapping rules, which is simple and sheds light on the robustness of repetitive control designs. RC convergence requires less than 90 degrees of model phase error at all frequencies up to the Nyquist frequency. A zero-phase cutoff filter is normally used to robustify against high-frequency model error when this limit is exceeded. The result is stabilization at the expense of failure to cancel errors above the cutoff. The second topic investigates a series of methods to use data to make real-time updates of the frequency response model, allowing one to increase or eliminate the frequency cutoff. These include the use of a moving window employing a recursive discrete Fourier transform (DFT), and use of a real-time projection algorithm from adaptive control for each frequency. 
The results can be used directly to make repetitive control corrections that cancel each error frequency, or they can be used to update a repetitive control FIR compensator. The aim is to reduce the final error level by using real-time frequency response model updates to successively increase the cutoff frequency, each time creating the improved model needed to produce convergence to zero error up to the higher cutoff. Non-minimum phase systems present a difficult design challenge to the sister field of Iterative Learning Control. The third topic investigates to what extent the same challenges appear in RC. One challenge is that the intrinsic non-minimum phase zero mapped from continuous time is close to the pole of the repetitive controller at +1, creating behavior similar to pole-zero cancellation. The near pole-zero cancellation causes slow learning at DC and low frequencies. A Min-Max cost function over the learning rate is presented. The Min-Max problem can be reformulated as a Quadratically Constrained Linear Programming problem. This approach is shown to be an RC design approach that addresses the main challenge of non-minimum phase systems: obtaining a reasonable learning rate at DC. Although it was illustrated that using the Min-Max objective improves learning at DC and low frequencies compared to other designs, the method requires model accuracy at high frequencies. In the real world, models usually have error at high frequencies. The fourth topic addresses how one can merge a quadratic penalty into the Min-Max cost function to increase robustness at high frequencies. The topic also considers limiting the Min-Max optimization to a finite frequency interval and applying an FIR zero-phase low-pass filter to cut off learning at frequencies above that interval.
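A minimal sketch of the repetitive-control idea described above (the plant, gains, and learning filter here are all invented, not taken from the dissertation): the command repeats with period N and is corrected by the error observed one period earlier, filtered through the inverse of the assumed first-order plant model so the per-period error contraction is flat across frequency.

```python
# Minimal repetitive control on a stable first-order plant (illustrative only).
# Learning law: u[k] = u[k-N] + (gamma/b) * (e[k-N+1] - a*e[k-N]),
# i.e. the previous period's error passed through the inverse plant model.
import math

N, periods = 20, 30
a, b, gamma = 0.9, 0.5, 0.8              # plant y[k+1] = a*y[k] + b*u[k]
r = [math.sin(2 * math.pi * k / N) for k in range(N)]   # periodic reference

y = 0.0
u = [0.0] * (N * periods)
e = [0.0] * (N * periods)
for k in range(N * periods):
    if k >= N:                           # learn from one period back
        u[k] = u[k - N] + (gamma / b) * (e[k - N + 1] - a * e[k - N])
    e[k] = r[k % N] - y
    y = a * y + b * u[k]

# per-period RMS tracking error shrinks geometrically
rms = [math.sqrt(sum(v * v for v in e[p * N:(p + 1) * N]) / N)
       for p in range(periods)]
print(rms[0], rms[-1])
```

With an exact model the contraction factor per period is |1 - gamma|; the dissertation's central concern is what happens when the model (and hence this inverse filter) has phase error, which this idealized sketch avoids.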

  7. Silver Creek: A Study of Stream Velocities and Erosion along the Ohio River near Clarksville, Indiana; McAlpine Lock and Dam Numerical Model

    DTIC Science & Technology

    2018-04-12

    split between the upper and lower gates, the tainter gate outflow can cause flow circulations or eddies to form, which requires the use of a multi...determined to not erode were assigned a bed layer thickness of zero. This included the stone weir, fossil beds, non-erodible vegetation, and upstream...

  8. A modeling comparison of deep greenhouse gas emissions reduction scenarios by 2030 in California

    DOE PAGES

    Yeh, Sonia; Yang, Christopher; Gibbs, Michael; ...

    2016-10-21

    California aims to reduce greenhouse gas (GHG) emissions to 40% below 1990 levels by 2030. We compare six energy models that have played various roles in informing the state policymakers in setting climate policy goals and targets. These models adopt a range of modeling structures, including stock-turnover back-casting models, a least-cost optimization model, macroeconomic/macro-econometric models, and an electricity dispatch model. Results from these models provide useful insights in terms of the transformations in the energy system required, including efficiency improvements in cars, trucks, and buildings, electrification of end-uses, low- or zero-carbon electricity and fuels, aggressive adoptions of zero-emission vehicles (ZEVs), demand reduction, and large reductions of non-energy GHG emissions. Some of these studies also suggest that the direct economic costs can be fairly modest or even generate net savings, while the indirect macroeconomic benefits are large, as shifts in employment and capital investments could have higher economic returns than conventional energy expenditures. These models, however, often assume perfect markets, perfect competition, and zero transaction costs. They also do not provide specific policy guidance on how these transformative changes can be achieved. Greater emphasis on modeling uncertainty, consumer behaviors, heterogeneity of impacts, and spatial modeling would further enhance policymakers' ability to design more effective and targeted policies. Here, this paper presents an example of how policymakers, energy system modelers and stakeholders interact and work together to develop and evaluate long-term state climate policy targets. Lastly, even though this paper focuses on California, the process of dialogue and interactions, modeling results, and lessons learned can be generally adopted across different regions and scales.

  9. Zero adjusted models with applications to analysing helminths count data.

    PubMed

    Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N

    2014-11-27

    It is common in public health and epidemiology that the outcome of interest is counts of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised controlled trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero-modified models (zero-inflated Poisson and zero-inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero-inflated negative binomial (ZINB) models showed the best performance on both datasets. With regard to capturing zero counts, these models performed better than the others. This paper showed that the NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
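The structural difference between the two mixtures compared above can be seen in a few lines (generic notation, synthetic parameters): a ZIP mixes extra zeros with a full Poisson, while a hurdle model pairs a point mass at zero with a zero-truncated Poisson, so every hurdle zero is "structural". Matching the hurdle's zero mass to the ZIP's total zero probability makes the two distributions coincide, which is why their count-component fits are so similar.

```python
# Contrasting the two zero-adjusted mixtures from the abstract (generic sketch).
import math

def pois(y, lam):
    return math.exp(-lam) * lam ** y / math.factorial(y)

def zip_pmf(y, lam, pi):
    return pi + (1 - pi) * pois(0, lam) if y == 0 else (1 - pi) * pois(y, lam)

def hurdle_pmf(y, lam, p0):
    if y == 0:
        return p0                                     # all zeros from the hurdle
    return (1 - p0) * pois(y, lam) / (1 - pois(0, lam))  # zero-truncated counts

# Matching p0 to the ZIP's total zero mass makes the two distributions agree.
lam, pi = 2.0, 0.3
p0 = zip_pmf(0, lam, pi)
diffs = max(abs(zip_pmf(y, lam, pi) - hurdle_pmf(y, lam, p0)) for y in range(20))
print(p0, diffs)   # diffs ≈ 0: same fit, different parameterization of zeros
```

The estimated zero-component parameters differ in meaning (total zero probability versus excess-zero probability), which is the interpretation point the abstract raises.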

  10. Zero inflation in ordinal data: Incorporating susceptibility to response through the use of a mixture model

    PubMed Central

    Kelley, Mary E.; Anderson, Stewart J.

    2008-01-01

    The aim of the paper is to produce a methodology that will allow users of ordinal scale data to more accurately model the distribution of ordinal outcomes in which some subjects are susceptible to exhibiting the response and some are not (i.e., the dependent variable exhibits zero inflation). This situation occurs with ordinal scales in which there is an anchor that represents the absence of the symptom or activity, such as “none”, “never” or “normal”, and is particularly common when measuring abnormal behavior, symptoms, and side effects. Due to the unusually large number of zeros, traditional statistical tests of association can be non-informative. We propose a mixture model for ordinal data with a built-in probability of non-response that allows modeling of the range (e.g., severity) of the scale, while simultaneously modeling the presence/absence of the symptom. Simulations show that the model is well behaved and a likelihood ratio test can be used to choose between the zero-inflated and the traditional proportional odds model. The model, however, does have minor restrictions on the nature of the covariates that must be satisfied in order for the model to be identifiable. The method is particularly relevant for public health research such as large epidemiological surveys where more careful documentation of the reasons for response may be difficult. PMID:18351711

  11. Bayesian evidence for non-zero θ13 and CP-violation in neutrino oscillations

    NASA Astrophysics Data System (ADS)

    Bergström, Johannes

    2012-08-01

    We present the Bayesian method for evaluating the evidence for a non-zero value of the leptonic mixing angle θ13 and CP-violation in neutrino oscillation experiments. This is an application of the well-established method of Bayesian model selection, of which we give a concise and pedagogical overview. When comparing the hypothesis θ13 = 0 with hypotheses where θ13 > 0 using global data but excluding the recent reactor measurements, we obtain only a weak preference for a non-zero θ13, even though the significance is over 3σ. We then add the reactor measurements one by one and show how the evidence for θ13 > 0 quickly increases. When including the Double Chooz, Daya Bay, and RENO data, the evidence becomes overwhelming, with a posterior probability of the hypothesis θ13 = 0 below 10^-11. Owing to the small amount of information on the CP-phase δ, very similar evidences are obtained for the CP-conserving and CP-violating hypotheses. Hence, there is, not unexpectedly, neither evidence for nor against leptonic CP-violation. However, when future experiments aiming to search for CP-violation have started taking data, this question will be of great importance and the method described here can be used as an important complement to standard analyses.
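    The posterior probability quoted above follows from the Bayes factor by elementary odds arithmetic. A minimal sketch (the 10^11 evidence ratio below is illustrative, chosen to mirror the quoted posterior probability; the paper's actual evidences differ):

```python
def posterior_prob_H0(B10, prior_odds_H1=1.0):
    """P(H0 | data) given the Bayes factor B10 = Z(H1)/Z(H0) and
    prior odds P(H1)/P(H0); equal prior odds by default."""
    post_odds_H1 = B10 * prior_odds_H1
    return 1.0 / (1.0 + post_odds_H1)

# An evidence ratio of 1e11 against theta13 = 0 (illustrative) gives
print(posterior_prob_H0(1e11))   # ~1e-11: overwhelming evidence against H0
print(posterior_prob_H0(1.0))    # 0.5: the data do not discriminate
```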

  12. Exact master equation and non-Markovian decoherence dynamics of Majorana zero modes under gate-induced charge fluctuations

    NASA Astrophysics Data System (ADS)

    Lai, Hon-Lam; Yang, Pei-Yun; Huang, Yu-Wei; Zhang, Wei-Min

    2018-02-01

    In this paper, we use the exact master equation approach to investigate the decoherence dynamics of Majorana zero modes in the Kitaev model, a 1D p-wave spinless topological superconducting chain (TSC) that is disturbed by gate-induced charge fluctuations. The exact master equation is derived by extending the Feynman-Vernon influence functional technique to fermionic open systems involving pairing excitations. We obtain the exact master equation for the zero-energy Bogoliubov quasiparticle (bogoliubon) in the TSC, and then transform it into the master equation for the Majorana zero modes. Within this exact master equation formalism, we can describe in detail the non-Markovian decoherence dynamics of the zero-energy bogoliubon as well as Majorana zero modes under local perturbations. We find that at zero temperature, local charge fluctuations induce level broadening to one of the Majorana zero modes but there is an isolated peak (localized bound state) located at zero energy that partially protects the Majorana zero mode from decoherence. At finite temperatures, the zero-energy localized bound state does not precisely exist, but the coherence of the Majorana zero mode can still be partially but weakly protected, due to the sharp dip of the spectral density near the zero frequency. The decoherence will be enhanced as one increases the charge fluctuations and/or the temperature of the gate.

  13. Marginalized zero-inflated negative binomial regression with application to dental caries

    PubMed Central

    Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon

    2015-01-01

    The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children. PMID:26568034
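    The marginalization idea rests on the identity E[Y] = (1 − ψ)μ for a zero-inflated count model with structural-zero probability ψ and susceptible-subpopulation mean μ: MZINB parameterizes the overall mean directly instead of μ. A quick simulation check under assumed parameter values (not the dental-caries estimates):

```python
import numpy as np

# ZINB draw: structural zero with prob psi; otherwise NB(mean mu, dispersion k),
# generated as a gamma-mixed Poisson. Marginal-mean identity: E[Y] = (1-psi)*mu.
rng = np.random.default_rng(1)
n, psi, mu, k = 200_000, 0.25, 4.0, 2.0

nb = rng.poisson(rng.gamma(shape=k, scale=mu / k, size=n))
y = np.where(rng.random(n) < psi, 0, nb)

print(y.mean())            # ~3.0, the empirical overall mean
print((1 - psi) * mu)      # 3.0: the quantity MZINB models directly
```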

  14. Statistical procedures for analyzing mental health services data.

    PubMed

    Elhai, Jon D; Calhoun, Patrick S; Ford, Julian D

    2008-08-15

    In mental health services research, analyzing service utilization data often poses serious problems, given the presence of substantially skewed data distributions. This article presents a non-technical introduction to statistical methods specifically designed to handle the complexly distributed datasets that represent mental health service use, including Poisson, negative binomial, zero-inflated, and zero-truncated regression models. A flowchart is provided to assist the investigator in selecting the most appropriate method. Finally, a dataset of mental health service use reported by medical patients is described, and a comparison of results across several different statistical methods is presented. Implications of matching data analytic techniques appropriately with the often complexly distributed datasets of mental health services utilization variables are discussed.

  15. Prediction of Osmotic Pressure of Ionic Liquids Inside a Nanoslit by MD Simulation and Continuum Approach

    NASA Astrophysics Data System (ADS)

    Moon, Gi Jong; Yang, Yu Dong; Oh, Jung Min; Kang, In Seok

    2017-11-01

    Osmotic pressure plays an important role in the processes of charging and discharging of lithium batteries. In this work, the osmotic pressure of ionic liquids confined inside a nanoslit is calculated by using both MD simulation and a continuum approach. In the case of MD simulation, an ionic liquid is modeled as singly charged spheres with a short-ranged repulsive Lennard-Jones potential. The radii of the spheres are 0.5 nm, reflecting the symmetry of ion sizes for simplicity. The simulation box size is 11 nm × 11 nm × 7.5 nm with 1050 ion pairs. The concentration of the ionic liquid is about 1.922 mol/L, and the total charge on an individual wall varies from ±60e (7.944 μC/cm²) to ±600e (79.44 μC/cm²). In the case of the continuum approach, we classify the problems according to the correlation length and steric factor, and consider four separate cases: 1) zero correlation length and zero steric factor, 2) zero correlation length and non-zero steric factor, 3) non-zero correlation length and zero steric factor, and 4) non-zero correlation length and non-zero steric factor. Better understanding of the osmotic pressure of ionic liquids confined inside a nanoslit can be achieved by comparing the results of the MD simulation and the continuum approach. This research was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIP: Ministry of Science, ICT & Future Planning) (No. 2017R1D1A1B05035211).
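    The quoted surface charge densities follow from elementary arithmetic (60 or 600 elementary charges spread over an 11 nm × 11 nm wall), which also confirms that the unit is μC/cm²:

```python
E = 1.602176634e-19            # elementary charge in C (exact SI value)
side_cm = 11e-9 * 100.0        # 11 nm expressed in cm
area_cm2 = side_cm ** 2        # area of one wall, cm^2

sigma_60 = 60 * E / area_cm2 * 1e6     # muC/cm^2
sigma_600 = 600 * E / area_cm2 * 1e6
print(sigma_60, sigma_600)             # ~7.944 and ~79.44, as quoted
```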

  16. A flower-like Ising model. Thermodynamic properties

    NASA Astrophysics Data System (ADS)

    Mejdani, R.; Ifti, M.

    1995-03-01

    We consider a flower-like Ising model, in which there are some additional bonds (in the “flower-core”) compared to a pure Ising chain. To understand the behaviour of this system, and particularly the competition between the ferromagnetic (usual) bonds along the chain and the antiferromagnetic (additional) bonds across the chain, we study analytically and iteratively the main thermodynamic quantities. Particularly interesting, in the zero-field and zero-temperature limit, is the behaviour of the magnetization and the susceptibility, which are closely related to the ground-state configurations and their degeneracies. This degeneracy explains the non-zero entropy at zero temperature found in our results. Also, this model could be useful for experimental investigations studying the saturation curves of enzyme kinetics or the melting curves of DNA denaturation in some flower-like configurations.

  17. Modeling health survey data with excessive zero and K responses.

    PubMed

    Lin, Ting Hsiang; Tsai, Min-Hsiao

    2013-04-30

    Zero-inflated Poisson regression is a popular tool used to analyze data with excessive zeros. Although much work has already been performed to fit zero-inflated data, most models heavily depend on special features of the individual data. To be specific, this means that there is a sizable group of respondents who endorse the same answers, making the data have peaks. In this paper, we propose a new model with the flexibility to model excessive counts other than zero; the model is a mixture of multinomial logistic and Poisson regression, in which the multinomial logistic component models the occurrence of excessive counts, including zeros, K (where K is a positive integer) and all other values. The Poisson regression component models the counts that are assumed to follow a Poisson distribution. Two examples are provided to illustrate our models when the data have counts containing many ones and sixes. As a result, the proposed zero- and K-inflated models exhibit a better fit than the zero-inflated Poisson and standard Poisson regressions. Copyright © 2012 John Wiley & Sons, Ltd.
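    The mixture described above has a simple probability mass function: a multinomial-logistic component places extra mass at 0 and at K, and the remainder follows a Poisson distribution. A sketch with assumed (not fitted) mixture weights, using K = 6 in the spirit of the paper's example with many sixes:

```python
from math import exp, factorial

def zk_inflated_pmf(y, p0, pK, lam, K=6):
    """Mixture pmf: extra mass p0 at 0 and pK at K; Poisson(lam) otherwise."""
    pois = exp(-lam) * lam ** y / factorial(y)
    return (1 - p0 - pK) * pois + p0 * (y == 0) + pK * (y == K)

# Assumed mixture weights, for illustration only
p0, pK, lam = 0.15, 0.10, 2.5
total = sum(zk_inflated_pmf(y, p0, pK, lam) for y in range(50))
print(total)                            # ~1.0: a proper distribution
print(zk_inflated_pmf(6, p0, pK, lam))  # mass at 6 exceeds its Poisson share
```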

  18. The Application of Censored Regression Models in Low Streamflow Analyses

    NASA Astrophysics Data System (ADS)

    Kroll, C.; Luz, J.

    2003-12-01

    Estimation of low streamflow statistics at gauged and ungauged river sites is often a daunting task. This process is further confounded by the presence of intermittent streamflows, where streamflow is sometimes reported as zero, within a region. Streamflows recorded as zero may truly be zero, or may be less than the measurement detection limit. Such data are often referred to as censored data. Numerous methods have been developed to characterize intermittent streamflow series. Logit regression has been proposed to develop regional models of the probability that annual lowflow series (such as 7-day lowflows) are zero. In addition, Tobit regression, a method of regression that allows for censored dependent variables, has been proposed for lowflow regional regression models in regions where the lowflow statistic of interest is estimated as zero at some sites. While these methods have been proposed, their use in practice has been limited. Here a delete-one jackknife simulation is presented to examine the performance of Logit and Tobit models of 7-day annual minimum flows in 6 USGS water resource regions in the United States. For the Logit model, an assessment is made of whether sites are correctly classified as having at least 10% of 7-day annual lowflows equal to zero. In such a situation, the 7-day, 10-year lowflow (Q710), a commonly employed low streamflow statistic, would be reported as zero. For the Tobit model, a comparison is made between results from the Tobit model and from performing either ordinary least squares (OLS) or principal component regression (PCR) after the zero sites are dropped from the analysis. Initial results for the Logit model indicate this method has a high probability of correctly classifying sites into groups with zero and non-zero Q710 values. Initial results also indicate the Tobit model produces better results than PCR and OLS when more than 5% of the sites in the region have Q710 values calculated as zero.
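    Why Tobit regression helps here can be seen in a small simulation: ordinary least squares on zero-censored data attenuates the regression slope, while maximizing the censored-normal (Tobit) likelihood does not. The sketch below uses hypothetical data, not the USGS lowflow data, and holds the intercept and error scale at their true values for brevity:

```python
import numpy as np
from math import erf, sqrt, log, pi

# Latent lowflow y* = a + b*x + noise, recorded as 0 whenever y* <= 0
rng = np.random.default_rng(2)
n, a, b, sigma = 5000, -0.5, 1.0, 1.0
x = rng.normal(size=n)
y = np.maximum(a + b * x + rng.normal(0.0, sigma, n), 0.0)
cens = y <= 0

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def tobit_ll(b_try):
    # Censored obs contribute P(y* <= 0); uncensored obs a normal density
    mu = a + b_try * x
    ll = sum(log(max(norm_cdf(-m / sigma), 1e-300)) for m in mu[cens])
    ll += np.sum(-0.5 * ((y[~cens] - mu[~cens]) / sigma) ** 2
                 - log(sigma * sqrt(2.0 * pi)))
    return ll

b_ols = np.polyfit(x, y, 1)[0]                   # attenuated toward zero
b_grid = np.linspace(0.5, 1.5, 101)
b_tobit = b_grid[np.argmax([tobit_ll(bt) for bt in b_grid])]
print(b_ols, b_tobit)   # OLS slope well below 1; Tobit slope close to 1
```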

  19. Modeling zero-modified count and semicontinuous data in health services research Part 1: background and overview.

    PubMed

    Neelon, Brian; O'Malley, A James; Smith, Valerie A

    2016-11-30

    Health services data often contain a high proportion of zeros. In studies examining patient hospitalization rates, for instance, many patients will have no hospitalizations, resulting in a count of zero. When the number of zeros is greater or less than expected under a standard count model, the data are said to be zero modified relative to the standard model. A similar phenomenon arises with semicontinuous data, which are characterized by a spike at zero followed by a continuous distribution with positive support. When analyzing zero-modified count and semicontinuous data, flexible mixture distributions are often needed to accommodate both the excess zeros and the typically skewed distribution of nonzero values. Various models have been introduced over the past three decades to accommodate such data, including hurdle models, zero-inflated models, and two-part semicontinuous models. This tutorial describes recent modeling strategies for zero-modified count and semicontinuous data and highlights their role in health services research studies. Part 1 of the tutorial, presented here, provides a general overview of the topic. Part 2, appearing as a companion piece in this issue of Statistics in Medicine, discusses three case studies illustrating applications of the methods to health services research. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Structural zeroes and zero-inflated models.

    PubMed

    He, Hua; Tang, Wan; Wang, Wenjuan; Crits-Christoph, Paul

    2014-08-01

    In psychosocial and behavioral studies count outcomes recording the frequencies of the occurrence of some health or behavior outcomes (such as the number of unprotected sexual behaviors during a period of time) often contain a preponderance of zeroes because of the presence of 'structural zeroes' that occur when some subjects are not at risk for the behavior of interest. Unlike random zeroes (responses that can be greater than zero, but are zero due to sampling variability), structural zeroes are usually very different, both statistically and clinically. False interpretations of results and study findings may result if differences in the two types of zeroes are ignored. However, in practice, the status of the structural zeroes is often not observed and this latent nature complicates the data analysis. In this article, we focus on one model, the zero-inflated Poisson (ZIP) regression model, which is commonly used to address zero-inflated data. We first give a brief overview of the issues of structural zeroes and the ZIP model. We then give an illustration of ZIP with data from a study on HIV-risk sexual behaviors among adolescent girls. Sample codes in SAS and Stata are also included to help perform and explain ZIP analyses.
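    Under a fitted ZIP model one can also compute, for each observed zero, the posterior probability that it is structural rather than random. A minimal sketch with assumed parameter values (not the study's estimates):

```python
from math import exp

def p_structural_given_zero(pi, lam):
    """Posterior probability that an observed zero is structural under a
    ZIP model: pi / (pi + (1 - pi) * exp(-lam))."""
    return pi / (pi + (1 - pi) * exp(-lam))

# Hypothetical estimates: 30% not at risk; mean count 2 among those at risk
print(p_structural_given_zero(0.30, 2.0))   # ~0.76
# The larger the Poisson mean, the more strongly a zero points to the
# structural (not-at-risk) state
print(p_structural_given_zero(0.30, 6.0))   # ~0.99
```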

  1. The effects of ion adsorption on the potential of zero charge and the differential capacitance of charged aqueous interfaces

    NASA Astrophysics Data System (ADS)

    Uematsu, Yuki; Netz, Roland R.; Bonthuis, Douwe Jan

    2018-02-01

    Using a box profile approximation for the non-electrostatic surface adsorption potentials of anions and cations, we calculate the differential capacitance of aqueous electrolyte interfaces from a numerical solution of the Poisson-Boltzmann equation, including steric interactions between the ions and an inhomogeneous dielectric profile. Preferential adsorption of the positive (negative) ion shifts the minimum of the differential capacitance to positive (negative) surface potential values. The trends are similar for the potential of zero charge; however, the potential of zero charge does not correspond to the minimum of the differential capacitance in the case of asymmetric ion adsorption, contrary to the assumption commonly used to determine the potential of zero charge. Our model can be used to obtain more accurate estimates of ion adsorption properties from differential capacitance or electrocapillary measurements. Asymmetric ion adsorption also affects the relative heights of the characteristic maxima in the differential capacitance curves as a function of the surface potential, but even for strong adsorption potentials the effect is small, making it difficult to reliably determine the adsorption properties from the peak heights.
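    As a baseline for the adsorption-induced shifts described above: with point ions, no adsorption, and a uniform dielectric, the Poisson-Boltzmann differential capacitance reduces to the Gouy-Chapman expression, whose minimum does coincide with the potential of zero charge. A sketch for a 0.1 M 1:1 aqueous electrolyte (illustrative conditions, not the paper's):

```python
from math import sqrt, cosh

# SI constants; illustrative conditions: water, 0.1 M 1:1 electrolyte, 25 C
e, kB, T = 1.602e-19, 1.381e-23, 298.15
NA, eps0, eps_r = 6.022e23, 8.854e-12, 78.5
n0 = 0.1 * 1000 * NA                     # ion number density, m^-3

kappa = sqrt(2 * n0 * e**2 / (eps_r * eps0 * kB * T))   # inverse Debye length

def C_gc(phi0):
    """Gouy-Chapman differential capacitance (F/m^2) at surface potential phi0 (V)."""
    return eps_r * eps0 * kappa * cosh(e * phi0 / (2 * kB * T))

print(1e9 / kappa)        # Debye length, ~0.96 nm at 0.1 M
print(C_gc(0.0) * 100)    # ~72 muF/cm^2: the minimum, at the PZC in this limit
print(C_gc(0.1) * 100)    # rises symmetrically away from the PZC
```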

  2. Thermalization threshold in models of 1D fermions

    NASA Astrophysics Data System (ADS)

    Mukerjee, Subroto; Modak, Ranjan; Ramswamy, Sriram

    2013-03-01

    The question of how isolated quantum systems thermalize is an interesting and open one. In this study we equate thermalization with non-integrability to try to answer this question. In particular, we study the effect of system size on the integrability of 1D systems of interacting fermions on a lattice. We find that for a finite-sized system, a non-zero value of an integrability breaking parameter is required to make an integrable system appear non-integrable. Using exact diagonalization and diagnostics such as energy level statistics and the Drude weight, we find that the threshold value of the integrability breaking parameter scales to zero as a power law with system size. We find the exponent to be the same for different models with its value depending on the random matrix ensemble describing the non-integrable system. We also study a simple analytical model of a non-integrable system with an integrable limit to better understand how a power law emerges.

  3. An observational study of emergency department utilization among enrollees of Minnesota Health Care Programs: financial and non-financial barriers have different associations.

    PubMed

    Shippee, Nathan D; Shippee, Tetyana P; Hess, Erik P; Beebe, Timothy J

    2014-02-08

    Emergency department (ED) use is costly, and especially frequent among publicly insured populations in the US, who also disproportionately encounter financial (cost/coverage-related) and non-financial/practical barriers to care. The present study examines the distinct associations financial and non-financial barriers to care have with patterns of ED use among a publicly insured population. This observational study uses linked administrative-survey data for enrollees of Minnesota Health Care Programs to examine patterns in ED use: specifically, enrollee self-report of the ED as usual source of care, and past-year counts of 0, 1, or 2+ ED visits from administrative data. Main independent variables included a count of seven enrollee-reported financial concerns about healthcare costs and coverage, and a count of seven enrollee-reported non-financial, practical barriers to access (e.g., limited office hours, problems with childcare). Covariates included health, health care, and demographic measures. In multivariate regression models, only financial concerns were positively associated with reporting the ED as usual source of care, but only non-financial barriers were significantly associated with greater ED visits. Regression-adjusted values indicated notable differences in ED visits by number of non-financial barriers: zero non-financial barriers meant an adjusted 78% chance of having zero ED visits (95% C.I.: 70.5%-85.5%), a 15.9% chance of 1 visit (95% C.I.: 10.4%-21.3%), and a 6.2% chance (95% C.I.: 3.5%-8.8%) of 2+ visits, whereas having all seven non-financial barriers meant a 48.2% adjusted chance of zero visits (95% C.I.: 30.9%-65.6%), a 31.8% chance of 1 visit (95% C.I.: 24.2%-39.5%), and a 20% chance (95% C.I.: 8.4%-31.6%) of 2+ visits. Financial barriers were associated with identifying the ED as one's usual source of care, but non-financial barriers were associated with actual ED visits.
Outreach/literacy efforts may help reduce reliance on/perception of ED as usual source of care, whereas improved targeting/availability of covered services may help curb frequent actual visits, among publicly insured individuals.

  4. Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates

    USGS Publications Warehouse

    Gray, B.R.

    2005-01-01

    The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). 
Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model, but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not the percent zero) criteria. This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
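    The Poisson-gamma improvement reported above can be reproduced in miniature: simulate overdispersed counts, fit Poisson and negative binomial (Poisson-gamma) models by maximum likelihood, and compare AIC. The data below are hypothetical, not the mayfly densities, and the dispersion parameter is fit by a simple grid:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(3)
n, mu, k = 2000, 5.0, 0.5                 # heavy overdispersion: var >> mean
y = rng.poisson(rng.gamma(shape=k, scale=mu / k, size=n))
print(y.var() / y.mean())                 # variance:mean ratio well above 1

def nb_ll(mu_, k_):
    # Negative binomial (Poisson-gamma) log-likelihood, mean mu_, dispersion k_
    return np.sum(gammaln(y + k_) - gammaln(k_) - gammaln(y + 1)
                  + k_ * np.log(k_ / (k_ + mu_)) + y * np.log(mu_ / (k_ + mu_)))

def pois_ll(mu_):
    return np.sum(y * np.log(mu_) - mu_ - gammaln(y + 1))

mu_hat = y.mean()                          # MLE of the mean in both models
k_grid = np.linspace(0.05, 5.0, 200)
k_hat = k_grid[np.argmax([nb_ll(mu_hat, kk) for kk in k_grid])]

aic_pois = 2 * 1 - 2 * pois_ll(mu_hat)
aic_nb = 2 * 2 - 2 * nb_ll(mu_hat, k_hat)
print(aic_pois, aic_nb)                    # AIC strongly favors the NB model
```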

  5. Dimer geometry, amoebae and a vortex dimer model

    NASA Astrophysics Data System (ADS)

    Nash, Charles; O'Connor, Denjoe

    2017-09-01

    We present a geometrical approach and introduce a connection for dimer problems on bipartite and non-bipartite graphs. In the bipartite case the connection is flat but has non-trivial Z2 holonomy round certain curves. This holonomy has the universality property that it does not change as the number of vertices in the fundamental domain of the graph is increased. It is argued that the K-theory of the torus, with or without punctures, is the appropriate underlying invariant. In the non-bipartite case the connection has non-zero curvature as well as non-zero Chern number. The curvature does not require the introduction of a magnetic field. The phase diagram of these models is captured by what is known as an amoeba. We introduce a dimer model with negative edge weights which correspond to vortices. The amoebae for various models are studied with particular emphasis on the case of negative edge weights. Vortices give rise to new kinds of amoebae with certain singular structures which we investigate. On the amoeba of the vortex full hexagonal lattice we find the partition function corresponds to that of a massless Dirac doublet.

  6. Zero Power Non-Contact Suspension System with Permanent Magnet Motion Feedback

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Oka, Koichi

    This paper proposes a zero power control method for a permanent magnet suspension system consisting mainly of a permanent magnet, an actuator, sensors, a suspended iron ball and a spring. A system using this zero power control method will consume quasi-zero power when the levitated object is suspended in an equilibrium state. To realize zero power control, a spring is installed in the magnetic suspension device to counterbalance the gravitational force on the actuator in the equilibrium position. In addition, an integral feedback loop in the controller affords zero actuator current when the device is in a balanced state. In this study, a model was set up for feasibility analysis, a prototype was manufactured for experimental confirmation, numerical simulations of zero power control with nonlinear attractive force were carried out based on the model, and experiments were completed to confirm the practicality of the prototype. The simulations and experiments were performed under varied conditions, such as without springs and without zero power control, with springs and without zero power control, and with springs and with zero power control, using different springs and integral feedback gains. Some results are shown and analyzed in this paper. All results indicate that this zero power control method is feasible and effective for use in this suspension system with a permanent magnet motion feedback loop.

  7. Effective stochastic generator with site-dependent interactions

    NASA Astrophysics Data System (ADS)

    Khamehchi, Masoumeh; Jafarpour, Farhad H.

    2017-11-01

    It is known that the stochastic generators of effective processes associated with the unconditioned dynamics of rare events might consist of non-local interactions; however, it can be shown that there are special cases for which these generators can include local interactions. In this paper, we investigate this possibility by considering systems of classical particles moving on a one-dimensional lattice with open boundaries. The particles might have hard-core interactions similar to the particles in an exclusion process, or there can be many arbitrary particles at a single site in a zero-range process. Assuming that the interactions in the original process are local and site-independent, we will show that under certain constraints on the microscopic reaction rules, the stochastic generator of an unconditioned process can be local but site-dependent. As two examples, the asymmetric zero-temperature Glauber model and the A-model with diffusion are presented and studied under the above-mentioned constraints.

  8. Pre-equilibrium Longitudinal Flow in the IP-Glasma Framework for Pb+Pb Collisions at the LHC

    NASA Astrophysics Data System (ADS)

    McDonald, Scott; Shen, Chun; Fillion-Gourdeau, François; Jeon, Sangyong; Gale, Charles

    2017-08-01

    In this work, we debut a new implementation of IP-Glasma and quantify the pre-equilibrium longitudinal flow in the IP-Glasma framework. The saturation physics based IP-Glasma model naturally provides a non-zero initial longitudinal flow through its pre-equilibrium Yang-Mills evolution. A hybrid IP-Glasma+MUSIC+UrQMD framework is employed to test this new implementation against experimental data and to make further predictions about hadronic flow observables in Pb+Pb collisions at 5.02 TeV. Finally, the non-zero pre-equilibrium longitudinal flow of the IP-Glasma model is quantified, and its origin is briefly discussed.

  9. Marginalized multilevel hurdle and zero-inflated models for overdispersed and correlated count data with excess zeros.

    PubMed

    Kassahun, Wondwosen; Neyens, Thomas; Molenberghs, Geert; Faes, Christel; Verbeke, Geert

    2014-11-10

    Count data are collected repeatedly over time in many applications, such as biology, epidemiology, and public health. Such data are often characterized by the following three features. First, correlation due to the repeated measures is usually accounted for using subject-specific random effects, which are assumed to be normally distributed. Second, the sample variance may exceed the mean, and hence, the theoretical mean-variance relationship is violated, leading to overdispersion. This is usually allowed for based on a hierarchical approach, combining a Poisson model with gamma distributed random effects. Third, an excess of zeros beyond what standard count distributions can predict is often handled by either the hurdle or the zero-inflated model. A zero-inflated model assumes two processes as sources of zeros and combines a count distribution with a discrete point mass as a mixture, while the hurdle model separately handles zero observations and positive counts, where then a truncated-at-zero count distribution is used for the non-zero state. In practice, however, all these three features can appear simultaneously. Hence, a modeling framework that incorporates all three is necessary, and this presents challenges for the data analysis. Such models, when conditionally specified, will naturally have a subject-specific interpretation. However, adopting their purposefully modified marginalized versions leads to a direct marginal or population-averaged interpretation for parameter estimates of covariate effects, which is the primary interest in many applications. In this paper, we present a marginalized hurdle model and a marginalized zero-inflated model for correlated and overdispersed count data with excess zero observations and then illustrate these further with two case studies. 
The first dataset focuses on the Anopheles mosquito density around a hydroelectric dam, while adolescents' involvement in work, to earn money and support their families or themselves, is studied in the second example. Sub-models, which result from omitting the zero-inflation and/or overdispersion features, are also considered for comparison purposes. Analysis of the two datasets showed that accounting for the correlation, overdispersion, and excess zeros simultaneously resulted in a better fit to the data and, more importantly, that omission of any of them leads to incorrect marginal inference and erroneous conclusions about covariate effects. Copyright © 2014 John Wiley & Sons, Ltd.
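    The distinction drawn above between hurdle and zero-inflated models has a precise special case worth seeing: every zero-inflated Poisson pmf coincides with a Poisson hurdle pmf whose zero probability absorbs both zero sources, which is why the count-component fits are often so similar. A sketch of the identity:

```python
from math import exp, factorial

def zip_pmf(y, pi, lam):
    """Zero-inflated Poisson: point mass pi at zero mixed with Poisson(lam)."""
    pois = exp(-lam) * lam ** y / factorial(y)
    return pi * (y == 0) + (1 - pi) * pois

def hurdle_pmf(y, p0, lam):
    """Poisson hurdle: zero with prob p0; else truncated-at-zero Poisson(lam)."""
    if y == 0:
        return p0
    pois = exp(-lam) * lam ** y / factorial(y)
    return (1 - p0) * pois / (1 - exp(-lam))

pi, lam = 0.3, 2.5
p0 = pi + (1 - pi) * exp(-lam)   # hurdle zero prob. absorbing both zero sources
diff = max(abs(zip_pmf(y, pi, lam) - hurdle_pmf(y, p0, lam)) for y in range(30))
print(diff)   # ~0: the two parameterizations give the same distribution
```

    The two models differ, of course, in how covariates enter the zero component, which is where their interpretations diverge.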

  10. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Gaurav; Raju, Mandhapati P.; Sung, Chih-Jen

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise in the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations is reduced. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation.

  11. Nanomaterials application for heavy metals recovery from polluted water: The combination of nano zero-valent iron and carbon nanotubes. Competitive adsorption non-linear modeling.

    PubMed

    Vilardi, Giorgio; Mpouras, Thanasis; Dermatas, Dimitris; Verdone, Nicola; Polydera, Angeliki; Di Palma, Luca

    2018-06-01

    Carbon Nanotubes (CNTs) and nano Zero-Valent Iron (nZVI) particles, as well as two nanocomposites based on these novel nanomaterials, were employed as nano-adsorbents for the removal of hexavalent chromium, selenium and cobalt from aqueous solutions. Nanomaterials characterization included the determination of their point of zero charge and particle size distribution. CNTs were further analyzed using scanning electron microscopy, thermogravimetric analysis and Raman spectroscopy to determine their morphology and structural properties. Batch experiments were carried out to investigate the removal efficiency and the possible competitive interactions among metal ions. Adsorption was found to be the main removal mechanism, except for Cr(VI) treatment by nZVI, where reduction was the predominant mechanism. The removal efficiency was estimated in decreasing order as CNTs-nZVI > nZVI > CNTs > CNTs-nZVI*, independently of the tested heavy metal. In the case of competitive adsorption, Cr(VI) exhibited the highest affinity for every adsorbent. Preferential Cr(VI) removal was also observed in binary systems of the tested metals by means of the CNTs-nZVI nanocomposite. Single-species adsorption was better described by the non-linear Sips model, whilst competitive adsorption followed the modified Langmuir model. The CNTs-nZVI nanocomposite was tested for its reusability, and showed high adsorption efficiency (the q_max values decreased by less than 50% with respect to the first use) even after three cycles of use. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Multipartite interacting scalar dark matter in the light of updated LUX data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Subhaditya; Ghosh, Purusottam; Poulose, Poulose, E-mail: subhab@iitg.ernet.in, E-mail: p.ghosh@iitg.ernet.in, E-mail: poulose@iitg.ernet.in

    2017-04-01

    We explore constraints on a multipartite dark matter (DM) framework composed of singlet scalar DM interacting with the Standard Model (SM) through Higgs portal coupling. We compute relic density and direct search constraints, including the updated LUX bound, for a two-component scenario with non-zero interactions between the two DM components in a Z₂ × Z₂′ framework, in comparison with the one having O(2) symmetry. We point out the availability of a sizeable region of parameter space of such a multipartite model with DM-DM interactions.

  13. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero, and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
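
The identifiability statistic can be illustrated with a toy two-parameter model. The sketch below (hypothetical sensitivity matrix, hand-rolled 2x2 eigen-solve so no linear-algebra library is needed) computes the direction cosine between each parameter axis and its projection onto the solution space spanned by the top-k right singular vectors of the sensitivity matrix J:

```python
import math

def identifiability_2param(J, k=1):
    """Parameter identifiability for a 2-parameter model: the direction
    cosine between each parameter axis and its projection onto the
    calibration solution space (span of the top-k right singular vectors
    of the weighted sensitivity matrix J, obtained here as eigenvectors
    of the 2x2 matrix J^T J)."""
    a = sum(r[0] * r[0] for r in J)       # entries of J^T J
    b = sum(r[0] * r[1] for r in J)
    c = sum(r[1] * r[1] for r in J)
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    eigvals = [tr / 2.0 + disc, tr / 2.0 - disc]      # descending order
    vecs = []
    for l in eigvals:
        if abs(b) > 1e-12:
            vx, vy = b, l - a
        else:   # diagonal J^T J: eigenvectors are the parameter axes
            vx, vy = (1.0, 0.0) if abs(l - a) <= abs(l - c) else (0.0, 1.0)
        n = math.hypot(vx, vy)
        vecs.append((vx / n, vy / n))
    sol = vecs[:k]                                    # solution space
    # norm of the projection of unit vector e_i onto the solution space
    return [math.sqrt(sum(v[i] ** 2 for v in sol)) for i in range(2)]

# A model whose outputs respond only to the first parameter:
print(identifiability_2param([[1.0, 0.0], [2.0, 0.0]]))  # → [1.0, 0.0]
```

The second parameter gets identifiability zero because it lies entirely in the calibration null space: no observation constrains it.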

  14. Particle formation and ordering in strongly correlated fermionic systems: Solving a model of quantum chromodynamics

    DOE PAGES

    Azaria, P.; Konik, R. M.; Lecheminant, P.; ...

    2016-08-03

    In our paper we study a (1+1)-dimensional version of the famous Nambu–Jona-Lasinio model of quantum chromodynamics (QCD2) both at zero and at finite baryon density. We use nonperturbative techniques (non-Abelian bosonization and the truncated conformal spectrum approach). When the baryon chemical potential, μ, is zero, we describe the formation of fermion three-quark (nucleons and Δ baryons) and boson (two-quark mesons, six-quark deuterons) bound states. We also study at μ=0 the formation of a topologically nontrivial phase. When the chemical potential exceeds the critical value and a finite baryon density appears, the model has a rich phase diagram which includes phases with a density wave and superfluid quasi-long-range (QLR) order, as well as a phase of a baryon Tomonaga-Luttinger liquid (strange metal). Finally, the QLR order results in either a condensation of scalar mesons (the density wave) or six-quark bound states (deuterons).

  15. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    NASA Astrophysics Data System (ADS)

    Mukherjee, Suvodip; Das, Santanu; Joy, Minu; Souradeep, Tarun

    2015-01-01

    The Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher order derivatives of the Hubble parameter source a constant difference between the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in the spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra. From the recent measurements of the CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we assess the viability of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r=0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will emerge from a joint analysis. We therefore consider different upper bounds on the value of r and show that the PPL model can accommodate a lower value of the tensor-to-scalar ratio (r<0.1 or r<0.01) for a scalar spectral index of ns=0.96 by having a non-zero value of the effective mass of the inflaton field, m_eff²/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m_eff²/H² at 5.7σ and 8.1σ for r<0.1 and r<0.01, respectively. With the BICEP-2 likelihood, m_eff²/H² = -0.0237 ± 0.0135, which is consistent with zero.

  16. Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)

    NASA Astrophysics Data System (ADS)

    Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi

    2017-06-01

    Hurdle negative binomial regression is a method for discrete dependent variables with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models the zero observations of the dependent variable, while the second part, a truncated negative binomial model, models the positive (non-negative integer) outcomes. The dependent variable in such cases may also be censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimator of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE), together with the associated test statistic for the censored hurdle negative binomial model. The model is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zero values in some observations and varying values in others. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
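
The truncated count component has a simple closed form: the negative binomial pmf renormalized over the positive counts. A minimal sketch (integer dispersion r for simplicity, hypothetical parameter values; no censoring or covariates):

```python
import math

def nb_pmf(y, r, p):
    """Negative binomial pmf: probability of y failures before the r-th
    success, with success probability p (integer r for simplicity)."""
    return math.comb(y + r - 1, y) * p ** r * (1 - p) ** y

def zt_nb_pmf(y, r, p):
    """Zero-truncated negative binomial pmf used for the positive-count
    part of the hurdle model: P(Y = y | Y > 0) = NB(y) / (1 - NB(0))."""
    if y < 1:
        return 0.0
    return nb_pmf(y, r, p) / (1.0 - nb_pmf(0, r, p))

# The truncated pmf renormalizes all mass over the positive counts:
total = sum(zt_nb_pmf(y, r=2, p=0.5) for y in range(1, 200))
print(round(total, 6))  # → 1.0
```

Right censoring would replace the pmf of a censored observation by the corresponding upper tail probability inside the likelihood; the renormalization over positives stays the same.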

  17. Compartment models of the diffusion MR signal in brain white matter: a taxonomy and comparison.

    PubMed

    Panagiotaki, Eleftheria; Schneider, Torben; Siow, Bernard; Hall, Matt G; Lythgoe, Mark F; Alexander, Daniel C

    2012-02-01

    This paper aims to identify the minimum requirements for an accurate model of the diffusion MR signal in white matter of the brain. We construct a taxonomy of multi-compartment models of white matter from combinations of simple models for the intra- and the extra-axonal spaces. We devise a new diffusion MRI protocol that provides measurements with a wide range of imaging parameters for diffusion sensitization both parallel and perpendicular to white matter fibres. We use the protocol to acquire data from two fixed rat brains, which allows us to fit, study and compare the different models. The study examines a total of 47 analytic models, including several widely used models from the literature, which we place within the taxonomy. The results show that models that incorporate intra-axonal restriction, such as ball and stick or CHARMED, generally explain the data better than those that do not, such as the DT or the biexponential models. However, three-compartment models which account for restriction parallel to the axons and incorporate pore size explain the measurements most accurately. The best fit comes from combining a full diffusion tensor (DT) model of the extra-axonal space with a cylindrical intra-axonal component of single radius and a third spherical compartment of non-zero radius. We also measure the stability of the non-zero radius intra-axonal models and find that single radius intra-axonal models are more stable than gamma distributed radii models with similar fitting performance. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Gauge Field Localization on Deformed Branes

    NASA Astrophysics Data System (ADS)

    Tofighi, A.; Moazzen, M.; Farokhtabar, A.

    2016-02-01

    In this paper, we utilise the Chumbes-Hoff da Silva-Hott (CHH) mechanism to investigate gauge field localization on a deformed brane constructed with one scalar field, which can be coupled to gravity minimally or non-minimally. The study of deformed defects is important because they contain internal structures which may have implications in braneworld models. With the CHH mechanism, we find that the massless zero mode of the gauge field is localized on the brane in both the minimal and non-minimal coupling cases. Moreover, in the case of non-minimal coupling, it is shown that when the non-minimal coupling constant is larger than its critical value, the zero mode is localized on each sub-brane.

  19. Non-Linear Cosmological Power Spectra in Real and Redshift Space

    NASA Technical Reports Server (NTRS)

    Taylor, A. N.; Hamilton, A. J. S.

    1996-01-01

    We present an expression for the non-linear evolution of the cosmological power spectrum based on Lagrangian trajectories. This is simplified using the Zel'dovich approximation to trace particle displacements, assuming Gaussian initial conditions. The model is found to exhibit the transfer of power from large to small scales expected in self-gravitating fields. Some exact solutions are found for power-law initial spectra. We have extended this analysis into redshift space and found a solution for the non-linear, anisotropic redshift-space power spectrum in the limit of plane-parallel redshift distortions. The quadrupole-to-monopole ratio is calculated for the case of power-law initial spectra. We find that the shape of this ratio depends on the shape of the initial spectrum, but when scaled to linear theory depends only weakly on the redshift-space distortion parameter, β. The point of zero-crossing of the quadrupole, k₀, is found to obey a simple scaling relation, and we calculate this scale in the Zel'dovich approximation. This model is found to be in good agreement with a series of N-body simulations on scales down to the zero-crossing of the quadrupole, although the wavenumber at zero-crossing is underestimated. These results are applied to the quadrupole-to-monopole ratio found in the merged QDOT plus 1.2-Jy IRAS redshift survey. Using a likelihood technique we estimate that the distortion parameter is constrained to be β > 0.5 at the 95 percent level. Our results are fairly insensitive to the local primordial spectral slope, but the likelihood analysis suggests n = -2 in the translinear regime. The zero-crossing scale of the quadrupole is k₀ = 0.5 ± 0.1 h Mpc⁻¹, and from this we infer that the amplitude of clustering is σ₈ = 0.7 ± 0.05.
We suggest that the success of this model is due to non-linear redshift-space effects arising from infall onto caustics, and is not dominated by virialized cluster cores. The latter should start to dominate on scales below the zero-crossing of the quadrupole, where our model breaks down.

  20. The running coupling of the minimal sextet composite Higgs model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fodor, Zoltan; Holland, Kieran; Kuti, Julius

    We compute the renormalized running coupling of SU(3) gauge theory coupled to N_f = 2 flavors of massless Dirac fermions in the 2-index-symmetric (sextet) representation. This model is of particular interest as a minimal realization of the strongly interacting composite Higgs scenario. A recently proposed finite volume gradient flow scheme is used. The calculations are performed at several lattice spacings with two different implementations of the gradient flow allowing for a controlled continuum extrapolation and particular attention is paid to estimating the systematic uncertainties. For small values of the renormalized coupling our results for the β-function agree with perturbation theory. For moderate couplings we observe a downward deviation relative to the 2-loop β-function but in the coupling range where the continuum extrapolation is fully under control we do not observe an infrared fixed point. The explored range includes the locations of the zero of the 3-loop and the 4-loop β-functions in the MS-bar scheme. The absence of a non-trivial zero in the β-function in the explored range of the coupling is consistent with our earlier findings based on hadronic observables, the chiral condensate and the GMOR relation. The present work is the first to report continuum non-perturbative results for the sextet model.

  1. Direct model-based predictive control scheme without cost function for voltage source inverters with reduced common-mode voltage

    NASA Astrophysics Data System (ADS)

    Kim, Jae-Chang; Moon, Sung-Ki; Kwak, Sangshin

    2018-04-01

    This paper presents a direct model-based predictive control scheme for voltage source inverters (VSIs) with reduced common-mode voltages (CMVs). The developed method directly finds optimal vectors without repetitive evaluation of a cost function. To regulate the output currents while keeping the CMVs in the range of -Vdc/6 to +Vdc/6, the developed method uses as its finite control resources only the non-zero voltage vectors, excluding the zero voltage vectors, which produce CMVs in the VSI of up to ±Vdc/2. In model-based predictive control (MPC), omitting the zero voltage vectors increases the output current ripples and the current errors. To alleviate these problems, the developed method uses two non-zero voltage vectors in one sampling step. In addition, the voltage vectors to be used are selected directly at every sampling step once the developed method calculates the future reference voltage vector, saving the effort of repeatedly evaluating a cost function. The two non-zero voltage vectors are optimally allocated so that the output current approaches the reference current as closely as possible. Thus, low CMV, rapid current-following capability and acceptable output current ripple performance are attained by the developed method. The results of a simulation and an experiment verify the effectiveness of the developed method.
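
As a rough illustration of the vector-selection idea only (not the paper's actual allocation law; the unit magnitudes and the reference value below are hypothetical), the six active vectors of a two-level VSI can be represented as complex space vectors and the two nearest to the predicted reference picked directly, with no per-candidate cost function:

```python
import cmath
import math

# Six active (non-zero) voltage vectors of a two-level VSI as unit space
# vectors at 0°, 60°, ..., 300°; the two zero vectors are excluded, which
# is what keeps the common-mode voltage within ±Vdc/6.
ACTIVE = [cmath.exp(1j * k * math.pi / 3) for k in range(6)]

def two_nearest(v_ref):
    """Indices of the two active vectors closest to the reference voltage
    vector -- a sketch of direct selection without a cost-function sweep."""
    return sorted(range(6), key=lambda k: abs(ACTIVE[k] - v_ref))[:2]

# Reference vector of magnitude 0.8 at 0.4 rad lies in sector 1:
print(two_nearest(0.8 * cmath.exp(1j * 0.4)))  # → [0, 1]
```

The remaining allocation step in the paper then splits the sampling period between the two selected vectors to minimize the current error, which this sketch does not attempt.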

  2. Functional Linear Model with Zero-value Coefficient Function at Sub-regions.

    PubMed

    Zhou, Jianhui; Wang, Nae-Yuh; Wang, Naisyin

    2013-01-01

    We propose a shrinkage method to estimate the coefficient function in a functional linear regression model when the value of the coefficient function is zero within certain sub-regions. Besides identifying the null region in which the coefficient function is zero, we also aim to perform estimation and inference for the nonparametrically estimated coefficient function without over-shrinking the values. Our proposal consists of two stages. In stage one, the Dantzig selector is employed to provide an initial location of the null region. In stage two, we propose a group SCAD approach to refine the estimated location of the null region and to provide the estimation and inference procedures for the coefficient function. This two-stage design has certain advantages in the functional setup. One goal is to reduce the number of parameters employed in the model. A one-stage procedure would need a large number of knots to identify the zero-coefficient region precisely, yet the variation and estimation difficulties increase with the number of parameters. Owing to the additional refinement stage, we avoid this necessity and our estimator achieves superior numerical performance in practice. We show that our estimator enjoys the Oracle property; it identifies the null region with probability tending to 1, and it achieves the same asymptotic normality for the estimated coefficient function on the non-null region as the functional linear model estimator when the non-null region is known. Numerically, our refined estimator overcomes the shortcomings of the initial Dantzig estimator, which tends to under-estimate the absolute scale of non-zero coefficients. The performance of the proposed method is illustrated in simulation studies.
We apply the method in an analysis of data collected by the Johns Hopkins Precursors Study, where the primary interests are in estimating the strength of association between body mass index in midlife and the quality of life in physical functioning at old age, and in identifying the effective age ranges where such associations exist.
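
The SCAD penalty behind the second stage has a simple closed form (Fan and Li's piecewise definition; the λ and a values below are illustrative, with a = 3.7 the conventional default):

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001): lasso-like (linear) near zero,
    quadratic taper in the middle, then constant -- so large coefficients
    are not over-shrunk, which is the property the refinement stage uses."""
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2

print(scad_penalty(0.5, lam=1.0))  # → 0.5
```

The penalty is continuous at both breakpoints and flat beyond aλ, so shrinkage is applied only to small coefficients; a group version applies the same function to the norm of a block of spline coefficients.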

  3. Unsteady free surface flow in porous media: One-dimensional model equations including vertical effects and seepage face

    NASA Astrophysics Data System (ADS)

    Di Nucci, Carmine

    2018-05-01

    This note examines the two-dimensional unsteady isothermal free surface flow of an incompressible fluid in a non-deformable, homogeneous, isotropic, and saturated porous medium (with zero recharge and neglecting capillary effects). Coupling a Boussinesq-type model for nonlinear water waves with Darcy's law, the two-dimensional flow problem is solved using one-dimensional model equations including vertical effects and seepage face. In order to take into account the seepage face development, the system equations (given by the continuity and momentum equations) are completed by an integral relation (deduced from the Cauchy theorem). After testing the model against data sets available in the literature, some numerical simulations, concerning the unsteady flow through a rectangular dam (with an impermeable horizontal bottom), are presented and discussed.

  4. Bayesian hierarchical modelling of continuous non-negative longitudinal data with a spike at zero: An application to a study of birds visiting gardens in winter.

    PubMed

    Swallow, Ben; Buckland, Stephen T; King, Ruth; Toms, Mike P

    2016-03-01

    The development of methods for dealing with continuous data with a spike at zero has lagged behind those for overdispersed or zero-inflated count data. We consider longitudinal ecological data corresponding to an annual average of 26 weekly maximum counts of birds, and are hence effectively continuous, bounded below by zero but also with a discrete mass at zero. We develop a Bayesian hierarchical Tweedie regression model that can directly accommodate the excess number of zeros common to this type of data, whilst accounting for both spatial and temporal correlation. Implementation of the model is conducted in a Markov chain Monte Carlo (MCMC) framework, using reversible jump MCMC to explore uncertainty across both parameter and model spaces. This regression modelling framework is very flexible and removes the need to make strong assumptions about mean-variance relationships a priori. It can also directly account for the spike at zero, whilst being easily applicable to other types of data and other model formulations. Whilst a correlative study such as this cannot prove causation, our results suggest that an increase in an avian predator may have led to an overall decrease in the number of one of its prey species visiting garden feeding stations in the United Kingdom. This may reflect a change in behaviour of house sparrows to avoid feeding stations frequented by sparrowhawks, or a reduction in house sparrow population size as a result of sparrowhawk increase. © 2015 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
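
The Tweedie family used above (power parameter between 1 and 2) is equivalent to a compound Poisson-gamma distribution, which is what produces continuous positive values together with a point mass at zero. A minimal simulation sketch with hypothetical parameter values:

```python
import math
import random

def tweedie_draw(lam, shape, scale, rng):
    """One draw from a compound Poisson-gamma (Tweedie, 1 < p < 2)
    distribution: a Poisson(lam) number of Gamma(shape, scale) summands.
    N = 0 yields exactly zero, giving the spike at zero; N > 0 yields a
    continuous positive value."""
    n, term = 0, math.exp(-lam)          # Poisson draw by inversion
    cum, u = term, rng.random()
    while u > cum:
        n += 1
        term *= lam / n
        cum += term
    return sum(rng.gammavariate(shape, scale) for _ in range(n))

rng = random.Random(1)
draws = [tweedie_draw(0.7, 2.0, 1.5, rng) for _ in range(10_000)]
zero_frac = sum(d == 0.0 for d in draws) / len(draws)
print(round(zero_frac, 2))  # close to exp(-0.7) ≈ 0.50
```

The exact zero probability is e^(-λ), so the spike at zero is controlled directly by the Poisson rate, which is one reason the Tweedie model needs no separate zero-inflation component.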

  5. Diurnal forcing of planetary atmospheres

    NASA Technical Reports Server (NTRS)

    Houben, Howard C.

    1991-01-01

    A free convection parameterization has been introduced into the Mars Planetary Boundary Layer Model (MPBL). Previously, the model would fail to generate turbulence under conditions of zero wind shear, even when statically unstable. This in turn produced erroneous results at the equator, for example, when the lack of Coriolis forcing allowed zero wind conditions. The underlying cause of these failures was the level 2 second-order turbulence closure scheme, which derived diffusivities as algebraic functions of the Richardson number (the ratio of static stability to wind shear). In the previous formulation, the diffusivities were scaled by the wind shear, a convenient parameter since it is non-negative, but with the drawback that all diffusivities are zero under conditions of zero shear (viz., the free convection case). The new scheme tests for the condition of zero shear in conjunction with static instability and recalculates the diffusivities using a static stability scaling. The results for a simulation of the equatorial boundary layer at autumnal equinox are presented. (Note that after some wind shear is generated, the model reverts to the traditional diffusivity calculation.)
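
The logic of the fix can be caricatured in a few lines. This is illustrative only, not the MPBL's actual level-2 closure; the scaling constant and functional forms are invented for the sketch:

```python
import math

def eddy_diffusivity(shear, stability, k=0.1):
    """Illustrative only: the traditional scheme scales diffusivity by wind
    shear, so it vanishes at zero shear even under static instability. The
    free-convection fallback tested here rescales by a static-stability
    velocity scale instead, so unstable quiescent air still mixes."""
    if shear > 0.0:
        return k * shear                      # traditional shear scaling
    if stability < 0.0:                       # statically unstable, no shear
        return k * math.sqrt(-stability)      # free-convection scaling
    return 0.0                                # stable and quiescent

print(eddy_diffusivity(0.0, -4.0))  # → 0.2  (non-zero despite zero shear)
```

As in the parameterization described above, once shear reappears the first branch takes over and the traditional calculation resumes.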

  6. QUT Para at TREC 2012 Web Track: Word Associations for Retrieving Web Documents

    DTIC Science & Technology

    2012-11-01

    zero for the QUTParaTQEg1 system (and the best performance across all participants was non-zero), included: Topic 157 (the beatles rock band); Topic 162 (dnr); Topic 163 (arkansas); Topic 167 (barbados); Topic 170 (scooters); Topic 179 (black history); Topic 188 (internet phone service)

  7. Bounded fractional diffusion in geological media: Definition and Lagrangian approximation

    USGS Publications Warehouse

    Zhang, Yong; Green, Christopher T.; LaBolle, Eric M.; Neupauer, Roseanna M.; Sun, HongGuang

    2016-01-01

    Spatiotemporal Fractional-Derivative Models (FDMs) have been increasingly used to simulate non-Fickian diffusion, but methods have not been available to define boundary conditions for FDMs in bounded domains. This study defines boundary conditions and then develops a Lagrangian solver to approximate bounded, one-dimensional fractional diffusion. Both the zero-value and non-zero-value Dirichlet, Neumann, and mixed Robin boundary conditions are defined, where the sign of the Riemann-Liouville fractional derivative (capturing non-zero-value spatial-nonlocal boundary conditions with directional super-diffusion) remains consistent with the sign of the fractional-diffusive flux term in the FDMs. New Lagrangian schemes are then proposed to track solute particles moving in bounded domains, where the solutions are checked against analytical or Eulerian solutions available for simplified FDMs. Numerical experiments show that the particle-tracking algorithm for non-Fickian diffusion differs from Fickian diffusion in relocating the particle position around the reflective boundary, likely due to the non-local and non-symmetric fractional diffusion. For a non-zero-value Neumann or Robin boundary, a source cell with a reflective face can be applied to define the release rate of random-walking particles at the specified flux boundary. Mathematical definitions of physically meaningful nonlocal boundaries combined with bounded Lagrangian solvers in this study may provide the only viable techniques at present to quantify the impact of boundaries on anomalous diffusion, expanding the applicability of FDMs from infinite domains to those with any size and boundary conditions.
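
For contrast with the fractional relocation rule discussed above, the classical Fickian treatment of a reflective (zero-flux Neumann) boundary in random-walk particle tracking is simple mirror reflection. A minimal sketch with hypothetical step sizes:

```python
import random

def reflect(x, lo, hi):
    """Fold a particle position back into [lo, hi] by mirror reflection:
    the standard zero-flux boundary treatment for Fickian random walks.
    (The fractional schemes in the study relocate particles differently.)"""
    while x < lo or x > hi:
        x = 2 * lo - x if x < lo else 2 * hi - x
    return x

rng = random.Random(0)
x = 0.5
for _ in range(1000):
    x = reflect(x + rng.gauss(0.0, 0.2), 0.0, 1.0)  # Gaussian jump, then fold
print(0.0 <= x <= 1.0)  # → True
```

Mirror reflection conserves particle number and imposes zero net flux for symmetric jumps; the study's point is that non-local, non-symmetric fractional jumps break exactly this symmetry, so the relocation rule must change.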

  8. Coupled Brownian motors: Anomalous hysteresis and zero-bias negative conductance

    NASA Astrophysics Data System (ADS)

    Reimann, P.; Kawai, R.; Van den Broeck, C.; Hänggi, P.

    1999-03-01

    We introduce a model of interacting Brownian particles in a symmetric, periodic potential that undergoes a noise-induced non-equilibrium phase transition. The associated spontaneous symmetry breaking entails a ratchet-like transport mechanism. In response to an external force we identify several novel features; among the most prominent being a zero-bias negative conductance and a prima facie counterintuitive, anomalous hysteresis.

  9. Racial/Ethnic Disparities in Meeting 5-2-1-0 Recommendations among Children and Adolescents in the United States.

    PubMed

    Haughton, Christina F; Wang, Monica L; Lemon, Stephenie C

    2016-08-01

    To evaluate racial/ethnic disparities among children and adolescents in meeting the 4 daily 5-2-1-0 nutrition and activity targets in a nationally representative sample. The 5-2-1-0 message summarizes 4 target daily behaviors for obesity prevention: consuming ≥5 servings of fruit and vegetables, engaging in ≤2 hours of screen time, engaging in ≥1 hour of physical activity, and consuming 0 sugar-sweetened beverages daily. The National Health and Nutrition Examination Survey (2011-2012) data were used. The study sample included Hispanic (n = 608), non-Hispanic black (n = 609), Asian (n = 253), and non-Hispanic white (n = 484) youth 6-19 years old. The 5-2-1-0 targets were assessed using 24-hour dietary recalls, the Global Physical Activity Questionnaire, and sedentary behavior items. Outcomes included meeting all targets, no targets, and individual targets. Multivariable logistic regression models accounting for the complex sampling design were used to evaluate the association of race/ethnicity with each outcome among children and adolescents separately. None of the adolescents and <1% of children met all 4 of the 5-2-1-0 targets, and 19% and 33% of children and adolescents, respectively, met zero targets. No racial/ethnic differences in meeting zero targets were observed among children. Hispanic (aOR, 1.76 [95% CI, 1.04-2.98]), non-Hispanic black (aOR, 1.82 [95% CI, 1.04-3.17]), and Asian (aOR, 1.48 [95% CI, 1.08-2.04]) adolescents had greater odds of meeting zero targets compared with non-Hispanic whites. Racial/ethnic differences in meeting individual targets were observed among children and adolescents. Despite national initiatives, youth in the US are far from meeting 5-2-1-0 targets. Racial/ethnic disparities exist, particularly among adolescents. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Perceptual disturbances predicted in zero-g through three-dimensional modeling.

    PubMed

    Holly, Jan E

    2003-01-01

    Perceptual disturbances in zero-g and 1-g differ. For example, the vestibular coriolis (or "cross-coupled") effect is weaker in zero-g. In 1-g, blindfolded subjects rotating on-axis experience perceptual disturbances upon head tilt, but the effects diminish in zero-g. Head tilts during centrifugation in zero-g and 1-g are investigated here by means of three-dimensional modeling, using a model that was previously used to explain the zero-g reduction of the on-axis vestibular coriolis effect. The model's foundation comprises the laws of physics, including linear-angular interactions in three dimensions. Addressed is the question: In zero-g, will the vestibular coriolis effect be as weak during centrifugation as during on-axis rotation? Centrifugation in 1-g was simulated first, with the subject supine, head toward center. The most noticeable result concerned direction of head yaw. For clockwise centrifuge rotation, greater perceptual effects arose in simulations during yaw counterclockwise (as viewed from the top of the head) than for yaw clockwise. Centrifugation in zero-g was then simulated with the same "supine" orientation. The result: In zero-g the simulated vestibular coriolis effect was greater during centrifugation than during on-axis rotation. In addition, clockwise-counterclockwise differences did not appear in zero-g, in contrast to the differences that appear in 1-g.

  11. Gamification for Non-Majors Mathematics: An Innovative Assignment Model

    ERIC Educational Resources Information Center

    Leong, Siow Hoo; Tang, Howe Eng

    2017-01-01

    The most important ingredient of the pedagogy for teaching non-majors is getting their engagement. This paper proposes to use gamification to engage non-majors. An innovative game termed as Cover the Hungarian's Zeros is designed to tackle the common weakness of non-majors mathematics in solving the assignment problem using the Hungarian Method.…
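
    The assignment problem targeted by the game has a standard mechanical solution; as an aside, a minimal sketch using SciPy's `linear_sum_assignment`, which performs the optimization that the Hungarian Method's zero-covering step works toward, on a made-up cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: rows = workers, columns = tasks.
cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

# The Hungarian Method reduces rows/columns and covers the zeros with a
# minimum number of lines; linear_sum_assignment solves the same
# minimum-cost assignment problem directly.
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()
print(list(zip(rows.tolist(), cols.tolist())), total)  # optimal total cost 5
```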

  12. An observational study of emergency department utilization among enrollees of Minnesota Health Care Programs: financial and non-financial barriers have different associations

    PubMed Central

    2014-01-01

    Background Emergency department (ED) use is costly, and especially frequent among publicly insured populations in the US, who also disproportionately encounter financial (cost/coverage-related) and non-financial/practical barriers to care. The present study examines the distinct associations financial and non-financial barriers to care have with patterns of ED use among a publicly insured population. Methods This observational study uses linked administrative-survey data for enrollees of Minnesota Health Care Programs to examine patterns in ED use—specifically, enrollee self-report of the ED as usual source of care, and past-year count of 0, 1, or 2+ ED visits from administrative data. Main independent variables included a count of seven enrollee-reported financial concerns about healthcare costs and coverage, and a count of seven enrollee-reported non-financial, practical barriers to access (e.g., limited office hours, problems with childcare). Covariates included health, health care, and demographic measures. Results In multivariate regression models, only financial concerns were positively associated with reporting ED as usual source of care, but only non-financial barriers were significantly associated with greater ED visits. Regression-adjusted values indicated notable differences in ED visits by number of non-financial barriers: zero non-financial barriers meant an adjusted 78% chance of having zero ED visits (95% C.I.: 70.5%-85.5%), 15.9% chance of 1 (95% C.I.: 10.4%-21.3%), and 6.2% chance (95% C.I.: 3.5%-8.8%) of 2+ visits, whereas having all seven non-financial barriers meant a 48.2% adjusted chance of zero visits (95% C.I.: 30.9%-65.6%), 31.8% chance of 1 visit (95% C.I.: 24.2%-39.5%), and 20% chance (95% C.I.: 8.4%-31.6%) of 2+ visits. Conclusions Financial barriers were associated with identifying the ED as one’s usual source of care but non-financial barriers were associated with actual ED visits. 
Outreach/literacy efforts may help reduce reliance on/perception of ED as usual source of care, whereas improved targeting/availability of covered services may help curb frequent actual visits, among publicly insured individuals. PMID:24507761

  13. Adaptive control based on retrospective cost optimization

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S. (Inventor); Santillo, Mario A. (Inventor)

    2012-01-01

    A discrete-time adaptive control law for stabilization, command following, and disturbance rejection that is effective for systems that are unstable, MIMO, and/or nonminimum phase. The adaptive control algorithm includes guidelines concerning the modeling information needed for implementation. This information includes the relative degree, the first nonzero Markov parameter, and the nonminimum-phase zeros. Except when the plant has nonminimum-phase zeros whose absolute value is less than the plant's spectral radius, the required zero information can be approximated by a sufficient number of Markov parameters. No additional information about the poles or zeros need be known. Numerical examples are presented to illustrate the algorithm's effectiveness in handling systems with errors in the required modeling data, unknown latency, sensor noise, and saturation.
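
    The modeling information listed above can be read directly off a state-space realization. A sketch under the common convention H_0 = D, H_k = C A^(k-1) B for discrete-time Markov parameters, using a hypothetical plant (not a system from the patent); the relative degree is the index of the first nonzero Markov parameter:

```python
import numpy as np

def markov_parameters(A, B, C, D, n=6):
    """Return the first n Markov parameters H_0 = D, H_k = C A^(k-1) B."""
    H = [np.atleast_2d(D)]
    Ak = np.eye(A.shape[0])
    for _ in range(n - 1):
        H.append(C @ Ak @ B)
        Ak = Ak @ A
    return H

# Hypothetical SISO plant with relative degree 2 (D = 0 and C B = 0).
A = np.array([[0.0, 1.0], [-0.2, 0.9]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

H = markov_parameters(A, B, C, D)
rel_deg = next(k for k, h in enumerate(H) if not np.allclose(h, 0.0))
print(rel_deg, H[rel_deg].item())  # index and value of first nonzero Markov parameter
```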

  14. Functional linear models for zero-inflated count data with application to modeling hospitalizations in patients on dialysis.

    PubMed

    Sentürk, Damla; Dalrymple, Lorien S; Nguyen, Danh V

    2014-11-30

    We propose functional linear models for zero-inflated count data with a focus on the functional hurdle and functional zero-inflated Poisson (ZIP) models. Whereas the hurdle model assumes the counts come from a mixture of a degenerate distribution at zero and a zero-truncated Poisson distribution, the ZIP model considers a mixture of a degenerate distribution at zero and a standard Poisson distribution. We extend the generalized functional linear model framework with a functional predictor and multiple cross-sectional predictors to model counts generated by a mixture distribution. We propose an estimation procedure for functional hurdle and ZIP models, called penalized reconstruction, geared towards error-prone and sparsely observed longitudinal functional predictors. The approach relies on dimension reduction and pooling of information across subjects involving basis expansions and penalized maximum likelihood techniques. The developed functional hurdle model is applied to modeling hospitalizations within the first 2 years from initiation of dialysis, with a high percentage of zeros, in the Comprehensive Dialysis Study participants. Hospitalization counts are modeled as a function of sparse longitudinal measurements of serum albumin concentrations, patient demographics, and comorbidities. Simulation studies are used to study finite sample properties of the proposed method and include comparisons with an adaptation of standard principal components regression. Copyright © 2014 John Wiley & Sons, Ltd.
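
    The two mixtures contrasted here can be written down directly. A minimal sketch, with a hypothetical mixing weight and Poisson rate: under the ZIP, zeros come from both components, whereas under the hurdle every zero comes from the zero state:

```python
from math import exp, factorial

def zip_pmf(k, pi, lam):
    """Zero-inflated Poisson: degenerate mass at 0 mixed with a full Poisson."""
    poisson = exp(-lam) * lam**k / factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

def hurdle_pmf(k, p0, lam):
    """Hurdle: mass p0 at zero; positives follow a zero-truncated Poisson."""
    if k == 0:
        return p0
    poisson = exp(-lam) * lam**k / factorial(k)
    return (1 - p0) * poisson / (1 - exp(-lam))

pi, lam = 0.3, 2.0  # hypothetical mixing weight and rate
ks = range(30)
print(sum(zip_pmf(k, pi, lam) for k in ks))     # ~1.0
print(sum(hurdle_pmf(k, pi, lam) for k in ks))  # ~1.0
# ZIP puts extra mass at zero on top of the Poisson zeros:
print(zip_pmf(0, pi, lam), hurdle_pmf(0, pi, lam))
```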

  15. Majorana zero modes in superconductor-semiconductor heterostructures

    NASA Astrophysics Data System (ADS)

    Lutchyn, R. M.; Bakkers, E. P. A. M.; Kouwenhoven, L. P.; Krogstrup, P.; Marcus, C. M.; Oreg, Y.

    2018-05-01

    Realizing topological superconductivity and Majorana zero modes in the laboratory is a major goal in condensed-matter physics. In this Review, we survey the current status of this rapidly developing field, focusing on proposals for the realization of topological superconductivity in semiconductor-superconductor heterostructures. We examine materials science progress in growing InAs and InSb semiconductor nanowires and characterizing these systems. We then discuss the observation of robust signatures of Majorana zero modes in recent experiments, paying particular attention to zero-bias tunnelling conduction measurements and Coulomb blockade experiments. We also outline several next-generation experiments probing exotic properties of Majorana zero modes, including fusion rules and non-Abelian exchange statistics. Finally, we discuss prospects for implementing Majorana-based topological quantum computation.

  16. Disease Spreading Model with Partial Isolation

    NASA Astrophysics Data System (ADS)

    Chakraborty, Abhijit; Manna, S. S.

    2013-08-01

    The effect of partial isolation has been studied in disease spreading processes using the framework of susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models. The partial isolation is introduced by imposing a restriction: each infected individual can probabilistically infect up to a maximum number n of his susceptible neighbors, but not all. It has been observed that the critical values of the spreading rates for endemic states are non-zero in both models and decrease as 1/n with n, on all graphs including scale-free graphs. In particular, the SIR model with n = 2 turned out to be a special case, characterized by a new bond percolation threshold on square lattice.

  17. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    PubMed

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
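
    For context, the traditional LS-SVM that the paper modifies reduces training to a single linear system rather than a quadratic program. A sketch of that baseline for regression (RBF kernel, hypothetical hyperparameters), not of the paper's robust variant:

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Standard LS-SVM regression: solve one (n+1)x(n+1) linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(Xq, X, b, alpha, sigma=1.0):
    return rbf(Xq, X, sigma) @ alpha + b

# Fit a smooth 1-D function (illustrative data).
X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X[:, 0])
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, X, b, alpha)
err = float(np.max(np.abs(yhat - y)))
print(err)  # small training error
```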

  18. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    PubMed

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square root transformed monthly or annual means, where a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms existing methods, even for arbitrary locations.
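
    A left-censored normal likelihood of the kind described splits into a point-mass term for dry (zero) observations and a density term for wet ones. A minimal sketch fitting it to simulated data by maximum likelihood, with illustrative parameter values rather than the paper's full spatio-temporal model:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Latent normal "precipitation potential"; observed amounts are censored at 0.
mu_true, sigma_true = 1.0, 2.0
z = rng.normal(mu_true, sigma_true, 20000)
y = np.maximum(z, 0.0)

def negloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    zero = y == 0
    ll = norm.logcdf(-mu / sigma) * zero.sum()    # censored probability mass at zero
    ll += norm.logpdf(y[~zero], mu, sigma).sum()  # normal density for wet days
    return -ll

res = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # should recover roughly (1.0, 2.0)
```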

  19. A Three-Parameter Model for Predicting Fatigue Life of Ductile Metals Under Constant Amplitude Multiaxial Loading

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Li, Jing; Zhang, Zhong-ping

    2013-04-01

    In this article, a fatigue damage parameter is proposed to assess the multiaxial fatigue lives of ductile metals based on the critical plane concept: Fatigue crack initiation is controlled by the maximum shear strain, and the other important effect in the fatigue damage process is the normal strain and stress. This fatigue damage parameter introduces a stress-correlated factor, which describes the degree of the non-proportional cyclic hardening. In addition, a three-parameter multiaxial fatigue criterion is used to correlate the fatigue lifetime of metallic materials with the proposed damage parameter. Under uniaxial loading, this three-parameter model reduces to the recently developed Zhang's model for predicting the uniaxial fatigue crack initiation life. The accuracy and reliability of this three-parameter model are checked against the experimental data found in the literature through testing six different ductile metals under various strain paths with zero/non-zero mean stress.

  20. Vehicle to wireless power transfer coupling coil alignment sensor

    DOEpatents

    Miller, John M.; Chambon, Paul H.; Jones, Perry T.; White, Clifford P.

    2016-02-16

    A non-contacting position sensing apparatus includes at least one vehicle-mounted receiver coil that is configured to detect a net flux null when the vehicle is optimally aligned relative to the primary coil in the charging device. Each of the at least one vehicle-mounted receiver coil includes a clockwise winding loop and a counterclockwise winding loop that are substantially symmetrically configured and serially connected to each other. When the non-contacting position sensing apparatus is located directly above the primary coil of the charging device, the electromotive forces from the clockwise winding loop and the counterclockwise region cancel out to provide a zero electromotive force, i.e., a zero voltage reading across the coil that includes the clockwise winding loop and the counterclockwise winding loop.

  1. 77 FR 21529 - Freshwater Crawfish Tail Meat From the People's Republic of China: Final Results of Antidumping...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-10

    ... question, including when that rate is zero or de minimis.\\5\\ In this case, there is only one non-selected... calculations for one company. Therefore, the final results differ from the preliminary results. The final... not to calculate an all-others rate using any zero or de minimis margins or any margins based entirely...

  2. Estimation of inflation parameters for Perturbed Power Law model using recent CMB measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, Suvodip; Das, Santanu; Souradeep, Tarun

    2015-01-01

    Cosmic Microwave Background (CMB) is an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher-order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass m_eff for the inflaton field. At leading order, the higher-order derivatives of the Hubble parameter source a constant difference between the spectral indices for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two independent observable parameters, namely the spectral index for tensor perturbations ν_t and the change in spectral index for scalar perturbations ν_st, to explain the observed features in the scalar and tensor power spectra of perturbation. From the recent measurements of CMB power spectra by WMAP, Planck and BICEP-2 for temperature and polarization, we estimate the feasibility of the PPL model against the standard ΛCDM model. Although BICEP-2 claimed a detection of r = 0.2, estimates of dust contamination provided by Planck have left open the possibility that only an upper bound on r will be expected in a joint analysis. As a result, we consider different upper bounds on the value of r and show that the PPL model can explain a lower value of the tensor-to-scalar ratio (r < 0.1 or r < 0.01) for a scalar spectral index of n_s = 0.96 by having a non-zero value of the effective mass of the inflaton field, m²_eff/H². The analysis with the WP + Planck likelihood shows a non-zero detection of m²_eff/H² at 5.7 σ and 8.1 σ for r < 0.1 and r < 0.01, respectively. With the BICEP-2 likelihood, m²_eff/H² = −0.0237 ± 0.0135, which is consistent with zero.

  3. Incorporating nuclear vibrational energies into the "atom in molecules" analysis: An analytical study

    NASA Astrophysics Data System (ADS)

    Gharabaghi, Masumeh; Shahbazian, Shant

    2017-04-01

    The quantum theory of atoms in molecules (QTAIM) is based on the clamped nucleus paradigm and solely working with the electronic wavefunctions, so does not include nuclear vibrations in the AIM analysis. On the other hand, the recently extended version of the QTAIM, called the multi-component QTAIM (MC-QTAIM), incorporates both electrons and quantum nuclei, i.e., those nuclei treated as quantum waves instead of clamped point charges, into the AIM analysis using non-adiabatic wavefunctions. Thus, the MC-QTAIM is the natural framework to incorporate the role of nuclear vibrations into the AIM analysis. In this study, within the context of the MC-QTAIM, the formalism of including nuclear vibrational energy in the atomic basin energy is developed in detail and its contribution is derived analytically using the recently proposed non-adiabatic Hartree product nuclear wavefunction. It is demonstrated that within the context of this wavefunction, the quantum nuclei may be conceived pseudo-adiabatically as quantum oscillators and both isotropic harmonic and anisotropic anharmonic oscillator models are used to compute the zero-point nuclear vibrational energy contribution to the basin energies explicitly. Inspired by the results gained within the context of the MC-QTAIM analysis, a heuristic approach is proposed within the context of the QTAIM to include nuclear vibrational energy in the basin energy from the vibrational wavefunction derived adiabatically. The explicit calculation of the basin contribution of the zero-point vibrational energy using the uncoupled harmonic oscillator model leads to results consistent with those derived from the MC-QTAIM.

  4. Incorporating nuclear vibrational energies into the "atom in molecules" analysis: An analytical study.

    PubMed

    Gharabaghi, Masumeh; Shahbazian, Shant

    2017-04-21

    The quantum theory of atoms in molecules (QTAIM) is based on the clamped nucleus paradigm and solely working with the electronic wavefunctions, so does not include nuclear vibrations in the AIM analysis. On the other hand, the recently extended version of the QTAIM, called the multi-component QTAIM (MC-QTAIM), incorporates both electrons and quantum nuclei, i.e., those nuclei treated as quantum waves instead of clamped point charges, into the AIM analysis using non-adiabatic wavefunctions. Thus, the MC-QTAIM is the natural framework to incorporate the role of nuclear vibrations into the AIM analysis. In this study, within the context of the MC-QTAIM, the formalism of including nuclear vibrational energy in the atomic basin energy is developed in detail and its contribution is derived analytically using the recently proposed non-adiabatic Hartree product nuclear wavefunction. It is demonstrated that within the context of this wavefunction, the quantum nuclei may be conceived pseudo-adiabatically as quantum oscillators and both isotropic harmonic and anisotropic anharmonic oscillator models are used to compute the zero-point nuclear vibrational energy contribution to the basin energies explicitly. Inspired by the results gained within the context of the MC-QTAIM analysis, a heuristic approach is proposed within the context of the QTAIM to include nuclear vibrational energy in the basin energy from the vibrational wavefunction derived adiabatically. The explicit calculation of the basin contribution of the zero-point vibrational energy using the uncoupled harmonic oscillator model leads to results consistent with those derived from the MC-QTAIM.

  5. Generalized characteristic ratios assignment for commensurate fractional order systems with one zero.

    PubMed

    Tabatabaei, Mohammad

    2017-07-01

    In this paper, a new method for determination of the desired characteristic equation and zero location of commensurate fractional order systems is presented. The concept of the characteristic ratio is extended for zero-including commensurate fractional order systems. The generalized version of characteristic ratios is defined such that the time-scaling property of characteristic ratios is also preserved. The monotonicity of the magnitude frequency response is employed to assign the generalized characteristic ratios for commensurate fractional order transfer functions with one zero. A simple pattern for characteristic ratios is proposed to reach a non-overshooting step response. Then, the proposed pattern is revisited to reach a low overshoot (say for example 2%) step response. Finally, zero-including controllers such as fractional order PI or lag (lead) controllers are designed using generalized characteristic ratios assignment method. Numerical simulations are provided to show the efficiency of the so designed controllers. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
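
    For reference, in the integer-order setting the characteristic ratios of a polynomial a_0 + a_1 s + ... + a_n s^n are commonly defined as alpha_i = a_i^2 / (a_{i-1} a_{i+1}). A minimal sketch of that baseline definition; the paper's generalization to commensurate fractional orders and zero-including transfer functions is not reproduced here:

```python
def characteristic_ratios(coeffs):
    """coeffs = [a0, a1, ..., an] of a0 + a1*s + ... + an*s^n.
    Integer-order characteristic ratios: alpha_i = a_i^2 / (a_{i-1} * a_{i+1})."""
    return [coeffs[i] ** 2 / (coeffs[i - 1] * coeffs[i + 1])
            for i in range(1, len(coeffs) - 1)]

# (s + 1)^3 = 1 + 3 s + 3 s^2 + s^3: all characteristic ratios equal 3.
print(characteristic_ratios([1.0, 3.0, 3.0, 1.0]))  # [3.0, 3.0]
```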

  6. An Investigation of High-Cycle Fatigue Models for Metallic Structures Exhibiting Snap-Through Response

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.; Sweitzer, Karl A.

    2007-01-01

    A study is undertaken to develop a methodology for determining the suitability of various high-cycle fatigue models for metallic structures subjected to combined thermal-acoustic loadings. Two features of this problem differentiate it from the fatigue of structures subject to acoustic loading alone. Potentially large mean stresses associated with the thermally pre- and post-buckled states require models capable of handling those conditions. Snap-through motion between multiple post-buckled equilibrium positions introduces very high alternating stress. The thermal-acoustic time history response of a clamped aluminum beam structure with geometric and material nonlinearities is determined via numerical simulation. A cumulative damage model is employed using a rainflow cycle counting scheme and fatigue estimates are made for 2024-T3 aluminum using various non-zero mean fatigue models, including Walker, Morrow, Morrow with true fracture strength, and MMPDS. A baseline zero-mean model is additionally considered. It is shown that for this material, the Walker model produces the most conservative fatigue estimates when the stress response has a tensile mean introduced by geometric nonlinearity, but remains in the linear elastic range. However, when the loading level is sufficiently high to produce plasticity, the response becomes more fully reversed and the baseline, Morrow, and Morrow with true fracture strength models produce the most conservative fatigue estimates.

  7. New boundary conditions for fluid interaction with hydrophobic surface

    NASA Astrophysics Data System (ADS)

    Pochylý, František; Fialová, Simona; Havlásek, Michal

    2018-06-01

    Solution of both laminar and turbulent flow with consideration of hydrophobic surface is based on the original Navier assumption that the shear stress on the hydrophobic surface is directly proportional to the slipping velocity. In the previous work a laminar flow analysis with different boundary conditions was performed. The shear stress value on the tube walls directly depends on the pressure gradient. In the solution of the turbulent flow by the k-ɛ model, the occurrence of the fluctuation components of velocity on the hydrophobic surface is considered. The fluctuation components of the velocity affect the size of the adhesive forces. We assume that the boundary condition for ɛ depending on the velocity gradients will not need to be changed. When the liquid slips over the surface, non-zero fluctuation velocity components occur in the turbulent flow. These determine the non-zero value of the turbulent kinetic energy K. In addition, the fluctuation velocity components also influence the value of the adhesive forces, so it is necessary to include these in the formulation of new boundary conditions for turbulent flow on the hydrophobic surface.
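
    The Navier assumption cited above (wall shear stress proportional to slip velocity) has a closed form in laminar plane channel flow. A sketch with assumed parameter values, writing the proportionality constant k through a slip length b = mu/k; the analytic flow-rate enhancement over the no-slip case is 1 + 3b/h:

```python
import numpy as np

def poiseuille_slip(y, G, mu, h, b):
    """Plane Poiseuille velocity between plates at y = +/-h with Navier slip:
    tau_wall = (mu / b) * u_slip, i.e. slip length b."""
    return (G / (2 * mu)) * (h**2 - y**2) + G * h * b / mu

G, mu, h = 100.0, 1e-3, 1e-3  # assumed pressure gradient, viscosity, half-gap
y = np.linspace(-h, h, 2001)

Q = {}
for b in (0.0, 0.2 * h):      # no-slip wall vs. hydrophobic wall
    u = poiseuille_slip(y, G, mu, h, b)
    Q[b] = ((u[:-1] + u[1:]) / 2 * np.diff(y)).sum()  # trapezoidal flow rate
print(Q[0.2 * h] / Q[0.0])    # analytic ratio is 1 + 3*b/h = 1.6
```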

  8. Ergodicity, configurational entropy and free energy in pigment solutions and plant photosystems: influence of excited state lifetime.

    PubMed

    Jennings, Robert C; Zucchelli, Giuseppe

    2014-01-01

    We examine ergodicity and configurational entropy for a dilute pigment solution and for a suspension of plant photosystem particles in which both ground and excited state pigments are present. It is concluded that the pigment solution, due to the extreme brevity of the excited state lifetime, is non-ergodic and the configurational entropy approaches zero. Conversely, due to the rapid energy transfer among pigments, each photosystem is ergodic and the configurational entropy is positive. This decreases the free energy of the single photosystem pigment array by a small amount. On the other hand, the suspension of photosystems is non-ergodic and the configurational entropy approaches zero. The overall configurational entropy which, in principle, includes contributions from both the single excited photosystems and the suspension which contains excited photosystems, also approaches zero. Thus the configurational entropy upon photon absorption by either a pigment solution or a suspension of photosystem particles is approximately zero. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Army Net Zero Prove Out. Net Zero Waste Best Practices

    DTIC Science & Technology

    2014-11-18

    ...harvested for non-potable uses such as toilet flushing in buildings, cooling towers and irrigation. Implementation of these solutions should not increase overall...

  10. Gradient corrections to the exchange-correlation free energy

    DOE PAGES

    Sjostrom, Travis; Daligault, Jerome

    2014-10-07

    We develop the first-order gradient correction to the exchange-correlation free energy of the homogeneous electron gas for use in finite-temperature density functional calculations. Based on this, we propose and implement a simple temperature-dependent extension for functionals beyond the local density approximation. These finite-temperature functionals show improvement over zero-temperature functionals, as compared to path-integral Monte Carlo calculations for deuterium equations of state, and perform without computational cost increase compared to zero-temperature functionals, and so should be used for finite-temperature calculations. Furthermore, while the present functionals are valid at all temperatures including zero, non-negligible differences from zero-temperature functionals begin at temperatures above 10 000 K.

  11. A unified wall function for compressible turbulence modelling

    NASA Astrophysics Data System (ADS)

    Ong, K. C.; Chan, A.

    2018-05-01

    Turbulence modelling near the wall often requires a high mesh density clustered around the wall, with the first cells adjacent to the wall placed in the viscous sublayer. As a result, numerical stability is constrained by the smallest cell size, which incurs high computational overhead. In the present study, a unified wall function is developed which is valid in the viscous sublayer, buffer sublayer and inertial sublayer, and which includes the effects of compressibility, heat transfer and pressure gradient. The resulting wall function applies to compressible turbulence modelling for both isothermal and adiabatic wall boundary conditions with non-zero pressure gradient. Two simple wall function algorithms are implemented for practical computation of isothermal and adiabatic wall boundary conditions. The numerical results show that the wall function evaluates the wall shear stress and turbulent quantities of wall-adjacent cells over a wide range of non-dimensional wall distances and reduces the number and size of cells required.
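
    One classical single-formula wall function spanning the viscous, buffer and inertial sublayers is Spalding's law, shown here as an illustration of the unified-wall-function idea; it is not the compressible formulation the paper develops. Spalding gives y+ explicitly in terms of u+, so evaluating u+(y+) requires a one-dimensional root solve:

```python
import math

KAPPA, B = 0.41, 5.0  # standard log-law constants

def spalding_yplus(uplus):
    """Spalding's single formula, valid through viscous, buffer and log layers."""
    ku = KAPPA * uplus
    return uplus + math.exp(-KAPPA * B) * (
        math.exp(ku) - 1.0 - ku - ku**2 / 2.0 - ku**3 / 6.0)

def uplus_of_yplus(yplus, tol=1e-10):
    """Invert y+(u+) by bisection; the formula is monotone increasing."""
    lo, hi = 0.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if spalding_yplus(mid) < yplus:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Limits: u+ ~ y+ in the viscous sublayer; log law in the inertial sublayer.
print(uplus_of_yplus(1.0))    # ~1.0
print(uplus_of_yplus(300.0))  # ~ (1/0.41) ln(300) + 5.0
```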

  12. The role of the global magnetic field and thermal conduction on the structure of the accretion disks of all models

    NASA Astrophysics Data System (ADS)

    Farahinezhad, M.; Khesali, A. R.

    2018-05-01

    In this paper, the effects of a global magnetic field and thermal conduction on the vertical structure of accretion disks have been investigated. Four types of disks were examined: the gas-pressure-dominated standard disk, the radiation-pressure-dominated standard disk, the ADAF disk, and the slim disk. Moreover, the general shape of the magnetic field, including toroidal and poloidal components, is considered. The magnetohydrodynamic equations were solved in spherical coordinates using self-similar assumptions in the radial direction. Following previous authors, the polar velocity vθ is non-zero and Trφ was considered the dominant component of the stress tensor. The results show that the disk becomes thicker compared to the non-magnetic case. It has also been shown that the presence of thermal conduction in the ADAF model makes the disk thicker; the disk is expanded in the standard model.

  13. Evaluation of the Use of Zero-Augmented Regression Techniques to Model Incidence of Campylobacter Infections in FoodNet.

    PubMed

    Tremblay, Marlène; Crim, Stacy M; Cole, Dana J; Hoekstra, Robert M; Henao, Olga L; Döpfer, Dörte

    2017-10-01

    The Foodborne Diseases Active Surveillance Network (FoodNet) is currently using a negative binomial (NB) regression model to estimate temporal changes in the incidence of Campylobacter infection. FoodNet active surveillance in 483 counties collected data on 40,212 Campylobacter cases between years 2004 and 2011. We explored models that disaggregated these data to allow us to account for demographic, geographic, and seasonal factors when examining changes in incidence of Campylobacter infection. We hypothesized that modeling structural zeros and including demographic variables would increase the fit of FoodNet's Campylobacter incidence regression models. Five different models were compared: NB without demographic covariates, NB with demographic covariates, hurdle NB with covariates in the count component only, hurdle NB with covariates in both zero and count components, and zero-inflated NB with covariates in the count component only. Of the models evaluated, the nonzero-augmented NB model with demographic variables provided the best fit. Results suggest that even though zero inflation was not present at this level, individualizing the level of aggregation and using different model structures and predictors per site might be required to correctly distinguish between structural and observational zeros and account for risk factors that vary geographically.
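
    The kind of model comparison described can be illustrated by fitting Poisson and ZIP models by maximum likelihood and comparing AIC. A sketch on simulated data with structural zeros, using hypothetical parameters rather than FoodNet data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Simulated counts: 30% structural zeros (never at risk), Poisson(3) otherwise.
n = 5000
at_risk = rng.random(n) > 0.3
y = np.where(at_risk, rng.poisson(3.0, n), 0)

def nll_poisson(theta):
    lam = np.exp(theta[0])
    return -(y * np.log(lam) - lam - gammaln(y + 1)).sum()

def nll_zip(theta):
    lam, pi = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    logp_pois = y * np.log(lam) - lam - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log(pi + (1 - pi) * np.exp(-lam)),  # zeros from both states
                  np.log(1 - pi) + logp_pois)            # positives from Poisson
    return -ll.sum()

aic = {}
for name, nll, k, x0 in [("Poisson", nll_poisson, 1, [0.0]),
                         ("ZIP", nll_zip, 2, [0.0, 0.0])]:
    res = minimize(nll, x0, method="Nelder-Mead")
    aic[name] = 2 * k + 2 * res.fun  # AIC = 2k - 2 log-likelihood
print(aic)  # ZIP should fit the inflated zeros far better
```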

  14. On performance of parametric and distribution-free models for zero-inflated and over-dispersed count responses.

    PubMed

    Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M

    2015-10-30

    Zero-inflated Poisson (ZIP) and negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of Poisson and NB, the key difference between ZIP and ZINB is the allowance for overdispersion by the ZINB in its NB component in modeling the count response for the at-risk group. Overdispersion arising in practice too often does not follow the NB, and applications of ZINB to such data yield invalid inference. If sources of overdispersion are known, other parametric models may be used to directly model the overdispersion. Such models too are subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like the generalized estimating equations, the proposed approach requires no elaborate distribution assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
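
    The zero probabilities that distinguish these models are closed-form. A minimal sketch, with hypothetical mean, dispersion and mixing values, showing how NB overdispersion and structural zeros each inflate P(Y = 0) relative to a Poisson with the same mean:

```python
from math import exp

def poisson_p0(mu):
    """P(Y = 0) under Poisson with mean mu."""
    return exp(-mu)

def nb_p0(mu, theta):
    """P(Y = 0) under negative binomial with mean mu and dispersion theta
    (Var = mu + mu**2 / theta)."""
    return (theta / (theta + mu)) ** theta

def zinb_p0(mu, theta, pi):
    """P(Y = 0) under zero-inflated NB: structural plus NB sampling zeros."""
    return pi + (1 - pi) * nb_p0(mu, theta)

mu, theta, pi = 2.0, 0.5, 0.3  # hypothetical mean, dispersion, zero-inflation
print(poisson_p0(mu), nb_p0(mu, theta), zinb_p0(mu, theta, pi))
```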

  15. Global changes in gene expression, assayed by microarray hybridization and quantitative RT-PCR, during acclimation of three Arabidopsis thaliana accessions to sub-zero temperatures after cold acclimation.

    PubMed

    Le, Mai Q; Pagter, Majken; Hincha, Dirk K

    2015-01-01

    During cold acclimation, plants increase in freezing tolerance in response to low non-freezing temperatures. This is accompanied by many physiological, biochemical and molecular changes that have been extensively investigated. In addition, plants of many species, including Arabidopsis thaliana, become more freezing tolerant during exposure to mild, non-damaging sub-zero temperatures after cold acclimation. Hardly any information is available about the molecular basis of this adaptation. Here, we have used microarrays and a qRT-PCR primer platform covering 1,880 genes encoding transcription factors (TFs) to monitor changes in gene expression in the Arabidopsis accessions Columbia-0, Rschew and Tenela during the first 3 days of sub-zero acclimation at -3 °C. The results indicate that gene expression during sub-zero acclimation follows a tightly controlled time-course. AP2/EREBP and WRKY TFs in particular may be important regulators of sub-zero acclimation, although the CBF signal transduction pathway seems to be less important during sub-zero than during cold acclimation. Globally, we estimate that approximately 5% of all Arabidopsis genes are regulated during sub-zero acclimation. Photosynthesis-related genes in particular are down-regulated, and genes belonging to the functional classes of cell wall biosynthesis, hormone metabolism and RNA regulation of transcription are up-regulated. Collectively, these data provide the first global analysis of gene expression during sub-zero acclimation and allow the identification of candidate genes for forward and reverse genetic studies into the molecular mechanisms of sub-zero acclimation.

  16. Removal of singularity in radial Langmuir probe models for non-zero ion temperature

    NASA Astrophysics Data System (ADS)

    Regodón, Guillermo Fernando; Fernández Palop, José Ignacio; Tejero-del-Caz, Antonio; Díaz-Cabrera, Juan Manuel; Carmona-Cabezas, Rafael; Ballesteros, Jerónimo

    2017-10-01

    We solve a radial theoretical model that describes the ion sheath around a cylindrical Langmuir probe with finite non-zero ion temperature, in which a singularity at an a priori unknown point prevents direct integration. The singularity appears naturally in fluid models when the velocity of the ions reaches the local ion speed of sound. The solutions are smooth and continuous and are valid from the plasma to the probe with no need for asymptotic matching. The solutions that we present are valid for any value of the positive-ion to electron temperature ratio and for any constant polytropic coefficient. The model is solved numerically to obtain the electric potential and the ion population density profiles for any given positive ion current collected by the probe. The ion-current to probe-voltage characteristic curves and the Sonin plot are calculated in order to use the results of the model in plasma diagnosis. The proposed methodology is adaptable to other geometries and to the presence of other presheath mechanisms.

  17. Applying the zero-inflated Poisson model with random effects to detect abnormal rises in school absenteeism indicating infectious diseases outbreak.

    PubMed

    Song, X X; Zhao, Q; Tao, T; Zhou, C M; Diwan, V K; Xu, B

    2018-05-30

    Records of absenteeism from primary schools are valuable data for infectious disease surveillance. However, the analysis of absenteeism data is complicated by clustering at zero, non-independence and overdispersion. This study aimed to generate an appropriate model for the absenteeism data collected in a European Commission-granted project for infectious disease surveillance in rural China, and to evaluate the validity and timeliness of the resulting model for early warning of infectious disease outbreaks. Four steps were taken: (1) building a 'well-fitting' model by the zero-inflated Poisson model with random effects (ZIP-RE) using the absenteeism data from the first implementation year; (2) applying the resulting model to predict the 'expected' number of absenteeism events in the second implementation year; (3) computing the differences between the observations and the expected values (O-E values) to generate an alternative series of data; (4) evaluating the early warning validity and timeliness of the observational data and the model-based O-E values via the EARS-3C algorithms with regard to the detection of real cluster events. The results indicate that ZIP-RE and its corresponding O-E values can improve the detection of aberrations, reduce false-positive signals, and are applicable to zero-inflated data.
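For illustration, the EARS C-family algorithms mentioned above compare each day's value against a short moving baseline. A minimal sketch of the C1 variant (7-day baseline, 3-standard-deviation threshold, per the standard EARS definition), which can be applied either to raw counts or to model-based O-E residuals as in step (4):

```python
import numpy as np

def ears_c1(series, baseline=7, threshold=3.0):
    """EARS C1: flag day t when (x_t - mean) / sd of the preceding `baseline`
    days exceeds `threshold`; the sd is floored to avoid division by zero."""
    x = np.asarray(series, dtype=float)
    alarms = np.zeros(len(x), dtype=bool)
    for t in range(baseline, len(x)):
        window = x[t - baseline:t]
        mu, sd = window.mean(), max(window.std(ddof=1), 1e-8)
        alarms[t] = (x[t] - mu) / sd > threshold
    return alarms

# Hypothetical daily series (raw counts or O-E residuals): spike on day 7
counts = [2, 3, 2, 3, 2, 3, 2, 20, 2]
alarms = ears_c1(counts)
```

Feeding O-E residuals rather than raw counts removes the model-explained seasonal and cluster structure, which is how the study reduces false-positive signals.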

  18. Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.

    PubMed

    Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J

    2017-06-01

    Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violations of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Marginalized two-part models have recently been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is generated from a Weibull or similar distribution, such as the Gamma or truncated Gaussian.
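A minimal two-part sketch of the zero-heavy continuous setting (illustrative only; the paper's actual proposal is a marginalized Weibull mixture regression with covariates, not this covariate-free version): a point mass at zero plus a Weibull for the positives, with the marginal mean combining both parts.

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

rng = np.random.default_rng(2)
p0_true, shape_true, scale_true, n = 0.5, 1.5, 10.0, 3000

# Zero-heavy outcome: a point mass at zero plus a Weibull for the positives
w = stats.weibull_min.rvs(shape_true, scale=scale_true, size=n, random_state=rng)
y = np.where(rng.random(n) < p0_true, 0.0, w)

# Two-part fit: each component has its own likelihood
p0_hat = np.mean(y == 0)                                       # zero component
c_hat, _, scale_hat = stats.weibull_min.fit(y[y > 0], floc=0)  # positive part

# Marginal (overall) mean combines both parts: E[Y] = (1 - p0) * scale * G(1 + 1/c)
marginal_mean = (1.0 - p0_hat) * scale_hat * gamma(1.0 + 1.0 / c_hat)
```

The marginal-mean formula is the quantity of interest that standard two-part models do not parameterize directly, which is the motivation for the marginalized formulation.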

  19. Lower Current Large Deviations for Zero-Range Processes on a Ring

    NASA Astrophysics Data System (ADS)

    Chleboun, Paul; Grosskinsky, Stefan; Pizzoferrato, Andrea

    2017-04-01

    We study lower large deviations for the current of totally asymmetric zero-range processes on a ring with concave current-density relation. We use an approach by Jensen and Varadhan which has previously been applied to exclusion processes, to realize current fluctuations by travelling wave density profiles corresponding to non-entropic weak solutions of the hyperbolic scaling limit of the process. We further establish a dynamic transition, where large deviations of the current below a certain value are no longer typically attained by non-entropic weak solutions, but by condensed profiles, where a non-zero fraction of all the particles accumulates on a single fixed lattice site. This leads to a general characterization of the rate function, which is illustrated by providing detailed results for four generic examples of jump rates, including constant rates, decreasing rates, unbounded sublinear rates and asymptotically linear rates. Our results on the dynamic transition are supported by numerical simulations using a cloning algorithm.

  20. Disorder effect on quantum transport properties of ultra-thin Fe film

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaotian; Nakamura, Kohji; Shindou, Ryuichi

    2015-03-01

    Ferromagnetic ultrathin films are experimentally known to often exhibit perpendicular magnetic anisotropy when placed on certain substrates. Based on reported ab initio band calculations of a free-standing Fe monolayer and of one on an MgO substrate, we will introduce an effective tight-binding model that captures part of the electronic structure near the Fermi level in both cases. We will show that the model supports electronic bands with non-zero Chern number and chiral edge modes that cross a direct band gap on the order of 50 meV. Unfortunately, however, the direct band gap is also masked by other dispersive bands that have non-zero Berry curvature in k-space. To demonstrate how disorder suppresses the conducting character of the latter bulk bands while leaving that of the chiral edge modes intact, we will clarify the behavior of the localization length and conductance in the effective model with on-site disorder.

  1. Anomaly-free dark matter models are not so simple

    NASA Astrophysics Data System (ADS)

    Ellis, John; Fairbairn, Malcolm; Tunney, Patrick

    2017-08-01

    We explore the anomaly-cancellation constraints on simplified dark matter (DM) models with an extra U(1)' gauge boson Z'. We show that, if the Standard Model (SM) fermions are supplemented by a single DM fermion χ that is a singlet of the SM gauge group, and the SM quarks have non-zero U(1)' charges, the SM leptons must also have non-zero U(1)' charges, in which case LHC searches impose strong constraints on the Z' mass. Moreover, the DM fermion χ must have a vector-like U(1)' coupling. If one requires the DM particle to have a purely axial U(1)' coupling, which would be the case if χ were a Majorana fermion and would reduce the impact of direct DM searches, the simplest possibility is that it is accompanied by one other new singlet fermion, but in this case the U(1)' charges of the SM leptons still do not vanish. This is also true in a range of models with multiple new singlet fermions with identical charges. Searching for a leptophobic model, we then introduce extra fermions that transform non-trivially under the SM gauge group. We find several such models if the DM fermion is accompanied by two or more other new fermions with non-identical charges, which may have interesting experimental signatures. We present benchmark representatives of the various model classes we discuss.

  2. SMALL POPULATIONS REQUIRE SPECIFIC MODELING APPROACHES FOR ASSESSING RISK

    EPA Science Inventory

    All populations face non-zero risks of extinction. However, the risks for small populations, and therefore the modeling approaches necessary to predict them, are different from those of large populations. These differences are currently hindering assessment of risk to small pop...

  3. Calibration of short rate term structure models from bid-ask coupon bond prices

    NASA Astrophysics Data System (ADS)

    Gomes-Gonçalves, Erika; Gzyl, Henryk; Mayoral, Silvia

    2018-02-01

    In this work we use the method of maximum entropy in the mean to provide a model-free, non-parametric methodology that uses only market data to obtain the prices of the zero coupon bonds and, from these, a term structure of the short rates. The data used consist of the bid-ask price ranges of a few coupon bonds quoted in the market. The prices of the zero coupon bonds obtained in the first stage are then used as input to solve a recursive set of equations to determine a binomial recombinant model of the term structure of the short rates.
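The first stage above amounts to recovering discount factors from coupon-bond cashflows. With exact mid prices and as many bonds as maturities this collapses to a linear solve, a deliberate simplification of the paper's entropy-based treatment of bid-ask ranges (the bonds and prices below are hypothetical):

```python
import numpy as np

# Annual cashflows of three coupon bonds (rows) at times 1, 2, 3 years (columns)
C = np.array([
    [105.0,   0.0,   0.0],   # 1y bond, 5% coupon
    [  4.0, 104.0,   0.0],   # 2y bond, 4% coupon
    [  6.0,   6.0, 106.0],   # 3y bond, 6% coupon
])
prices = np.array([101.0, 99.5, 102.0])   # hypothetical mid prices

# Solve C @ z = prices for the zero-coupon (discount) prices z
z = np.linalg.solve(C, prices)
t = np.array([1.0, 2.0, 3.0])
zero_rates = z ** (-1.0 / t) - 1.0        # annually compounded zero rates
```

The maximum-entropy method replaces this exact solve with an inference problem because bid-ask ranges make the system a set of interval constraints rather than equalities.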

  4. Clockwork seesaw mechanisms

    NASA Astrophysics Data System (ADS)

    Park, Seong Chan; Shin, Chang Sub

    2018-01-01

    We propose new mechanisms for small neutrino masses based on the clockwork mechanism. The Standard Model neutrinos and lepton-number-violating operators communicate through the zero mode of the clockwork gears; one of the two couplings of the zero mode is exponentially suppressed by the clockwork mechanism. Including all known examples of the clockwork realization of neutrino masses, different types of models are obtained depending on the profile and chirality of the zero-mode fermion. Each type of realization would have phenomenologically distinctive features associated with the accompanying heavy neutrinos.

  5. Plasminogen activator activity in tears of pregnant women.

    PubMed

    Csutak, Adrienne; Steiber, Zita; Tőzsér, József; Jakab, Attila; Berta, András; Silver, David M

    2017-01-01

    Plasminogen activator activity (PAA) in tears of pregnant women was investigated at various gestation times to assess the availability of plasminogen activator for aiding potential corneal wound healing processes during pregnancy. PAA was measured by a spectrophotometric method. The analysis used 91 tear samples from pregnant and non-pregnant women, supplemented with 10 additional tear PAA measurements from non-pregnant women obtained in a previous study. Tear levels of PAA in pregnant women formed a bimodal distribution: during pregnancy, the tear PAA level was either zero or clearly non-zero. When non-zero, the tear PAA level was dissociated from gestation time and no different from non-pregnant and post-pregnant levels. The frequency of occurrence of zero-level tear PAA increased with gestation: 16%, 17% and 46% had zero tear PAA in samples taken from women in the first, second and third trimester, respectively. Overall, of the tear samples taken from women during pregnancy, a total of 26% were at zero tear PAA. The remaining tear samples had non-zero tear PAA values throughout gestation equivalent to non-pregnant tear PAA values, suggesting local control of the source of PAA in tears. Given the importance of the plasminogen activator system in tears to wound healing in the cornea, and the high occurrence of zero tear PAA in our sample of pregnant women, elective corneal surgery would be contraindicated. If corneal surgery is nevertheless necessary, the tear PAA level would be worth checking, and patients with low levels should be closely observed during the postoperative period.

  6. More about unphysical zeroes in quark mass matrices

    NASA Astrophysics Data System (ADS)

    Emmanuel-Costa, David; González Felipe, Ricardo

    2017-01-01

    We look for all weak bases that lead to texture zeroes in the quark mass matrices and contain a minimal number of parameters in the framework of the standard model. Since there are ten physical observables, namely, six nonvanishing quark masses, three mixing angles and one CP phase, the maximum number of texture zeroes in both quark sectors is altogether nine. The nine zero entries can only be distributed between the up- and down-quark sectors in matrix pairs with six and three texture zeroes or five and four texture zeroes. In the weak basis where a quark mass matrix is nonsingular and has six zeroes in one sector, we find that there are 54 matrices with three zeroes in the other sector, obtainable through right-handed weak basis transformations. It is also found that all pairs composed of a nonsingular matrix with five zeroes and a nonsingular and nondecoupled matrix with four zeroes simply correspond to a weak basis choice. Without any further assumptions, none of these pairs of up- and down-quark mass matrices has physical content. It is shown that all non-weak-basis pairs of quark mass matrices that contain nine zeroes are not compatible with current experimental data. The particular case of the so-called nearest-neighbour-interaction pattern is also discussed.

  7. Roles of epsilon-near-zero (ENZ) and mu-near-zero (MNZ) materials in optical metatronic circuit networks.

    PubMed

    Abbasi, Fereshteh; Engheta, Nader

    2014-10-20

    The concept of metamaterial-inspired nanocircuits, dubbed metatronics, was introduced in [Science 317, 1698 (2007); Phys. Rev. Lett. 95, 095504 (2005)]. It was suggested how optical lumped elements (nanoelements) can be made using subwavelength plasmonic or non-plasmonic particles. As a result, the optical metatronic equivalents of a number of electronic circuits, such as frequency mixers and filters, were suggested. In this work we further expand the concept of electronic lumped element networks into optical metatronic circuits and suggest a conceptual model applicable to various metatronic passive networks. In particular, we differentiate between the series and parallel networks using epsilon-near-zero (ENZ) and mu-near-zero (MNZ) materials. We employ layered structures with subwavelength thicknesses for the nanoelements as the building blocks of collections of metatronic networks. Furthermore, we explore how by choosing the non-zero constitutive parameters of the materials with specific dispersions, either Drude or Lorentzian dispersion with suitable parameters, capacitive and inductive responses can be achieved in both series and parallel networks. Next, we proceed with the one-to-one analogy between electronic circuits and optical metatronic filter layered networks and justify our analogies by comparing the frequency response of the two paradigms. Finally, we examine the material dispersion of near-zero relative permittivity as well as other physically important material considerations such as losses.

  8. Calibration of a rotating accelerometer gravity gradiometer using centrifugal gradients

    NASA Astrophysics Data System (ADS)

    Yu, Mingbiao; Cai, Tijing

    2018-05-01

    The purpose of this study is to calibrate scale factors and equivalent zero biases of a rotating accelerometer gravity gradiometer (RAGG). We calibrate scale factors by determining the relationship between the centrifugal gradient excitation and RAGG response. Compared with calibration by changing the gravitational gradient excitation, this method does not need test masses and is easier to implement. The equivalent zero biases are superpositions of self-gradients and the intrinsic zero biases of the RAGG. A self-gradient is the gravitational gradient produced by surrounding masses, and it correlates well with the RAGG attitude angle. We propose a self-gradient model that includes self-gradients and the intrinsic zero biases of the RAGG. The self-gradient model is a function of the RAGG attitude, and it includes parameters related to surrounding masses. The calibration of equivalent zero biases determines the parameters of the self-gradient model. We provide detailed procedures and mathematical formulations for calibrating scale factors and parameters in the self-gradient model. A RAGG physical simulation system substitutes for the actual RAGG in the calibration and validation experiments. Four point masses simulate four types of surrounding masses producing self-gradients. Validation experiments show that the self-gradients predicted by the self-gradient model are consistent with those from the outputs of the RAGG physical simulation system, suggesting that the presented calibration method is valid.

  9. A text zero-watermarking method based on keyword dense interval

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Zhu, Yuesheng; Jiang, Yifeng; Qing, Yin

    2017-07-01

    Digital watermarking has been recognized as a useful technology for the copyright protection and authentication of digital information. However, previous methods have rarely focused on the key content of the digital carrier. Protecting the key content is more targeted and can be applied to different kinds of digital information, including text, images and video. In this paper, we take text as the research object and propose a text zero-watermarking method that uses keyword dense intervals (KDI) as the key content. First, we construct the zero-watermarking model by introducing the concept of the KDI and giving a method for KDI extraction. Second, we design a detection model that includes secondary generation of the zero-watermark and a similarity computation method for the keyword distribution. In addition, experiments are carried out, and the results show that the proposed method gives better performance than other available methods, especially against sentence-transformation and synonym-substitution attacks.

  10. Adaptation of non-linear mixed amount with zero amount response surface model for analysis of concentration-dependent synergism and safety with midazolam, alfentanil, and propofol sedation.

    PubMed

    Liou, J-Y; Ting, C-K; Teng, W-N; Mandell, M S; Tsou, M-Y

    2018-06-01

    The non-linear mixed amount with zero amounts response surface model can be used to describe drug interactions and predict loss of response to noxious stimuli and respiratory depression. We aimed to determine whether this response surface model could be used to model sedation with the triple drug combination of midazolam, alfentanil and propofol. Sedation was monitored in 56 patients undergoing gastrointestinal endoscopy (modelling group) using modified alertness/sedation scores. A total of 227 combinations of effect-site concentrations were derived from pharmacokinetic models. Accuracy and the area under the receiver operating characteristic curve were calculated. Accuracy was defined as an absolute difference <0.5 between the binary patient responses and the predicted probability of loss of responsiveness. Validation was performed with a separate group (validation group) of 47 patients. Effect-site concentrations ranged from 0 to 108 ng ml-1 for midazolam, 0-156 ng ml-1 for alfentanil, and 0-2.6 μg ml-1 for propofol in both groups. Synergy was strongest with midazolam and alfentanil (24.3% decrease in U50, the concentration for half-maximal drug effect). Adding propofol as a third drug offered little additional synergy (25.8% decrease in U50). Two patients (3%) experienced respiratory depression. Model accuracy was 83% and 76%, and the area under the curve was 0.87 and 0.80, for the modelling and validation groups, respectively. The non-linear mixed amount with zero amounts triple interaction response surface model predicts patient sedation responses during endoscopy with combinations of midazolam, alfentanil, or propofol that fall within clinical use. Our model also suggests a safety margin of alfentanil fraction <0.12 that avoids respiratory depression after loss of responsiveness. Copyright © 2018 British Journal of Anaesthesia. Published by Elsevier Ltd. All rights reserved.

  11. Towards overcoming the Monte Carlo sign problem with tensor networks

    NASA Astrophysics Data System (ADS)

    Bañuls, Mari Carmen; Cichy, Krzysztof; Ignacio Cirac, J.; Jansen, Karl; Kühn, Stefan; Saito, Hana

    2017-03-01

    The study of lattice gauge theories with Monte Carlo simulations is hindered by the infamous sign problem that appears under certain circumstances, in particular at non-zero chemical potential. So far, there is no universal method to overcome this problem. However, recent years have brought a new class of non-perturbative Hamiltonian techniques named tensor networks, where the sign problem is absent. In previous work, we have demonstrated that this approach, in particular matrix product states in 1+1 dimensions, can be used to perform precise calculations in a lattice gauge theory, the massless and massive Schwinger model. We have computed the mass spectrum of this theory, its thermal properties and real-time dynamics. In this work, we review these results and we extend our calculations to the case of two flavours and non-zero chemical potential. We are able to reliably reproduce known analytical results for this model, thus demonstrating that tensor networks can tackle the sign problem of a lattice gauge theory at finite density.

  12. Non-zero mean and asymmetry of neuronal oscillations have different implications for evoked responses.

    PubMed

    Nikulin, Vadim V; Linkenkaer-Hansen, Klaus; Nolte, Guido; Curio, Gabriel

    2010-02-01

    The aim of the present study was to show analytically and with simulations that it is the non-zero mean of neuronal oscillations, and not an amplitude asymmetry of peaks and troughs, that is a prerequisite for the generation of evoked responses through a mechanism of amplitude modulation of oscillations. Secondly, we detail the rationale and implementation of the "baseline-shift index" (BSI) for deducing whether empirical oscillations have non-zero mean. Finally, we illustrate with empirical data why the "amplitude fluctuation asymmetry" (AFA) index should be used with caution in research aimed at explaining variability in evoked responses through a mechanism of amplitude modulation of ongoing oscillations. An analytical approach, simulations and empirical MEG data were used to compare the specificity of BSI and AFA index to differentiate between a non-zero mean and a non-sinusoidal shape of neuronal oscillations. Both the BSI and the AFA index were sensitive to the presence of non-zero mean in neuronal oscillations. The AFA index, however, was also sensitive to the shape of oscillations even when they had a zero mean. Our findings indicate that it is the non-zero mean of neuronal oscillations, and not an amplitude asymmetry of peaks and troughs, that is a prerequisite for the generation of evoked responses through a mechanism of amplitude modulation of oscillations. A clear distinction should be made between the shape and non-zero mean properties of neuronal oscillations. This is because only the latter contributes to evoked responses, whereas the former does not. Copyright (c) 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
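The central claim can be illustrated by a toy simulation (this is not the authors' analytical derivation, nor their BSI/AFA computations): amplitude modulation of a zero-mean oscillation averages out across trials with random phases, whereas the same modulation of a non-zero-mean oscillation leaves a baseline shift in the trial average.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f, n_trials = 500, 10.0, 200        # sampling rate (Hz), oscillation (Hz)
t = np.arange(0, 2.0, 1 / fs)           # 2-s trials; "stimulus" at t = 1 s

def average_response(mean_offset):
    """Trial average when the oscillation amplitude drops after the stimulus;
    `mean_offset` scales with the amplitude, i.e. a non-zero oscillation mean."""
    avg = np.zeros_like(t)
    for _ in range(n_trials):
        amp = abs(1.0 + 0.5 * rng.standard_normal())   # trial-varying amplitude
        env = np.where(t < 1.0, amp, 0.3 * amp)        # post-stimulus suppression
        phase = rng.uniform(0, 2 * np.pi)              # random ongoing phase
        avg += env * (np.sin(2 * np.pi * f * t + phase) + mean_offset)
    return avg / n_trials

zero_mean_avg = average_response(0.0)   # averages out: no evoked deflection
nonzero_avg = average_response(0.5)     # a baseline shift survives averaging
```

Because the offset scales with the envelope, amplitude suppression changes the mean level of the averaged signal only in the non-zero-mean case, which is the prerequisite the study identifies.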

  13. Helicopter Controllability

    DTIC Science & Technology

    1989-09-01

    3. Program CC. Systems Technology, Inc. (STI) of Hawthorne, CA, develops and markets PC control system analysis and design software including...is marketed in Palo Alto, CA, by Applied i and can be used for both linear and non-linear control system analysis. Using TUTSIM involves developing...gravity centroid (ucg) can be calculated as ucg = (Σ pi − Σ zi) / (n − m) (7-5), where pi = poles, zi = zeroes, n = number of poles, m = number of zeroes. If K

  14. Mass, radius and composition of the outer crust of nonaccreting cold neutron stars

    NASA Astrophysics Data System (ADS)

    Hempel, Matthias; Schaffner-Bielich, Jürgen

    2008-01-01

    The properties and composition of the outer crust of nonaccreting cold neutron stars are studied by applying the model of Baym, Pethick and Sutherland, extended by including higher-order corrections for the atomic binding, screening, exchange and zero-point energies. The most recent experimental nuclear data, from the 2003 atomic mass table of Audi, Wapstra and Thibault, are used. Extrapolation to the drip line is performed with various state-of-the-art theoretical nuclear models (finite-range droplet, relativistic nuclear field and non-relativistic Skyrme Hartree-Fock parameterizations). The different nuclear models are compared with respect to the mass, radius and nuclear composition of the outer crust for different neutron star configurations.

  15. Photonic zero mode in a non-Hermitian photonic lattice.

    PubMed

    Pan, Mingsen; Zhao, Han; Miao, Pei; Longhi, Stefano; Feng, Liang

    2018-04-03

    Zero-energy particles (such as Majorana fermions) are newly predicted quasiparticles and are expected to play an important role in fault-tolerant quantum computation. In conventional Hermitian quantum systems, however, such zero states are vulnerable and even become vanishing if couplings with surroundings are of the same topological nature. Here we demonstrate a robust photonic zero mode sustained by a spatial non-Hermitian phase transition in a parity-time (PT) symmetric lattice, despite the same topological order across the entire system. The non-Hermitian-enhanced topological protection ensures the reemergence of the zero mode at the phase transition interface when the two semi-lattices under different PT phases are decoupled effectively in their real spectra. Residing at the midgap level of the PT symmetric spectrum, the zero mode is topologically protected against topological disorder. We experimentally validated the robustness of the zero-energy mode by ultrafast heterodyne measurements of light transport dynamics in a silicon waveguide lattice.

  16. Cosmological parameter estimation from CMB and X-ray cluster after Planck

    NASA Astrophysics Data System (ADS)

    Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan; Hu, Bin

    2014-05-01

    We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data, and some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys and the Hubble Space Telescope H0 measurement. We find that the combined data strongly favor non-zero neutrino masses at more than the 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude AL to vary, we find AL > 1 at the 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < -1 at the 3σ confidence level. The estimate of the matter power spectrum amplitude σ8 is discrepant with the Planck value at the 2σ confidence level, which reflects some tension between the X-ray cluster data and the Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift to the cluster mass function.

  17. Anomalous glassy dynamics in simple models of dense biological tissue

    NASA Astrophysics Data System (ADS)

    Sussman, Daniel M.; Paoluzzi, M.; Marchetti, M. Cristina; Manning, M. Lisa

    2018-02-01

    In order to understand the mechanisms for glassy dynamics in biological tissues and shed light on those in non-biological materials, we study the low-temperature disordered phase of 2D vertex-like models. Recently it has been noted that vertex models have quite unusual behavior in the zero-temperature limit, with rigidity transitions that are controlled by residual stresses and therefore exhibit very different scaling and phenomenology compared to particulate systems. Here we investigate the finite-temperature phase of two-dimensional Voronoi and vertex models, and show that they have highly unusual, sub-Arrhenius scaling of dynamics with temperature. We connect the anomalous glassy dynamics to features of the potential energy landscape associated with zero-temperature inherent states.

  18. A Comparison of Mathematical Models of Fish Mercury Concentration as a Function of Atmospheric Mercury Deposition Rate and Watershed Characteristics

    NASA Astrophysics Data System (ADS)

    Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.

    2009-12-01

    Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must be based solely on spatial correlation. Predictive regional scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g. dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g. SO4 availability, hill slope, lake size). This mathematical form has the desirable property that predicted tissue concentration will approach zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation in which all terms (including those for sources and physical conditions) are additive. 
An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain both to their utility in interpreting methylation and export processes and to their use in fisheries management.

  19. Quantum Capacity under Adversarial Quantum Noise: Arbitrarily Varying Quantum Channels

    NASA Astrophysics Data System (ADS)

    Ahlswede, Rudolf; Bjelaković, Igor; Boche, Holger; Nötzel, Janis

    2013-01-01

    We investigate entanglement transmission over an unknown channel in the presence of a third party (called the adversary), which is enabled to choose the channel from a given set of memoryless but non-stationary channels without informing the legitimate sender and receiver about the particular choice that he made. This channel model is called an arbitrarily varying quantum channel (AVQC). We derive a quantum version of Ahlswede's dichotomy for classical arbitrarily varying channels. This includes a regularized formula for the common randomness-assisted capacity for entanglement transmission of an AVQC. Quite surprisingly and in contrast to the classical analog of the problem involving the maximal and average error probability, we find that the capacity for entanglement transmission of an AVQC always equals its strong subspace transmission capacity. These results are accompanied by different notions of symmetrizability (zero-capacity conditions) as well as by conditions for an AVQC to have a capacity described by a single-letter formula. In the final part of the paper the capacity of the erasure-AVQC is computed and some light shed on the connection between AVQCs and zero-error capacities. Additionally, we show by entirely elementary and operational arguments motivated by the theory of AVQCs that the quantum, classical, and entanglement-assisted zero-error capacities of quantum channels are generically zero and are discontinuous at every positivity point.

  20. Three-dimensional wave evolution on electrified falling films

    NASA Astrophysics Data System (ADS)

    Tomlin, Ruben; Papageorgiou, Demetrios; Pavliotis, Greg

    2016-11-01

    We consider the full three-dimensional model for a thin viscous liquid film completely wetting a flat infinite solid substrate at some non-zero angle to the horizontal, with an electric field normal to the substrate far from the flow. Thin film flows have applications in cooling processes. Many studies have shown that the presence of interfacial waves increases heat transfer by orders of magnitude due to film thinning and convection effects. A long-wave asymptotics procedure yields a Kuramoto-Sivashinsky equation with a non-local term to model the weakly nonlinear evolution of the interface dynamics for overlying film arrangements, with a restriction on the electric field strength. The non-local term is always linearly destabilising and produces growth rates proportional to the cube of the magnitude of the wavenumber vector. A sufficiently strong electric field is able to promote non-trivial dynamics for subcritical Reynolds number flows where the flat interface is stable in the absence of an electric field. We present numerical simulations where we observe rich dynamical behavior with competing attractors, including "snaking" travelling waves and other fully three-dimensional wave formations. EPSRC studentship (RJT).

  1. Fast-scale non-linear distortion analysis of peak-current-controlled buck-boost inverters

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Dong, Shuai; Yi, Chuanzhi; Guan, Weimin

    2018-02-01

    This paper deals with fast-scale non-linear distortion behaviours including asymmetrical period-doubling bifurcation and zero-crossing distortion in peak-current-controlled buck-boost inverters. The underlying mechanisms of the fast-scale non-linear distortion behaviours in inverters are revealed. The folded bifurcation diagram is presented to analyse the asymmetrical phenomenon of fast-scale period-doubling bifurcation. In view of the effect of phase shift and current ripple, the analytical expressions for one pair of critical phase angles are derived by using the design-oriented geometrical current approach. It is shown that the phase shift between inductor current and capacitor voltage should be responsible for the zero-crossing distortion phenomenon. These results obtained here are useful to optimise the circuit design and improve the circuit performance.

  2. Impact of rainfall intensity on the transport of two herbicides in undisturbed grassed filter strip soil cores

    NASA Astrophysics Data System (ADS)

    Pot, V.; Šimůnek, J.; Benoit, P.; Coquet, Y.; Yra, A.; Martínez-Cordón, M.-J.

    2005-12-01

    Two series of displacement experiments with isoproturon and metribuzin herbicides were performed on two undisturbed grassed filter strip soil cores, under unsaturated steady-state flow conditions. Several rainfall intensities (0.070, 0.147, 0.161, 0.308 and 0.326 cm h⁻¹) were used. A water tracer (bromide) was simultaneously injected in each displacement experiment. A descriptive analysis of experimental breakthrough curves of bromide and herbicides combined with a modeling analysis showed an impact of rainfall intensity on the solute transport. Two contrasting physical non-equilibrium transport processes occurred. Multiple (three) porosity domains contributed to flow at the highest rainfall intensities, including preferential flow through macropore pathways. Macropores were no longer active at the intermediate and lowest velocities, and the observed preferential transport was described using dual-porosity-type models with zero or low flow in the matrix domain. Chemical non-equilibrium transport of herbicides was found at all rainfall intensities. Significantly higher estimated values of degradation rate parameters as compared to batch data were correlated with the degree of non-equilibrium sorption. Experimental breakthrough curves were analyzed using different physical and chemical equilibrium and non-equilibrium transport models: the convective-dispersive model (CDE), the dual-porosity model (MIM), the dual-permeability model (DP), and the triple-porosity, dual-permeability model (DP-MIM), each combined with both chemical instantaneous and kinetic sorption.

  3. Phase diagram of q-deformed Yang-Mills theory on S 2 at non-zero θ-angle

    NASA Astrophysics Data System (ADS)

    Okuyama, Kazumi

    2018-04-01

    We study the phase diagram of q-deformed Yang-Mills theory on S 2 at non-zero θ-angle using the exact partition function at finite N. By evaluating the exact partition function numerically, we find evidence for the existence of a series of phase transitions at non-zero θ-angle, as conjectured in [hep-th/0509004].

  4. Stable diffraction-management soliton in a periodic structure with alternating left-handed and right-handed media

    NASA Astrophysics Data System (ADS)

    Zhang, Jinggui

    2017-09-01

    In this paper, we first derive a modified two-dimensional non-linear Schrödinger equation including high-order diffraction (HOD) suitable for the propagation of an optical beam near the low-diffraction regime in Kerr non-linear media with spatial dispersion. Then, we apply our derived physical model to a designed two-dimensional configuration filled with alternate layers of a left-handed material (LHM) and a right-handed medium by employing the mean-field theory. It is found that the periodic structure including LHM may experience diminished, cancelled, and even reversed diffraction behaviours through engineering the relative thickness between both media. In particular, the variational method analytically predicts that close to the zero-diffraction regime, such a periodic structure can support stable diffraction-management solitons whose beamwidth and peak amplitude evolve periodically with the help of the HOD effect. Numerical simulation based on the split-step Fourier method confirms the analytical results.
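The split-step Fourier method named in the abstract is a standard scheme for this family of non-linear Schrödinger equations. As a hedged illustration only, the sketch below integrates the textbook 1D cubic NLS with its sech soliton, not the paper's modified 2D model with high-order diffraction; the grid size, domain length, and step count are arbitrary choices:

```python
import numpy as np

# Toy split-step Fourier solver for i u_z + (1/2) u_xx + |u|^2 u = 0.
N, Lx, dz, steps = 256, 40.0, 1e-3, 2000
x = (np.arange(N) - N // 2) * (Lx / N)
kx = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)

u = (1.0 / np.cosh(x)).astype(complex)   # fundamental sech soliton
power0 = float(np.sum(np.abs(u) ** 2))   # conserved by both sub-steps

lin = np.exp(-0.5j * kx ** 2 * dz)       # exact linear (diffraction) propagator
for _ in range(steps):
    u = np.fft.ifft(lin * np.fft.fft(u))       # linear sub-step in Fourier space
    u = u * np.exp(1j * np.abs(u) ** 2 * dz)   # Kerr nonlinear sub-step in real space

peak = float(np.max(np.abs(u)))          # stays close to 1: the soliton is preserved
print(round(peak, 3))
```

First-order (Lie) splitting is shown for brevity; symmetric Strang splitting, which halves the linear step on either side of the nonlinear one, gives second-order accuracy at essentially the same cost.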

  5. Dual simulation of the massless lattice Schwinger model with topological term and non-zero chemical potential

    NASA Astrophysics Data System (ADS)

    Göschl, Daniel

    2018-03-01

    We discuss simulation strategies for the massless lattice Schwinger model with a topological term and finite chemical potential. The simulation is done in a dual representation where the complex action problem is solved and the partition function is a sum over fermion loops, fermion dimers and plaquette-occupation numbers. We explore strategies to update the fermion loops coupled to the gauge degrees of freedom and check our results with conventional simulations (without topological term and at zero chemical potential), as well as with exact summation on small volumes. Some physical implications of the results are discussed.

  6. Control of nitrification/denitrification in an onsite two-chamber intermittently aerated membrane bioreactor with alkalinity and carbon addition: Model and experiment.

    PubMed

    Perera, Mahamalage Kusumitha; Englehardt, James D; Tchobanoglous, George; Shamskhorzani, Reza

    2017-05-15

    Denitrifying membrane bioreactors (MBRs) are being found useful in water reuse treatment systems, including net-zero water (nearly closed-loop), non-reverse osmosis-based, direct potable reuse (DPR) systems. In such systems nitrogen may need to be controlled in the MBR to meet the nitrate drinking water standard in the finished water. To achieve efficient nitrification and denitrification, the addition of alkalinity and external carbon may be required, and control of the carbon feed rate is then important. In this work, an onsite, two-chamber aerobic nitrifying/denitrifying MBR, representing one unit process of a net-zero water, non-reverse osmosis-based DPR system, was modeled as a basis for control of the MBR internal recycling rate, aeration rate, and external carbon feed rate. Specifically, the activated sludge model ASM2dSMP was further modified to represent the rate of recycling between separate aerobic and anoxic chambers, the rates of carbon and alkalinity feed, and a variable aeration schedule, and was demonstrated against field data. The optimal aeration pattern for the modeled reactor configuration and influent matrix was found to be 30 min of aeration in a 2 h cycle (104 m³ air/d per 1 m³/d average influent), to ultimately meet the nitrate drinking water standard. Optimal recycling ratios (inter-chamber flow to average daily flow) were found to be 1.5 and 3 during rest and mixing periods, respectively. The model can be used to optimize aeration pattern and recycling ratio in such MBRs, with slight modifications to reflect reactor configuration, influent matrix, and target nitrogen species concentrations, though some recalibration may be required. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. How the flow affects the phase behaviour and microstructure of polymer nanocomposites.

    PubMed

    Stephanou, Pavlos S

    2015-02-14

    We address the issue of flow effects on the phase behaviour of polymer nanocomposite melts by making use of a recently reported Hamiltonian set of evolution equations developed on principles of non-equilibrium thermodynamics. To this end, we calculate the spinodal curve, by computing values for the nanoparticle radius as a function of the polymer radius-of-gyration for which the second derivative of the generalized free energy of the system becomes zero. Under equilibrium conditions, we recover the phase diagram predicted by Mackay et al. [Science 311, 1740 (2006)]. Under non-equilibrium conditions, we account for the extra terms in the free energy due to changes in the conformations of polymer chains by the shear flow. Overall, our model predicts that flow enhances miscibility, since the corresponding miscibility window opens up for non-zero shear rate values.

  8. A review and comparison of Bayesian and likelihood-based inferences in beta regression and zero-or-one-inflated beta regression.

    PubMed

    Liu, Fang; Eugenio, Evercita C

    2018-04-01

    Beta regression is an increasingly popular statistical technique in medical research for modeling of outcomes that assume values in (0, 1), such as proportions and patient reported outcomes. When outcomes take values in the intervals [0,1), (0,1], or [0,1], zero-or-one-inflated beta (zoib) regression can be used. We provide a thorough review on beta regression and zoib regression in the modeling, inferential, and computational aspects via the likelihood-based and Bayesian approaches. We demonstrate the statistical and practical importance of correctly modeling the inflation at zero/one rather than ad hoc replacing them with values close to zero/one via simulation studies; the latter approach can lead to biased estimates and invalid inferences. We show via simulation studies that the likelihood-based approach is computationally faster in general than MCMC algorithms used in the Bayesian inferences, but runs the risk of non-convergence, large biases, and sensitivity to starting values in the optimization algorithm especially with clustered/correlated data, data with sparse inflation at zero and one, and data that warrant regularization of the likelihood. The disadvantages of the regular likelihood-based approach make the Bayesian approach an attractive alternative in these cases. Software packages and tools for fitting beta and zoib regressions in both the likelihood-based and Bayesian frameworks are also reviewed.
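A zoib outcome is a three-part mixture: point masses at 0 and 1 plus a beta density on (0, 1). A minimal sampling sketch of that data-generating process, with all mixture weights and beta shape parameters chosen purely for illustration (they are not values from the review):

```python
import random

def sample_zoib(p0, p1, a, b, rng=random):
    """Draw from a zoib mixture: 0 w.p. p0, 1 w.p. p1, else Beta(a, b) on (0, 1)."""
    u = rng.random()
    if u < p0:
        return 0.0
    if u < p0 + p1:
        return 1.0
    return rng.betavariate(a, b)

random.seed(0)
# Illustrative mixture: 10% zeros, 5% ones, Beta(2, 5) for the interior values.
draws = [sample_zoib(0.10, 0.05, 2.0, 5.0) for _ in range(10000)]
print(round(sum(draws) / len(draws), 3))  # overall mean of the mixture
```

Replacing the point masses with values "close to zero/one", the ad hoc fix the review warns against, would distort exactly the boundary probabilities p0 and p1 that a zoib model estimates directly.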

  9. Highly dissipative Hénon map behavior in the four-level model of the CO 2 laser with modulated losses

    NASA Astrophysics Data System (ADS)

    Pando L., C. L.; Acosta, G. A. Luna; Meucci, R.; Ciofini, M.

    1995-02-01

    We show that the four-level model for the CO 2 laser with modulated losses behaves in a qualitatively similar way to the highly dissipative Hénon map. The ubiquity of elements of the universal sequence, their related symbolic dynamics, and the presence of reverse bifurcations of chaotic bands in the model are reminiscent of the logistic map, which is the limit of the Hénon map when the Jacobian equals zero. The coexistence of attractors, the dynamics related to contraction of volumes in phase space, and the associated return maps can be correlated with those of the highly dissipative Hénon map.

  10. When Long-Range Zero-Lag Synchronization is Feasible in Cortical Networks

    PubMed Central

    Viriyopase, Atthaphon; Bojak, Ingo; Zeitler, Magteld; Gielen, Stan

    2012-01-01

    Many studies have reported long-range synchronization of neuronal activity between brain areas, in particular in the beta and gamma bands with frequencies in the range of 14–30 and 40–80 Hz, respectively. Several studies have reported synchrony with zero phase lag, which is remarkable considering the synaptic and conduction delays inherent in the connections between distant brain areas. This result has led to many speculations about the possible functional role of zero-lag synchrony, such as for neuronal communication, attention, memory, and feature binding. However, recent studies using recordings of single-unit activity and local field potentials report that neuronal synchronization may occur with non-zero phase lags. This raises the questions whether zero-lag synchrony can occur in the brain and, if so, under which conditions. We used analytical methods and computer simulations to investigate which connectivity between neuronal populations allows or prohibits zero-lag synchrony. We did so for a model where two oscillators interact via a relay oscillator. Analytical results and computer simulations were obtained for both type I Mirollo–Strogatz neurons and type II Hodgkin–Huxley neurons. We have investigated the dynamics of the model for various types of synaptic coupling and importantly considered the potential impact of Spike-Timing Dependent Plasticity (STDP) and its learning window. We confirm previous results that zero-lag synchrony can be achieved in this configuration. This is much easier to achieve with Hodgkin–Huxley neurons, which have a biphasic phase response curve, than for type I neurons. STDP facilitates zero-lag synchrony as it adjusts the synaptic strengths such that zero-lag synchrony is feasible for a much larger range of parameters than without STDP. PMID:22866034

  11. Shot-noise at a Fermi-edge singularity: Non-Markovian dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ubbelohde, N.; Maire, N.; Haug, R. J.

    2013-12-04

    For an InAs quantum dot we study the current shot noise at a Fermi-edge singularity in low temperature cross-correlation measurements. In the regime of the interaction effect the strong suppression of noise observed at zero magnetic field and the sequence of enhancement and suppression in magnetic field go beyond a Markovian master equation model. Qualitative and quantitative agreement can however be achieved by a generalized master equation model taking non-Markovian dynamics into account.

  12. The Effects of Impurities on Protein Crystal Growth and Nucleation: A Preliminary Study

    NASA Technical Reports Server (NTRS)

    Schall, Constance A.

    1998-01-01

    Kubota and Mullin (1995) devised a simple model to account for the effects of impurities on crystal growth of small inorganic and organic molecules in aqueous solutions. Experimentally, the relative step velocity and crystal growth of these molecules asymptotically approach zero or non-zero values with increasing concentrations of impurities. Alternatively, the step velocity and crystal growth can linearly approach zero as the impurity concentration increases. The Kubota-Mullin model assumes that the impurity exhibits Langmuirian adsorption onto the crystal surface. Decreases in step velocities and subsequent growth rates are related to the fractional coverage (theta) of the crystal surface by adsorbed impurities: theta = Kx / (1 + Kx), where x is the mole fraction of impurity in solution. In the presence of impurities, the relative step velocity, V/Vo, and the relative growth rate of a crystal face, G/Go, are proposed to conform to the following equation: V/Vo ≈ G/Go = 1 - (alpha)(theta). The adsorption of impurity is assumed to be rapid and in quasi-equilibrium with the crystal surface sites available. When the value of alpha, an effectiveness factor, is one, growth will asymptotically approach zero with increasing concentrations of impurity. At values less than one, growth approaches a non-zero value asymptotically. When alpha is much greater than one, there will be a linear relationship between impurity concentration and growth rates. Kubota and Mullin expect alpha to decrease with increasing supersaturation and shrinking size of a two-dimensional nucleus. It is expected that impurity effects on protein crystal growth will exhibit behavior similar to that of impurities in small-molecule growth. A number of proteins were added to purified chicken egg white lysozyme, and their effect on crystal nucleation and growth was assessed.
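The Kubota-Mullin relation is easy to explore numerically. In this sketch the Langmuir constant K and the effectiveness factors alpha are arbitrary illustrative values, not fitted quantities from the study:

```python
def relative_growth(x, K, alpha):
    """Kubota-Mullin: V/Vo = G/Go = 1 - alpha * theta,
    with Langmuir coverage theta = K*x / (1 + K*x)."""
    theta = K * x / (1.0 + K * x)
    return 1.0 - alpha * theta

K = 1e4  # assumed Langmuir adsorption constant (illustrative)
for alpha in (0.5, 1.0, 5.0):
    # alpha < 1: growth tends to the non-zero asymptote 1 - alpha;
    # alpha = 1: growth tends asymptotically to zero;
    # alpha >> 1: growth is cut off at a finite impurity level
    # (negative values of the formula just mean growth is fully suppressed).
    profile = [relative_growth(x, K, alpha) for x in (0.0, 1e-4, 1e-3, 1e-2)]
    print(alpha, [round(v, 3) for v in profile])
```

The three alpha regimes printed here reproduce the three qualitative behaviors described in the abstract: a non-zero asymptote, an asymptotic approach to zero, and a near-linear cutoff.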

  13. Exactly solved mixed spin-(1,1/2) Ising-Heisenberg diamond chain with a single-ion anisotropy

    NASA Astrophysics Data System (ADS)

    Lisnyi, Bohdan; Strečka, Jozef

    2015-03-01

    The mixed spin-(1,1/2) Ising-Heisenberg diamond chain with a single-ion anisotropy is exactly solved through the generalized decoration-iteration transformation and the transfer-matrix method. The decoration-iteration transformation is first used for establishing a rigorous mapping equivalence with the corresponding spin-1 Blume-Emery-Griffiths chain, which is subsequently exactly treated within the transfer-matrix technique. Apart from three classical ground states the model exhibits three striking quantum ground states in which a singlet-dimer state of the interstitial Heisenberg spins is accompanied either with a frustrated state or a polarized state or a non-magnetic state of the nodal Ising spins. It is evidenced that two magnetization plateaus at zero and/or one-half of the saturation magnetization may appear in low-temperature magnetization curves. The specific heat may display remarkable temperature dependences with up to three and four distinct round maxima in a zero and non-zero magnetic field, respectively.

  14. Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Holster, Trevor A.; Lake, J.

    2016-01-01

    Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…

  15. The influence of further-neighbor spin-spin interaction on a ground state of 2D coupled spin-electron model in a magnetic field

    NASA Astrophysics Data System (ADS)

    Čenčariková, Hana; Strečka, Jozef; Gendiar, Andrej; Tomašovičová, Natália

    2018-05-01

    An exhaustive ground-state analysis of an extended two-dimensional (2D) correlated spin-electron model consisting of Ising spins localized on nodal lattice sites and mobile electrons delocalized over pairs of decorating sites is performed within the framework of rigorous analytical calculations. The investigated model, defined on an arbitrary 2D doubly decorated lattice, takes into account the kinetic energy of the mobile electrons, the nearest-neighbor Ising coupling between the localized spins and mobile electrons, the further-neighbor Ising coupling between the localized spins, and the Zeeman energy. The ground-state phase diagrams are examined for a wide range of model parameters for both ferromagnetic and antiferromagnetic interaction between the nodal Ising spins and a non-zero external magnetic field. It is found that non-zero values of the further-neighbor interaction lead to the formation of new quantum states as a consequence of competition between all considered interaction terms. Moreover, the new quantum states are accompanied by different magnetic features and thus, several kinds of field-driven phase transitions are observed.

  16. DEsingle for detecting three types of differential expression in single-cell RNA-seq data.

    PubMed

    Miao, Zhun; Deng, Ke; Wang, Xiaowo; Zhang, Xuegong

    2018-04-24

    The excessive amount of zeros in single-cell RNA-seq data includes "real" zeros due to the on-off nature of gene transcription in single cells and "dropout" zeros due to technical reasons. Existing differential expression (DE) analysis methods cannot distinguish these two types of zeros. We developed an R package, DEsingle, which employs a Zero-Inflated Negative Binomial model to estimate the proportion of real and dropout zeros and to define and detect three types of DE genes in single-cell RNA-seq data with higher accuracy. The R package DEsingle is freely available at https://github.com/miaozhun/DEsingle and is currently under consideration by Bioconductor. zhangxg@tsinghua.edu.cn. Supplementary data are available at Bioinformatics online.
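The zero decomposition described here rests on a zero-inflated negative binomial (ZINB) mixture. The following is a hedged sketch of that mixture for integer dispersion, not the DEsingle implementation, and the parameters pi0, r, p are invented for illustration:

```python
from math import comb

def nb_pmf(k, r, p):
    """Negative binomial pmf for integer r: C(k+r-1, k) * p**r * (1-p)**k."""
    return comb(k + r - 1, k) * p**r * (1 - p)**k

def zinb_pmf(k, pi0, r, p):
    """ZINB mixture: a structural ("real") zero with probability pi0,
    otherwise a count drawn from NB(r, p), which can also be zero ("dropout")."""
    point_mass = pi0 if k == 0 else 0.0
    return point_mass + (1.0 - pi0) * nb_pmf(k, r, p)

pi0, r, p = 0.3, 2, 0.5                 # illustrative parameters, not fitted values
total_zero = zinb_pmf(0, pi0, r, p)     # 0.3 + 0.7 * 0.25 = 0.475
prob_structural = pi0 / total_zero      # share of observed zeros that are "real"
print(round(total_zero, 3), round(prob_structural, 4))
```

The last line is the quantity of interest: given fitted parameters, the posterior share of zeros attributable to the structural (on-off) component rather than to sampling dropout.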

  17. A New Ductility Exhaustion Model for High Temperature Low Cycle Fatigue Life Prediction of Turbine Disk Alloys

    NASA Astrophysics Data System (ADS)

    Zhu, Shun-Peng; Huang, Hong-Zhong; Li, Haiqing; Sun, Rui; Zuo, Ming J.

    2011-06-01

    Based on ductility exhaustion theory and the generalized energy-based damage parameter, a new viscosity-based life prediction model is introduced to account for the mean strain/stress effects in the low cycle fatigue regime. The loading waveform parameters and cyclic hardening effects are also incorporated within this model. It is assumed that damage accrues by means of viscous flow and ductility consumption is only related to plastic strain and creep strain under high temperature low cycle fatigue conditions. In the developed model, dynamic viscosity is used to describe the flow behavior. This model provides a better prediction of Superalloy GH4133's fatigue behavior when compared to Goswami's ductility model and the generalized damage parameter. Under non-zero mean strain conditions, moreover, the proposed model provides more accurate predictions of Superalloy GH4133's fatigue behavior than that with zero mean strains.

  18. Floating potential in electronegative plasmas for non-zero ion temperatures

    NASA Astrophysics Data System (ADS)

    Regodón, Guillermo Fernando; Fernández Palop, José Ignacio; Tejero-del-Caz, Antonio; Díaz-Cabrera, Juan Manuel; Carmona-Cabezas, Rafael; Ballesteros, Jerónimo

    2018-02-01

    The floating potential of a Langmuir probe immersed in an electronegative plasma is studied theoretically under the assumption of radial positive-ion fluid movement for non-zero positive-ion temperature; both cylindrical and spherical geometries are studied. The model is solvable exactly. The special characteristics of the electronegative pre-sheath are found, and the influence of the stratified electronegative pre-sheath is shown to be very small in practical applications. It is suggested that the use of the floating potential in the measurement of negative ion population density is convenient, in view of the numerical results obtained. The differences between the two radial geometries, which become very important for small probe radii of the order of magnitude of the Debye length, are studied.

  19. Non-fixation for Conservative Stochastic Dynamics on the Line

    NASA Astrophysics Data System (ADS)

    Basu, Riddhipratim; Ganguly, Shirshendu; Hoffman, Christopher

    2018-03-01

    We consider activated random walk (ARW), a model which generalizes the stochastic sandpile, one of the canonical examples of self-organized criticality. Informally, ARW is a particle system on Z with mass conservation. One starts with a mass density μ > 0 of initially active particles, each of which performs a symmetric random walk at rate one and falls asleep at rate λ > 0. Sleepy particles become active on coming in contact with other active particles. We investigate the question of fixation/non-fixation of the process and show that for small enough λ the critical mass density for fixation is strictly less than one. Moreover, the critical density goes to zero as λ tends to zero. This settles a long-standing open question.

  20. Angular Momentum Transport in Convectively Unstable Shear Flows

    NASA Astrophysics Data System (ADS)

    Käpylä, Petri J.; Brandenburg, Axel; Korpi, Maarit J.; Snellman, Jan E.; Narayan, Ramesh

    2010-08-01

    Angular momentum transport due to hydrodynamic turbulent convection is studied using local three-dimensional numerical simulations employing the shearing box approximation. We determine the turbulent viscosity from non-rotating runs over a range of values of the shear parameter and use a simple analytical model in order to extract the non-diffusive contribution (Λ-effect) to the stress in runs where rotation is included. Our results suggest that the turbulent viscosity is on the order of the mixing length estimate and weakly affected by rotation. The Λ-effect is non-zero and a factor of 2-4 smaller than the turbulent viscosity in the slow rotation regime. We demonstrate that for Keplerian shear, the angular momentum transport can change sign and be outward when the rotation period is greater than the turnover time, i.e., when the Coriolis number is below unity. This result seems to be relatively independent of the value of the Rayleigh number.

  1. Constraints on texture zero and cofactor zero models for neutrino mass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whisnant, K.; Liao, Jiajun; Marfatia, D.

    2014-06-24

    Imposing a texture or cofactor zero on the neutrino mass matrix reduces the number of independent parameters from nine to seven. Since five parameters have been measured, only two independent parameters would remain in such models. We find the allowed regions for single texture zero and single cofactor zero models. We also find strong similarities between single texture zero models with one mass hierarchy and single cofactor zero models with the opposite mass hierarchy. We show that this correspondence can be generalized to texture-zero and cofactor-zero models with the same homogeneous constraints on the elements and cofactors.

  2. eGSM: An Extended Sky Model of Diffuse Radio Emission

    NASA Astrophysics Data System (ADS)

    Kim, Doyeon; Liu, Adrian; Switzer, Eric

    2018-01-01

    Both cosmic microwave background and 21cm cosmology observations must contend with astrophysical foreground contaminants in the form of diffuse radio emission. For precise cosmological measurements, these foregrounds must be accurately modeled over the entire sky. Ideally, such full-sky models ought to be primarily motivated by observations. Yet in practice, these observations are limited, with data sets that are observed not only in a heterogeneous fashion, but also over limited frequency ranges. Previously, the Global Sky Model (GSM) took some steps towards solving the problem of incomplete observational data by interpolating over multi-frequency maps using principal component analysis (PCA). In this poster, we present an extended version of the GSM (called eGSM) that includes the following improvements: 1) better zero-level calibration; 2) incorporation of non-uniform survey resolutions and sky coverage; 3) the ability to quantify uncertainties in sky models; and 4) the ability to optimally select spectral models using Bayesian evidence techniques.
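The PCA interpolation idea credited to the GSM can be sketched on synthetic data: decompose a (frequency × pixel) matrix of maps into a few dominant spatial components, then rebuild a map at an unobserved frequency by interpolating the per-frequency component amplitudes. Everything below (frequencies, spectral indices, map values) is fabricated for illustration and has nothing to do with the real surveys:

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.array([45.0, 150.0, 408.0, 1420.0])   # MHz, illustrative survey set
npix = 500                                       # toy "sky" pixel count

# Synthetic truth: every pixel is a mix of two power-law spectral components.
comp = rng.standard_normal((2, npix))
weights = np.column_stack([(freqs / 408.0) ** -2.6,   # assumed spectral indices
                           (freqs / 408.0) ** -2.1])
maps = weights @ comp                            # (nfreq, npix) map matrix

# PCA via SVD; keep the two dominant spatial components.
U, S, Vt = np.linalg.svd(maps, full_matrices=False)
k = 2
basis = Vt[:k]                                   # principal sky components
amps = maps @ basis.T                            # per-frequency amplitudes

# Rebuild a map at an unobserved frequency by interpolating the amplitudes
# in log-frequency (700 MHz here, chosen arbitrarily).
logf = np.log10(freqs)
new_amp = np.array([np.interp(np.log10(700.0), logf, amps[:, i]) for i in range(k)])
new_map = new_amp @ basis
print(new_map.shape)  # (500,)
```

Because the synthetic maps have rank two, two components reconstruct them exactly; real surveys additionally require the resolution matching, zero-level calibration, and uncertainty quantification the poster lists.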

  3. Numerical formulation for the prediction of solid/liquid change of a binary alloy

    NASA Technical Reports Server (NTRS)

    Schneider, G. E.; Tiwari, S. N.

    1990-01-01

    A computational model is presented for the prediction of solid/liquid phase change energy transport including the influence of free convection fluid flow in the liquid phase region. The computational model considers the velocity components of all non-liquid phase change material control volumes to be zero but fully solves the coupled mass-momentum problem within the liquid region. The thermal energy model includes the entire domain and uses an enthalpy-like model and a recently developed method for handling the phase change interface nonlinearity. Convergence studies are performed and comparisons made with experimental data for two different problem specifications. The convergence studies indicate that grid independence was achieved and the comparison with experimental data indicates excellent quantitative prediction of the melt fraction evolution. Qualitative data are also provided in the form of velocity vector diagrams and isotherm plots for selected times in the evolution of both problems. The computational costs incurred are quite low by comparison with previous efforts on solving these problems.
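The enthalpy-like treatment of the phase-change nonlinearity can be sketched in one dimension with conduction only; the model above also couples free convection in the liquid, which is omitted here, and all material parameters and wall temperatures are illustrative assumptions:

```python
# 1D explicit enthalpy method: march enthalpy forward with conduction, then
# recover temperature through the latent-heat plateau at the melt temperature.
N, dx, dt = 40, 0.05, 0.001        # cells, spacing, time step (dt*k/dx^2 = 0.4 < 0.5)
k, c, L = 1.0, 1.0, 1.0            # conductivity, heat capacity, latent heat (assumed)

def temp(h):
    """Recover temperature from enthalpy across the latent-heat plateau."""
    if h < 0.0:
        return h / c               # solid branch
    if h < L:
        return 0.0                 # phase change: T pinned at T_melt = 0
    return (h - L) / c             # liquid branch

H = [-0.5 * c] * N                 # start fully solid at T = -0.5
for _ in range(4000):
    T = [temp(h) for h in H]
    T[0], T[-1] = 1.0, -0.5        # hot and cold wall temperatures (Dirichlet)
    dH = [dt * k / dx**2 * (T[i+1] - 2*T[i] + T[i-1]) for i in range(1, N-1)]
    for i in range(1, N - 1):
        H[i] += dH[i - 1]

melted = sum(h >= L for h in H)    # fully liquid cells; the front sits in between
print(melted)
```

The appeal of the enthalpy formulation is visible here: the interface is never tracked explicitly; it emerges as the band of cells whose enthalpy lies on the latent-heat plateau.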

  4. Time-resolved determination of the potential of zero charge at polycrystalline Au/ionic liquid interfaces

    NASA Astrophysics Data System (ADS)

    Vargas-Barbosa, Nella M.; Roling, Bernhard

    2018-05-01

    The potential of zero charge (PZC) is a fundamental property that describes the electrode/electrolyte interface. The determination of the PZC at electrode/ionic liquid interfaces has been challenging, due both to the lack of models that fully describe these complex interfaces and to the non-standardized approaches used to characterize them. In this work, we present a method that combines electrode immersion transient and impedance measurements for the determination of the PZC. This combined approach allows the distinction of the potential of zero free charge (pzfc), related to fast double-layer charging on a millisecond timescale, from a potential of zero charge on a timescale of tens of seconds, related to slower ion transport processes at the interface. Our method highlights the complementarity of these electrochemical techniques and the importance of selecting the correct timescale to execute experiments and interpret the results.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakur, Pradeep; Durganandini, P.

    We study the spin-1/2 XX model in the presence of three-spin interactions of the XZX+YZY and XZY-YZX types. We solve the problem exactly and show that there is both finite magnetization and electric polarization for low non-zero strengths of the three-spin interactions.

  6. Ramjets: Airframe Integration

    DTIC Science & Technology

    2010-09-01

    nozzle • Brayton (or Joule) cycle: combustion at constant pressure at non-zero velocity The combustion process is modelled by means of adding heat to...against aerodynamic heating Aerodynamic heating calculations are based on: • Taylor-Maccoll method for compressible inviscid cone flow • Reynolds

  7. A Pole-Zero Filter Cascade Provides Good Fits to Human Masking Data and to Basilar Membrane and Neural Data

    NASA Astrophysics Data System (ADS)

    Lyon, Richard F.

    2011-11-01

    A cascade of two-pole-two-zero filters with level-dependent pole and zero dampings, with few parameters, can provide a good match to human psychophysical and physiological data. The model has been fitted to data on detection threshold for tones in notched-noise masking, including bandwidth and filter shape changes over a wide range of levels, and has been shown to provide better fits with fewer parameters compared to other auditory filter models such as gammachirps. Originally motivated as an efficient machine implementation of auditory filtering related to the WKB analysis method of cochlear wave propagation, such filter cascades also provide good fits to mechanical basilar membrane data, and to auditory nerve data, including linear low-frequency tail response, level-dependent peak gain, sharp tuning curves, nonlinear compression curves, level-independent zero-crossing times in the impulse response, realistic instantaneous frequency glides, and appropriate level-dependent group delay even with minimum-phase response. As part of exploring different level-dependent parameterizations of such filter cascades, we have identified a simple sufficient condition for stable zero-crossing times, based on the shifting property of the Laplace transform: simply move all the s-domain poles and zeros by equal amounts in the real-s direction. Such pole-zero filter cascades are efficient front ends for machine hearing applications, such as music information retrieval, content identification, speech recognition, and sound indexing.
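
    The "shifting property" invoked above is easy to check numerically: moving a resonator's poles by an equal amount in the real-s direction multiplies the impulse response by a real exponential envelope, which cannot move its sign changes. A minimal numpy sketch (function names and parameter values are ours, for illustration only):

```python
import numpy as np

def impulse_response(sigma, omega, t):
    """Impulse response of a two-pole resonator with poles at -sigma +/- j*omega."""
    return np.exp(-sigma * t) * np.sin(omega * t)

def zero_crossings(h, t):
    """Sign-change times of h, refined by linear interpolation."""
    s = np.sign(h)
    i = np.where(s[:-1] * s[1:] < 0)[0]
    return t[i] - h[i] * (t[i + 1] - t[i]) / (h[i + 1] - h[i])

t = np.linspace(1e-6, 0.02, 20001)          # 20 ms, finely sampled
omega = 2 * np.pi * 1000.0                  # 1 kHz resonance
zc_light = zero_crossings(impulse_response(200.0, omega, t), t)
zc_heavy = zero_crossings(impulse_response(800.0, omega, t), t)  # poles shifted by -600 in real-s

# The damping changed fourfold, yet the zero-crossing times are unchanged.
assert np.allclose(zc_light, zc_heavy, atol=1e-7)
```

    The same argument applies to any pole-zero pattern shifted rigidly along the real-s axis, which is what makes the condition a convenient design rule for level-dependent filter cascades.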

  8. Environmental monitoring: data trending using a frequency model.

    PubMed

    Caputo, Ross A; Huffman, Anne

    2004-01-01

    Environmental monitoring programs for the oversight of classified environments have used traditional statistical control charts to monitor trends in microbial recovery. These methodologies work well for environments that yield measurable microbial recoveries. However, today's increased control of microbial content yields numerous instances where the microbial recovery in a sample is zero. As a result, traditional control chart methods cannot be applied appropriately. Two methods to monitor the performance of a classified environment where microbial recovery is typically zero are presented. Both methods treat each non-zero microbial recovery as an event and track the interval between events; the frequency of events is therefore monitored rather than the microbial recovery count. Both methods are shown to be appropriate for use in the described instances.
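
    The event-frequency idea can be sketched as a geometric-distribution ("g-chart") control scheme: if each sample independently shows a non-zero recovery with probability p, the number of clean samples between successive events is geometric, and probability-based limits can be set on that gap. This is our illustrative reading of the approach, not the authors' exact procedure; in practice p would be estimated from historical data.

```python
def g_chart_limits(p, alpha=0.0027):
    """Probability limits for the gap (number of zero-recovery samples)
    between successive non-zero recoveries, modeled as geometric with
    event probability p.  A gap below lcl suggests events have become
    too frequent, i.e. the contamination rate may have increased."""
    # Geometric CDF: P(gap <= k) = 1 - (1 - p)**(k + 1), for k = 0, 1, 2, ...
    lcl = 0
    while 1 - (1 - p) ** (lcl + 1) < alpha / 2:
        lcl += 1
    ucl = 0
    while 1 - (1 - p) ** (ucl + 1) < 1 - alpha / 2:
        ucl += 1
    return lcl, ucl

# Example: on average 1 sample in 50 shows a non-zero recovery.
lcl, ucl = g_chart_limits(0.02)   # -> (0, 327)
```

    As is typical for rare-event charts, the lower limit degenerates to zero; the actionable signal is a run of events arriving much closer together than the upper-tail gap would suggest.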

  9. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
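
    The data-generating process behind a ZINB model is simple to simulate; the numpy sketch below (ours, not the paper's mpath/EM machinery) shows how structural zeros inflate the zero mass beyond what the negative binomial component alone predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_zinb(n, pi_zero, size_r, mu, rng):
    """Zero-inflated negative binomial draws: with probability pi_zero emit a
    structural zero, otherwise draw NB with dispersion size_r and mean mu."""
    p = size_r / (size_r + mu)                 # numpy's NB(n, p) parameterization
    counts = rng.negative_binomial(size_r, p, size=n)
    counts[rng.random(n) < pi_zero] = 0        # overwrite with structural zeros
    return counts

n, pi_zero, size_r, mu = 200_000, 0.3, 2.0, 4.0
y = simulate_zinb(n, pi_zero, size_r, mu, rng)

# Zero probability implied by the mixture: pi + (1 - pi) * P_NB(0).
p0_nb = (size_r / (size_r + mu)) ** size_r
p0_zinb = pi_zero + (1 - pi_zero) * p0_nb      # = 0.3 + 0.7 * (1/3)**2
```

    Fitting such data with a plain NB model mis-attributes the structural zeros to overdispersion, which is why the two components are modeled (and penalized) separately in the paper.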

  10. Item Response Modeling of Multivariate Count Data with Zero Inflation, Maximum Inflation, and Heaping

    ERIC Educational Resources Information Center

    Magnus, Brooke E.; Thissen, David

    2017-01-01

    Questionnaires that include items eliciting count responses are becoming increasingly common in psychology. This study proposes methodological techniques to overcome some of the challenges associated with analyzing multivariate item response data that exhibit zero inflation, maximum inflation, and heaping at preferred digits. The modeling…

  11. The Effects of Intrinsic Noise on an Inhomogeneous Lattice of Chemical Oscillators

    NASA Astrophysics Data System (ADS)

    Giver, Michael; Jabeen, Zahera; Chakraborty, Bulbul

    2012-02-01

    Intrinsic or demographic noise has been shown to play an important role in the dynamics of a variety of systems, including biochemical reactions within cells, predator-prey populations, and oscillatory chemical reaction systems, and is known to give rise to oscillations and pattern formation well outside the parameter range predicted by standard mean-field analysis. Motivated by an experimental model of cells and tissues where the cells are represented by chemical reagents isolated in emulsion droplets, we study the stochastic Brusselator, a simple activator-inhibitor chemical reaction model. Our work extends the results of recent studies on the zero- and one-dimensional systems to the case of a non-uniform one-dimensional lattice, using a combination of analytical techniques and Monte Carlo simulations.
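
    The stochastic dynamics of a single droplet (one lattice site) of the kind described are conventionally simulated with Gillespie's algorithm. The sketch below is a generic illustration, with rate constants and system size chosen by us rather than taken from the study:

```python
import numpy as np

def gillespie_brusselator(a, b, x0, y0, t_end, rng, volume=100.0):
    """Gillespie simulation of the stochastic Brusselator:
         0 -> X        (propensity a * V)
         X -> Y        (propensity b * x)
         2X + Y -> 3X  (propensity x * (x - 1) * y / V**2)
         X -> 0        (propensity x)
    Returns event times and the (x, y) state after each event."""
    t, x, y = 0.0, x0, y0
    times, states = [t], [(x, y)]
    while t < t_end:
        props = np.array([a * volume, b * x, x * (x - 1) * y / volume**2, x])
        total = props.sum()
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)          # waiting time to next reaction
        r = rng.choice(4, p=props / total)         # which reaction fires
        if r == 0:
            x += 1
        elif r == 1:
            x, y = x - 1, y + 1
        elif r == 2:
            x, y = x + 1, y - 1
        else:
            x -= 1
        times.append(t)
        states.append((x, y))
    return times, states

rng = np.random.default_rng(1)
times, states = gillespie_brusselator(a=1.0, b=2.0, x0=100, y0=200, t_end=20.0, rng=rng)
```

    Coupling many such sites with species-exchange reactions between neighbors gives the lattice version studied in the abstract; the propensities guarantee that populations never go negative.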

  12. An accurate evaluation of the performance of asynchronous DS-CDMA systems with zero-correlation-zone coding in Rayleigh fading

    NASA Astrophysics Data System (ADS)

    Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.

    2010-04-01

    An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance for generalized asynchronous DS-CDMA systems, in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work is contributed which substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; and an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions, with non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system modeling was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding, with minimal computational complexity.
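
    The Walsh-Hadamard codes referred to can be generated by the standard Sylvester recursion. The short sketch below verifies orthogonality and that the rows' sign-change counts (sequencies) span 0 to N - 1, consistent with the spread of spectral spreading values noted above; it does not implement the ZCZ seeding.

```python
import numpy as np

def sylvester_hadamard(N):
    """Sylvester construction H_{2n} = [[H, H], [H, -H]]; N must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < N:
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(8)

# Rows are mutually orthogonal spreading codes: H @ H.T = N * I.
assert np.array_equal(H @ H.T, 8 * np.eye(8, dtype=int))

# Each row has a distinct number of sign changes, spanning 0..N-1.
sequencies = sorted(int(np.sum(row[:-1] != row[1:])) for row in H)
assert sequencies == list(range(8))
```
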

  13. 40 CFR 86.1333 - Transient test cycle generation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... zero percent speeds, zero percent torque points, but may be engaged up to two points preceding a non-zero point, and may be engaged for time segments with zero percent speed and torque points of durations...

  14. Spin zero Hawking radiation for non-zero-angular momentum mode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ngampitipan, Tritos; Bonserm, Petarpa; Visser, Matt

    2015-05-15

    Black hole greybody factors carry some quantum black hole information. Studying greybody factors may lead to understanding the quantum nature of black holes. However, solving for exact greybody factors in many black hole systems is impossible. One way to deal with this problem is to place rigorous analytic bounds on the greybody factors. In this paper, we calculate rigorous bounds on the greybody factors for spin zero Hawking radiation for the non-zero-angular momentum mode from Kerr-Newman black holes.

  15. Solving moment hierarchies for chemical reaction networks

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, Supriya; Smith, Eric

    2017-10-01

    The study of chemical reaction networks (CRN’s) is a very active field. Earlier well-known results (Feinberg 1987 Chem. Eng. Sci. 42 2229, Anderson et al 2010 Bull. Math. Biol. 72 1947) identify a topological quantity called deficiency, for any CRN, which, when exactly equal to zero, leads to a unique factorized steady-state for these networks. No results exist, however, for the steady states of non-zero-deficiency networks. In this paper, we show how to write the full moment-hierarchy for any non-zero-deficiency CRN obeying mass-action kinetics, in terms of equations for the factorial moments. Using these, we can recursively predict values for lower moments from higher moments, reversing the procedure usually used to solve moment hierarchies. We show, for non-trivial examples, that in this manner we can predict any moment of interest, for CRN’s with non-zero deficiency and non-factorizable steady states.
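
    The deficiency mentioned above is directly computable as δ = n − ℓ − s, where n is the number of distinct complexes, ℓ the number of linkage classes, and s the rank of the stoichiometric subspace. A small self-contained sketch (function names and example networks are ours):

```python
import numpy as np

def deficiency(reactions):
    """Deficiency delta = n - l - s of a reaction network: n = number of
    distinct complexes, l = number of linkage classes (connected components
    of the complex graph), s = rank of the stoichiometric subspace.
    Each reaction is a (reactant, product) pair of complexes, and each
    complex is a dict {species: coefficient}."""
    complexes = []
    def idx(c):
        key = tuple(sorted(c.items()))
        if key not in complexes:
            complexes.append(key)
        return complexes.index(key)

    edges = [(idx(r), idx(p)) for r, p in reactions]
    n = len(complexes)

    # Linkage classes: connected components via union-find.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in edges:
        parent[find(a)] = find(b)
    l = len({find(i) for i in range(n)})

    # Stoichiometric subspace: span of product-minus-reactant vectors.
    species = sorted({sp for r, p in reactions for sp in {**r, **p}})
    vecs = [[p.get(sp, 0) - r.get(sp, 0) for sp in species] for r, p in reactions]
    s = np.linalg.matrix_rank(np.array(vecs))
    return n - l - s

# A <-> B: deficiency zero (the factorized steady-state result applies).
assert deficiency([({'A': 1}, {'B': 1}), ({'B': 1}, {'A': 1})]) == 0
# A <-> B together with A + B <-> 2B: deficiency one.
rxns = [({'A': 1}, {'B': 1}), ({'B': 1}, {'A': 1}),
        ({'A': 1, 'B': 1}, {'B': 2}), ({'B': 2}, {'A': 1, 'B': 1})]
assert deficiency(rxns) == 1
```

    Networks like the second one, with non-zero deficiency, are exactly the cases the paper's factorial-moment hierarchy is designed to address.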

  16. Existence of standard models of conic fibrations over non-algebraically-closed fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avilov, A A

    2014-12-31

    We prove an analogue of Sarkisov's theorem on the existence of a standard model of a conic fibration over an algebraically closed field of characteristic different from two for three-dimensional conic fibrations over an arbitrary field of characteristic zero with an action of a finite group. Bibliography: 16 titles.

  17. Can compactifications solve the cosmological constant problem?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertzberg, Mark P.; Center for Theoretical Physics, Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139; Masoumi, Ali

    2016-06-30

    Recently, there have been claims in the literature that the cosmological constant problem can be dynamically solved by specific compactifications of gravity from higher-dimensional toy models. These models have the novel feature that in the four-dimensional theory, the cosmological constant Λ is much smaller than the Planck density and in fact accumulates at Λ=0. Here we show that while these are very interesting models, they do not properly address the real cosmological constant problem. As we explain, the real problem is not simply to obtain Λ that is small in Planck units in a toy model, but to explain why Λ is much smaller than other mass scales (and combinations of scales) in the theory. Instead, in these toy models, all other particle mass scales have been either removed or sent to zero, thus ignoring the real problem. To this end, we provide a general argument that the included moduli masses are generically of order Hubble, so sending them to zero trivially sends the cosmological constant to zero. We also show that the fundamental Planck mass is being sent to zero, and so the central problem is trivially avoided by removing high energy physics altogether. On the other hand, by including various large mass scales from particle physics with a high fundamental Planck mass, one is faced with a real problem, whose only known solution involves accidental cancellations in a landscape.

  18. A review on models for count data with extra zeros

    NASA Astrophysics Data System (ADS)

    Zamri, Nik Sarah Nik; Zamzuri, Zamira Hasanah

    2017-04-01

    Zero inflated models are typically used in modelling count data with excess zeros. The extra zeros could be structural zeros or random zeros which occur by chance. These types of data are commonly found in various disciplines such as finance, insurance, biomedicine, econometrics, ecology, and health sciences. As found in the literature, the most popular zero inflated models are the zero inflated Poisson and the zero inflated negative binomial. Recently, more complex models have been developed to account for overdispersion and unobserved heterogeneity. In addition, more extended distributions are also considered in modelling data with this feature. In this paper, we review the related literature and provide a recent development and summary on models for count data with extra zeros.
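
    For reference, the zero inflated Poisson (ZIP) reviewed here mixes a point mass at zero with a Poisson count, so that P(Y = 0) = π + (1 − π)e^{−λ} while remaining a proper distribution. A few lines verify the bookkeeping (parameter values are arbitrary):

```python
from math import exp, factorial

def zip_pmf(k, lam, pi0):
    """Zero-inflated Poisson: structural zero with probability pi0,
    otherwise Poisson(lam).  P(0) = pi0 + (1 - pi0) * exp(-lam)."""
    poisson = exp(-lam) * lam**k / factorial(k)
    return (pi0 if k == 0 else 0.0) + (1 - pi0) * poisson

lam, pi0 = 2.5, 0.4
total = sum(zip_pmf(k, lam, pi0) for k in range(100))   # sums to 1 (up to a tiny tail)
excess = zip_pmf(0, lam, pi0) - exp(-lam)               # zero mass added by inflation
```

    The "excess" term, π(1 − e^{−λ}), is exactly the structural-zero mass that distinguishes the zero inflated model from its plain Poisson counterpart.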

  19. Measurement of Valley Kondo Effect in a Si/SiGe Quantum Dot

    NASA Astrophysics Data System (ADS)

    Yuan, Mingyun; Yang, Zhen; Tang, Chunyang; Rimberg, A. J.; Joynt, R.; Savage, D. E.; Lagally, M. G.; Eriksson, M. A.

    2013-03-01

    The Kondo effect in Si/SiGe QDs can be enriched by the valley degree of freedom in Si. We have observed resonances showing temperature dependence characteristic of the Kondo effect in two consecutive Coulomb diamonds. These resonances exhibit unusual magnetic field dependence that we interpret as arising from Kondo screening of the valley degree of freedom. In one diamond two Kondo peaks due to screening of the valley index exist at zero magnetic field, revealing a zero-field valley splitting of Δ ~ 0.28 meV. In a non-zero magnetic field the peaks broaden and coalesce due to Zeeman splitting. In the other diamond, a single resonance at zero bias persists without Zeeman splitting for non-zero magnetic field, a phenomenon characteristic of valley non-conservation in tunneling. This research is supported by the NSA and ARO.

  20. Analytical study of mode degeneracy in non-Hermitian photonic crystals with TM-like polarization

    NASA Astrophysics Data System (ADS)

    Yin, Xuefan; Liang, Yong; Ni, Liangfu; Wang, Zhixin; Peng, Chao; Li, Zhengbin

    2017-08-01

    We present a study of the mode degeneracy in non-Hermitian photonic crystals (PCs) with TM-like polarization and C4v symmetry from the perspective of the coupled-wave theory (CWT). The CWT framework is extended to include TE-TM coupling terms which are critical for modeling the accidental triple degeneracy within non-Hermitian PC systems. We derive the analytical form of the wave function and the condition for Dirac-like-cone dispersion when radiation loss is relatively small. We find that, similar to a real Dirac cone, the Dirac-like cone in non-Hermitian PCs possesses good linearity and isotropy, even with a ring of exceptional points (EPs) inevitably existing in the vicinity of the second-order Γ point. However, the Berry phase remains zero at the Γ point, indicating that the cone does not obey the Dirac equation and is only a Dirac-like cone. The topological modal interchange phenomenon and the nonzero Berry phase of the EPs are also discussed.

  1. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller.

    PubMed

    Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

    2014-06-01

    Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language models and (d) dynamic stopping. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influences of the involved components (a)-(d) are investigated. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.

  2. Integrating dynamic stopping, transfer learning and language models in an adaptive zero-training ERP speller

    NASA Astrophysics Data System (ADS)

    Kindermans, Pieter-Jan; Tangermann, Michael; Müller, Klaus-Robert; Schrauwen, Benjamin

    2014-06-01

    Objective. Most BCIs have to undergo a calibration session in which data is recorded to train decoders with machine learning. Only recently have zero-training methods become a subject of study. This work proposes a probabilistic framework for BCI applications which exploit event-related potentials (ERPs). For the example of a visual P300 speller we show how the framework harvests the structure suitable to solve the decoding task by (a) transfer learning, (b) unsupervised adaptation, (c) language models and (d) dynamic stopping. Approach. A simulation study compares the proposed probabilistic zero-training framework (using transfer learning and task structure) to a state-of-the-art supervised model on n = 22 subjects. The individual influences of the involved components (a)-(d) are investigated. Main results. Without any need for a calibration session, the probabilistic zero-training framework with inter-subject transfer learning shows excellent performance, competitive with a state-of-the-art supervised method using calibration. Its decoding quality is carried mainly by the effect of transfer learning in combination with continuous unsupervised adaptation. Significance. A high-performing zero-training BCI is within reach for one of the most popular BCI paradigms: ERP spelling. Recording calibration data for a supervised BCI would require valuable time which is lost for spelling. The time spent on calibration would allow a novel user to spell 29 symbols with our unsupervised approach. It could be of use for various clinical and non-clinical ERP applications of BCI.

  3. Cosmological parameter estimation from CMB and X-ray cluster after Planck

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jian-Wei; Cai, Rong-Gen; Guo, Zong-Kuan

    We investigate constraints on cosmological parameters in three 8-parameter models with the summed neutrino mass as a free parameter, by a joint analysis of CCCP X-ray cluster data, the newly released Planck CMB data, as well as some external data sets including baryon acoustic oscillation measurements from the 6dFGS, SDSS DR7 and BOSS DR9 surveys, and the Hubble Space Telescope H_0 measurement. We find that the combined data strongly favor non-zero neutrino masses at more than 3σ confidence level in these non-vanilla models. Allowing the CMB lensing amplitude A_L to vary, we find A_L > 1 at 3σ confidence level. For dark energy with a constant equation of state w, we obtain w < −1 at 3σ confidence level. The estimate of the matter power spectrum amplitude σ_8 is discrepant with the Planck value at 2σ confidence level, which reflects some tension between X-ray cluster data and Planck data in these non-vanilla models. The tension can be alleviated by adding a 9% systematic shift in the cluster mass function.

  4. Application of Parametrized Post-Newtonian Methods to the Gravitational IS of Satellite Energy Exchange Data

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.

    1998-01-01

    Project Satellite Energy Exchange (SEE) is a free-flying, high-altitude satellite that utilizes space to construct a passive, low-temperature, nano-g environment in order to accurately measure the poorly known gravitational constant G plus other gravitational parameters that are difficult to measure in an earth-based laboratory. Eventually data received from SEE must be analyzed using a model of the gravitational interaction including parameters that describe deviations from general relativity and experiment. One model that can be used to fit the data is the Parametrized post-Newtonian (PPN) approximation of general relativity (GR), which introduces ten parameters that have specified values in GR. It is the lowest-order, consistent approximation that contains nonlinear terms. General relativity predicts that the Robertson parameters, gamma (light deflection) and beta (advance of the perihelion), are both 1. Another eight parameters, alpha_k, k=1,2,3, zeta_k, k=1,2,3,4, and Xi, are all zero in GR. Non-zero values for the alpha_k parameters predict preferred-frame effects; for zeta_k, violations of globally conserved quantities such as mass, momentum and angular momentum; and for Xi, a contribution from the Whitehead theory of gravitation, once thought to be equivalent to GR. In addition, there is the possibility that there may be a preferred frame for the universe. If such a frame exists, then all observers must measure the velocity omega of their motion with respect to this universal rest frame. Such a frame is somewhat reminiscent of the concept of the ether, which was supposedly the frame in which the velocity of light took the value c predicted by special relativity. The SEE mission can also look for deviations from the r^(-2) law of Newtonian gravity, adding parameters alpha and lambda for non-Newtonian behavior that describe the magnitude and range of the r^(-2) deviations, respectively.
The foundations of GR supposedly agree with Newtonian gravity to first order, so that the parameters alpha and lambda are zero in GR. More important, however, GR subsequently depends on this Newtonian approximation to build up the nonlinear higher-order terms which form the basis of the PPN framework.

  5. Electroweak phase transition and entropy release in the early universe

    NASA Astrophysics Data System (ADS)

    Chaudhuri, A.; Dolgov, A.

    2018-01-01

    It is shown that the vacuum-like energy of the Higgs potential at non-zero temperatures leads, in the course of the cosmological expansion, to a small but non-negligible rise of the entropy density in the comoving volume. This increase is calculated in the framework of the minimal standard model. The result can have a noticeable effect on the outcome of baryo-through-leptogenesis.

  6. Compilation of basal metabolic and blood perfusion rates in various multi-compartment, whole-body thermoregulation models

    NASA Astrophysics Data System (ADS)

    Shitzer, Avraham; Arens, Edward; Zhang, Hui

    2016-07-01

    The assignments of basal metabolic rates (BMR), basal cardiac output (BCO), and basal blood perfusion rates (BBPR) were compared in nine multi-compartment, whole-body thermoregulation models. The data are presented at three levels of detail: total body, specific body regions, and regional body tissue layers. Differences in the assignment of these quantities among the compared models increased with the level of detail, in the above order. The range of variability in the total body BMR was 6.5 % relative to the lowest value, with a mean of 84.3 ± 2 W, and in the BCO it was 8 %, with a mean of 4.70 ± 0.13 l/min. The least variability among the body regions is seen in the combined torso (shoulders, thorax, and abdomen: ±7.8 % BMR and ±5.9 % BBPR) and in the combined head (head, face, and neck: ±9.9 % BMR and ±10.9 % BBPR), as determined by the ratio of the standard deviation to the mean. Much more variability is apparent in the extremities, with the largest found in the BMR of the feet (±117 %), followed by the BBPR in the arms (±61.3 %). In the tissue layers, most of the bone layers were assigned zero BMR and BBPR, except in the shoulders and in the extremities, which were assigned non-zero values in a number of models. The next lowest values were assigned to the fat layers, with occasional zero values. Skin basal values were invariably non-zero but involved very low values in certain models, e.g., BBPR in the feet and the hands. Muscle layers were invariably assigned high values, with the highest found in the thorax, abdomen, and legs. The brain, lung, and viscera layers were assigned the highest of all values of both basal quantities, with those of the brain layers showing rather tight ranges of variability in both basal quantities. Average basal values of the "time-seasoned" models presented in this study could be useful as a first step in future modeling efforts, subject to appropriate adjustment of values to conform to the most recently available and reliable data.

  7. On the role of radiation and dimensionality in predicting flow opposed flame spread over thin fuels

    NASA Astrophysics Data System (ADS)

    Kumar, Chenthil; Kumar, Amit

    2012-06-01

    In this work a flame-spread model is formulated in three dimensions to simulate opposed-flow flame spread over thin solid fuels. The flame-spread model is coupled to a three-dimensional gas radiation model. The experiments [1] on downward spread and zero-gravity quiescent spread over finite-width thin fuel are simulated by flame-spread models in both two and three dimensions to assess the role of radiation and the effect of dimensionality on the prediction of the flame-spread phenomena. It is observed that while radiation plays only a minor role in normal-gravity downward spread, in zero-gravity quiescent spread surface radiation loss holds the key to correct prediction of the low-oxygen flame spread rate and quenching limit. The present three-dimensional simulations show that even in zero gravity gas radiation affects the flame spread rate only moderately (as much as 20% at 100% oxygen), as the heat feedback effect exceeds the radiation loss effect only moderately. However, the two-dimensional model with the gas radiation model substantially over-predicts the zero-gravity flame spread rate due to underestimation of the gas radiation loss to the ambient surroundings. The two-dimensional model was also found to be inadequate for predicting the zero-gravity flame attributes, like the flame length and the flame width, correctly. A three-dimensional model was found to be indispensable for consistently describing the zero-gravity flame-spread experiments [1] (including flame spread rate and flame size), especially at high oxygen levels (>30%). On the other hand, it was observed that for normal-gravity downward flame spread at oxygen levels up to 60%, the two-dimensional model was sufficient to predict flame spread rate and flame size reasonably well. Gas radiation is seen to increase the three-dimensional effect, especially at elevated oxygen levels (>30% for zero-gravity and >60% for normal-gravity flames).

  8. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off which excludes from consideration differentiation below diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases to sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs but, unlike Isler et al. (1998), does not control adequately for sample size and attributes scores in many cases to differentiation which is not statistically significant. Four different models for assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations.
    This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. The distance between any two populations is calculated using Euclidean summation of the non-zeroed effect size scores. If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and is being made available which allows this and the other tests of differentiation and rank studied in this paper to be rapidly analyzed. PMID:29780266
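
    The scoring recipe can be sketched in a few lines. This is our hedged reading of it (the exact t-distribution small-sample correction is not reproduced, and the variable names and data are invented for illustration): compute an unpooled effect size per variable, zero it when a Welch t-test is non-significant, then combine the survivors by Euclidean summation.

```python
import numpy as np
from scipy import stats

def pairwise_distance(pop1, pop2, alpha=0.05):
    """Euclidean summation of per-variable effect sizes, with
    non-significant variables scored as zero.  pop1/pop2 are
    dicts mapping variable name -> 1-D sample array."""
    scores = []
    for var in pop1:
        a, b = np.asarray(pop1[var]), np.asarray(pop2[var])
        sd_unpooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
        effect = abs(a.mean() - b.mean()) / sd_unpooled
        _, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
        scores.append(effect if p < alpha else 0.0)     # significance gateway
    return float(np.sqrt(np.sum(np.square(scores))))

rng = np.random.default_rng(7)
pop_a = {'wing_mm': rng.normal(60.0, 2.0, 30), 'song_khz': rng.normal(4.00, 0.5, 30)}
pop_b = {'wing_mm': rng.normal(63.0, 2.0, 30), 'song_khz': rng.normal(4.05, 0.5, 30)}
score = pairwise_distance(pop_a, pop_b)   # driven almost entirely by wing length
```

    Under the proposed rule, this score for an allopatric pair would then be compared against the score of a related sympatric pair to decide between species and subspecies rank.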

  9. Rotating gravity currents. Part 1. Energy loss theory

    NASA Astrophysics Data System (ADS)

    Martin, J. R.; Lane-Serff, G. F.

    2005-01-01

    A comprehensive energy loss theory for gravity currents in rotating rectangular channels is presented. The model is an extension of the non-rotating energy loss theory of Benjamin (J. Fluid Mech. vol. 31, 1968, p. 209) and the steady-state dissipationless theory of rotating gravity currents of Hacker (PhD thesis, 1996). The theory assumes the fluid is inviscid, there is no shear within the current, and the Boussinesq approximation is made. Dissipation is introduced using a simple method. A head loss term is introduced into the Bernoulli equation and it is assumed that the energy loss is uniform across the stream. Conservation of momentum, volume flux and potential vorticity between upstream and downstream locations is then considered. By allowing for energy dissipation, results are obtained for channels of arbitrary depth and width (relative to the current). The results match those from earlier workers in the two limits of (i) zero rotation (but including dissipation) and (ii) zero dissipation (but including rotation). Three types of flow are identified as the effect of rotation increases, characterized in terms of the location of the outcropping interface between the gravity current and the ambient fluid on the channel boundaries. The parameters for transitions between these cases are quantified, as is the detailed behaviour of the flow in all cases. In particular, the speed of the current can be predicted for any given channel depth and width. As the channel depth increases, the predicted Froude number tends to √2, as for non-rotating flows.

  10. Improving accuracy of electrochemical capacitance and solvation energetics in first-principles calculations

    NASA Astrophysics Data System (ADS)

    Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.

    2018-04-01

    Reliable first-principles calculations of electrochemical processes require accurate prediction of the interfacial capacitance, a challenge for current computationally efficient continuum solvation methodologies. We develop a model for the double layer of a metallic electrode that reproduces the features of the experimental capacitance of Ag(100) in a non-adsorbing, aqueous electrolyte, including a broad hump in the capacitance near the potential of zero charge and a dip in the capacitance under conditions of low ionic strength. Using this model, we identify the necessary characteristics of a solvation model suitable for first-principles electrochemistry of metal surfaces in non-adsorbing, aqueous electrolytes: dielectric and ionic nonlinearity, and a dielectric-only region at the interface. The dielectric nonlinearity, caused by the saturation of dipole rotational response in water, creates the capacitance hump, while ionic nonlinearity, caused by the compactness of the diffuse layer, generates the capacitance dip seen at low ionic strength. We show that none of the previously developed solvation models simultaneously meet all these criteria. We design the nonlinear electrochemical soft-sphere solvation model which both captures the capacitance features observed experimentally and serves as a general-purpose continuum solvation model.

  11. Coupled three-layer model for turbulent flow over large-scale roughness: On the hydrodynamics of boulder-bed streams

    NASA Astrophysics Data System (ADS)

    Pan, Wen-hao; Liu, Shi-he; Huang, Li

    2018-02-01

This study developed a three-layer velocity model for turbulent flow over large-scale roughness. Through theoretical analysis, this model coupled both surface and subsurface flow. Flume experiments with a flat cobble bed were conducted to examine the theoretical model. Results show that both the turbulent flow field and the total flow characteristics are quite different from those in low-gradient flow over microscale roughness. The velocity profile in a shallow stream converges to the logarithmic law away from the bed, while inflecting over the roughness layer toward the non-zero subsurface flow. The velocity fluctuations close to a cobble bed differ from those over a sand bed, showing no sufficiently large peak velocity. The total flow energy loss deviates significantly from the 1/7 power law equation when the relative flow depth is shallow. Both the coupled model and experiments indicate non-negligible subsurface flow that accounts for a considerable proportion of the total flow. By including the subsurface flow, the coupled model is able to predict a wider range of velocity profiles and total flow energy loss coefficients when compared with existing equations.

  12. A non-invasive diffuse reflectance calibration-free method for absolute determination of exogenous biochemicals concentration in biological tissues

    NASA Astrophysics Data System (ADS)

    Lappa, Alexander V.; Kulikovskiy, Artem N.; Busarov, Oleg G.

    2014-03-01

The paper presents a new method for distant non-destructive determination of the concentration of light-absorbing admixtures in turbid media. In particular, it is intended for non-invasive in vivo control of the accumulation in patient tissues of various biochemicals introduced to patients for chemotherapy, photodynamic therapy or diagnostics. It is required that the admixture absorption spectrum have a clearly marked peak in the wavelength region where the pure medium one varies regularly. Fluorescence of admixtures is not required. The method uses local diffuse reflectance spectroscopy with an optical fiber probe including one emitting and two reading fibers. There are several features in the method: the value to be determined is the absolute concentration of admixtures; the method needs no calibration measurements on phantoms; it needs no reference measurements on a sample with zero admixture concentration; it uses a two-parameter kinetic light propagation model and original algorithms to solve the direct and inverse problems of radiation transport theory. Experimental testing with tissue-equivalent phantoms and different admixtures, including a chlorin photosensitizer, showed accuracy under 10% in all cases.

  13. Comparing Exponential and Exponentiated Models of Drug Demand in Cocaine Users

    PubMed Central

    Strickland, Justin C.; Lile, Joshua A.; Rush, Craig R.; Stoops, William W.

    2016-01-01

Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model, but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use), whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values impacts demand parameters and their association with drug-use outcomes when using the exponential model, but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency, in addition to demonstrating construct validity and generalizability. PMID:27929347

  14. Comparing exponential and exponentiated models of drug demand in cocaine users.

    PubMed

    Strickland, Justin C; Lile, Joshua A; Rush, Craig R; Stoops, William W

    2016-12-01

    Drug purchase tasks provide rapid and efficient measurement of drug demand. Zero values (i.e., prices with zero consumption) present a quantitative challenge when using exponential demand models that exponentiated models may resolve. We aimed to replicate and advance the utility of using an exponentiated model by demonstrating construct validity (i.e., association with real-world drug use) and generalizability across drug commodities. Participants (N = 40 cocaine-using adults) completed Cocaine, Alcohol, and Cigarette Purchase Tasks evaluating hypothetical consumption across changes in price. Exponentiated and exponential models were fit to these data using different treatments of zero consumption values, including retaining zeros or replacing them with 0.1, 0.01, or 0.001. Excellent model fits were observed with the exponentiated model. Means and precision fluctuated with different replacement values when using the exponential model but were consistent for the exponentiated model. The exponentiated model provided the strongest correlation between derived demand intensity (Q0) and self-reported free consumption in all instances (Cocaine r = .88; Alcohol r = .97; Cigarette r = .91). Cocaine demand elasticity was positively correlated with alcohol and cigarette elasticity. Exponentiated parameters were associated with real-world drug use (e.g., weekly cocaine use) whereas these correlations were less consistent for exponential parameters. Our findings show that selection of zero replacement values affects demand parameters and their association with drug-use outcomes when using the exponential model but not the exponentiated model. This work supports the adoption of the exponentiated demand model by replicating improved fit and consistency and demonstrating construct validity and generalizability. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
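The exponentiated demand equation these records refer to, Q = Q0 · 10^(k(e^(−αQ0C) − 1)) in the form of Koffarnus and colleagues, models consumption Q itself rather than log Q, which is why zero-consumption prices can be retained without replacement. A hedged fitting sketch with entirely hypothetical purchase-task data (the fixed k = 2 and the starting values are assumptions, not values from the paper):

```python
# Sketch: fitting the exponentiated demand equation to hypothetical data
# that include zeros at high prices. Data and parameter choices are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def exponentiated_demand(price, q0, alpha, k=2.0):
    """Q = Q0 * 10**(k * (exp(-alpha * Q0 * C) - 1)); Q(0) = Q0 (demand intensity)."""
    return q0 * 10.0 ** (k * (np.exp(-alpha * q0 * price) - 1.0))

# Hypothetical purchase-task responses, zeros retained at the highest prices
price = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
consumption = np.array([10.0, 9.0, 8.0, 6.0, 3.0, 1.0, 0.0, 0.0])

(q0_hat, alpha_hat), _ = curve_fit(
    lambda p, q0, a: exponentiated_demand(p, q0, a),
    price, consumption, p0=[10.0, 0.001], bounds=(0.0, np.inf), maxfev=10000)
```

Here q0_hat estimates demand intensity (Q0) and alpha_hat the essential value (a rate of change in elasticity), the two derived parameters the study correlates with real-world use.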

  15. Assessment of zero-equation SGS models for simulating indoor environment

    NASA Astrophysics Data System (ADS)

    Taghinia, Javad; Rahman, Md Mizanur; Tse, Tim K. T.

    2016-12-01

The understanding of air-flow in enclosed spaces plays a key role in designing ventilation systems and indoor environments. The computational fluid dynamics aspects dictate that the large eddy simulation (LES) offers a subtle means to analyze complex flows with recirculation and streamline curvature effects, providing more robust and accurate details than those of Reynolds-averaged Navier-Stokes simulations. This work assesses the performance of two zero-equation sub-grid scale models: the Rahman-Agarwal-Siikonen-Taghinia (RAST) model with a single grid-filter and the dynamic Smagorinsky model with grid-filter and test-filter scales. This in turn allows a cross-comparison of the effect of two different LES methods in simulating indoor air-flows with forced and mixed (natural + forced) convection. A better performance against experiments is indicated with the RAST model in wall-bounded non-equilibrium indoor air-flows; this is due to its sensitivity toward both the shear and vorticity parameters.
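For context, a "zero-equation" SGS model computes the eddy viscosity algebraically from the resolved field, with no extra transport equation. The classical (non-dynamic) Smagorinsky closure is the simplest example; the dynamic variant assessed in the paper instead estimates the coefficient from a test filter, and the RAST model uses a different algebraic form. A minimal sketch of the classical closure (coefficient value and names are illustrative, not from the paper):

```python
# Sketch: algebraic (zero-equation) Smagorinsky SGS eddy viscosity
# nu_t = (Cs * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij).
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.17):
    """grad_u: 3x3 resolved velocity gradient tensor; delta: filter width."""
    s = 0.5 * (grad_u + grad_u.T)          # resolved strain-rate tensor S_ij
    s_mag = np.sqrt(2.0 * np.sum(s * s))   # strain-rate magnitude |S|
    return (cs * delta) ** 2 * s_mag
```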

  16. Theoretical constraints in the design of multivariable control systems

    NASA Technical Reports Server (NTRS)

    Rynaski, E. G.; Mook, D. J.

    1993-01-01

    The theoretical constraints inherent in the design of multivariable control systems were defined and investigated. These constraints are manifested by the system transmission zeros that limit or bound the areas in which closed loop poles and individual transfer function zeros may be placed. These constraints were investigated primarily in the context of system decoupling or non-interaction. It was proven that decoupling requires the placement of closed loop poles at the system transmission zeros. Therefore, the system transmission zeros must be minimum phase to guarantee a stable decoupled system. Once decoupling has been accomplished, the remaining part of the system exhibits transmission zeros at infinity, so nearly complete design freedom is possible in terms of placing both poles and zeros of individual closed loop transfer functions. A general, dynamic inversion model following system architecture was developed that encompasses both the implicit and explicit configuration. Robustness properties are developed along with other attributes of this type of system. Finally, a direct design is developed for the longitudinal-vertical degrees of freedom of aircraft motion to show how a direct lift flap can be used to improve the pitch-heave maneuvering coordination for enhanced flying qualities.
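The transmission zeros that bound pole and zero placement here can be computed for a state-space model (A, B, C, D) as the finite generalized eigenvalues of the Rosenbrock system matrix pencil. A minimal sketch (this is a standard construction, not code from the report):

```python
# Sketch: finite transmission zeros of (A, B, C, D) as generalized eigenvalues
# of the pencil [[A, B], [C, D]] - s [[I, 0], [0, 0]].
import numpy as np
from scipy.linalg import eigvals

def transmission_zeros(A, B, C, D):
    n = A.shape[0]
    M = np.block([[A, B], [C, D]])        # Rosenbrock system matrix at s = 0
    N = np.zeros_like(M)
    N[:n, :n] = np.eye(n)                 # descriptor part multiplying s
    vals = eigvals(M, N)                  # generalized eigenvalues (some infinite)
    return vals[np.isfinite(vals)]        # keep the finite transmission zeros
```

For example, a SISO realization of (s+1)/(s^2+5s+6) has its single transmission zero at s = −1, and a stable decoupled design would place a closed-loop pole there.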

  17. 40 CFR 60.2770 - What information must I include in my annual report?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule... inoperative, except for zero (low-level) and high-level checks. (3) The date, time, and duration that each... of control if any of the following occur. (1) The zero (low-level), mid-level (if applicable), or...

  18. Manifestations of Dynamical Localization in the Disordered XXZ Spin Chain

    NASA Astrophysics Data System (ADS)

    Elgart, Alexander; Klein, Abel; Stolz, Günter

    2018-04-01

    We study disordered XXZ spin chains in the Ising phase exhibiting droplet localization, a single cluster localization property we previously proved for random XXZ spin chains. It holds in an energy interval I near the bottom of the spectrum, known as the droplet spectrum. We establish dynamical manifestations of localization in the energy window I, including non-spreading of information, zero-velocity Lieb-Robinson bounds, and general dynamical clustering. Our results do not rely on knowledge of the dynamical characteristics of the model outside the droplet spectrum. A byproduct of our analysis is that for random XXZ spin chains this droplet localization can happen only inside the droplet spectrum.

  19. Modeling Active Contraction and Relaxation of Left Ventricle Using Different Zero-load Diastole and Systole Geometries for Better Material Parameter Estimation and Stress/Strain Calculations

    PubMed Central

    Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin

    2018-01-01

Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young’s moduli (YM) were calculated for easy comparison. For diastole phase, begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa, p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). 2G models improved over 1G model limitations and may provide better material parameter estimation and stress/strain calculations. PMID:29399004

  20. Modeling Active Contraction and Relaxation of Left Ventricle Using Different Zero-load Diastole and Systole Geometries for Better Material Parameter Estimation and Stress/Strain Calculations.

    PubMed

    Fan, Longling; Yao, Jing; Yang, Chun; Xu, Di; Tang, Dalin

    2016-01-01

Modeling ventricle active contraction based on in vivo data is extremely challenging because of complex ventricle geometry, dynamic heart motion and active contraction where the reference geometry (zero-stress geometry) changes constantly. A new modeling approach using different diastole and systole zero-load geometries was introduced to handle the changing zero-load geometries for more accurate stress/strain calculations. Echo image data were acquired from 5 patients with infarction (Infarct Group) and 10 without (Non-Infarcted Group). Echo-based computational two-layer left ventricle models using one zero-load geometry (1G) and two zero-load geometries (2G) were constructed. Material parameter values in Mooney-Rivlin models were adjusted to match echo volume data. Effective Young's moduli (YM) were calculated for easy comparison. For diastole phase, begin-filling (BF) mean YM value in the fiber direction (YMf) was 738% higher than its end-diastole (ED) value (645.39 kPa vs. 76.97 kPa, p=3.38E-06). For systole phase, end-systole (ES) YMf was 903% higher than its begin-ejection (BE) value (1025.10 kPa vs. 102.11 kPa, p=6.10E-05). Comparing systolic and diastolic material properties, ES YMf was 59% higher than its BF value (1025.10 kPa vs. 645.39 kPa, p=0.0002). BE mean stress value was 514% higher than its ED value (299.69 kPa vs. 48.81 kPa, p=3.39E-06), while BE mean strain value was 31.5% higher than its ED value (0.9417 vs. 0.7162, p=0.004). Similarly, ES mean stress value was 562% higher than its BF value (19.74 kPa vs. 2.98 kPa, p=6.22E-05), and ES mean strain value was 264% higher than its BF value (0.1985 vs. 0.0546, p=3.42E-06). 2G models improved over 1G model limitations and may provide better material parameter estimation and stress/strain calculations.

  1. Nonlinear convective analysis of a rotating Oldroyd-B nanofluid layer under thermal non-equilibrium utilizing Al2O3-EG colloidal suspension

    NASA Astrophysics Data System (ADS)

    Agarwal, Shilpi; Rana, Puneet

    2016-04-01

In this paper, we examine a layer of Oldroyd-B nanofluid for linear and nonlinear regimes under local thermal non-equilibrium conditions for the classical Rayleigh-Bénard problem. The free-free boundary condition has been implemented with the flux for nanoparticle concentration being zero at the edges. The Oberbeck-Boussinesq approximation holds good, and for the rotational effect the Coriolis term is included in the momentum equation. A two-temperature model explains the effect of local thermal non-equilibrium between the particle and fluid phases. The criterion for onset of stationary convection has been derived as a function of the non-dimensionalized parameters involved, including the Taylor number. The assumed boundary conditions negate the possibility of overstability due to the absence of opposing forces responsible for it. The thermal Nusselt number has been obtained utilizing a weak nonlinear theory in terms of various pertinent parameters in the steady and transient mode, and has been depicted graphically. The main findings signify that the rotation has a stabilizing effect on the system. The stress relaxation parameter λ_1 inhibits, whereas the strain retardation parameter λ_2 enhances, heat transfer utilizing Al2O3 nanofluids.

  2. Non-equilibrium STLS approach to transport properties of single impurity Anderson model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezai, Raheleh, E-mail: R_Rezai@sbu.ac.ir; Ebrahimi, Farshad, E-mail: Ebrahimi@sbu.ac.ir

In this work, using the non-equilibrium Keldysh formalism, we study the effects of the electron–electron interaction and the electron-spin correlation on the non-equilibrium Kondo effect and the transport properties of the symmetric single impurity Anderson model (SIAM) at zero temperature by generalizing the self-consistent method of Singwi, Tosi, Land, and Sjolander (STLS) for a single-band tight-binding model with Hubbard type interaction to out-of-equilibrium steady states. We first determine in a self-consistent manner the non-equilibrium spin correlation function, the effective Hubbard interaction, and the double-occupancy at the impurity site. Then, using the non-equilibrium STLS spin polarization function in the non-equilibrium formalism of the iterative perturbation theory (IPT) of Yosida and Yamada, and Horvatic and Zlatic, we compute the spectral density, the current–voltage characteristics and the differential conductance as functions of the applied bias and the strength of on-site Hubbard interaction. We compare our spectral densities at zero bias with the results of the numerical renormalization group (NRG) and depict the effects of the electron–electron interaction and electron-spin correlation at the impurity site on the aforementioned properties by comparing our numerical results with the order-U² IPT. Finally, we show that the obtained numerical results on the differential conductance have a quadratic universal scaling behavior and the resulting Kondo temperature shows an exponential behavior. -- Highlights: •We introduce for the first time the non-equilibrium method of STLS for Hubbard type models. •We determine the transport properties of SIAM using the non-equilibrium STLS method. •We compare our results with order-U² IPT and NRG. •We show that non-equilibrium STLS, contrary to the GW and self-consistent RPA, produces the two Hubbard peaks in DOS. •We show that the method keeps the universal scaling behavior and correct exponential behavior of the Kondo temperature.

  3. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed-and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
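The paper's simulation argument, that low exposure alone produces a preponderance of zeros under a plain Poisson process with no dual-state mechanism, can be illustrated in a few lines. The rate and exposure values below are hypothetical, chosen only to make the point:

```python
# Sketch: a single-state Poisson process with low exposure already yields
# "excess" zeros, close to the Poisson prediction exp(-lambda).
import numpy as np

rng = np.random.default_rng(42)
rate = 1.2        # hypothetical crash rate per unit of exposure
exposure = 0.1    # low exposure per site (short period / low traffic)
lam = rate * exposure
counts = rng.poisson(lam, size=10000)

observed_zero_frac = np.mean(counts == 0)
poisson_zero_frac = np.exp(-lam)   # both ~0.89: no zero-inflated state needed
```

The large zero fraction here reflects exposure and time/space scale choices, exactly the circumstances the authors identify, rather than an underlying "perfectly safe" state.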

  4. Remote sensing of earth terrain

    NASA Technical Reports Server (NTRS)

    Kong, J. A.

    1988-01-01

Two monographs and 85 journal and conference papers on remote sensing of earth terrain have been published, sponsored by NASA Contract NAG5-270. A multivariate K-distribution is proposed to model the statistics of fully polarimetric data from earth terrain with polarizations HH, HV, VH, and VV. In this approach, correlated polarizations of radar signals, as characterized by a covariance matrix, are treated as the sum of N n-dimensional random vectors; N obeys the negative binomial distribution with a parameter alpha and mean bar N. Subsequently, an n-dimensional K-distribution, with either zero or non-zero mean, is developed in the limit of infinite bar N or illuminated area. The probability density function (PDF) of the K-distributed vector normalized by its Euclidean norm is independent of the parameter alpha and is the same as that derived from a zero-mean Gaussian-distributed random vector. The above model is well supported by experimental data provided by MIT Lincoln Laboratory and the Jet Propulsion Laboratory in the form of polarimetric measurements.
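The construction described, a sum of N zero-mean Gaussian vectors with N drawn from a negative binomial distribution of shape alpha and mean bar N, can be sketched as follows. Function and parameter names are hypothetical; normalizing the sum by sqrt(bar N) keeps the covariance fixed as bar N grows, so the samples approach the K-distributed limit:

```python
# Sketch: K-distributed-like vectors as sums of N i.i.d. zero-mean Gaussian
# vectors, N ~ negative binomial(shape=alpha, mean=nbar). Illustrative only.
import numpy as np

def k_distributed_samples(cov, alpha, nbar, size, rng):
    n = cov.shape[0]
    L = np.linalg.cholesky(cov)
    p = alpha / (alpha + nbar)          # numpy parameterization: mean = n(1-p)/p
    Ns = rng.negative_binomial(alpha, p, size=size)
    out = np.zeros((size, n))
    for i, N in enumerate(Ns):
        if N > 0:
            # sum of N Gaussian vectors with covariance cov, scaled so the
            # overall covariance stays ~cov for any nbar
            out[i] = (L @ rng.standard_normal((n, N))).sum(axis=1) / np.sqrt(nbar)
    return out
```

Small alpha gives heavy-tailed (spiky) returns; as alpha grows the samples tend back toward Gaussian, matching the role of alpha as a texture parameter.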

  5. The power and robustness of maximum LOD score statistics.

    PubMed

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  6. Pharmacoepidemiologic investigation of a clonazepam-valproic acid interaction by mixed effect modeling using routine clinical pharmacokinetic data in Japanese patients.

    PubMed

    Yukawa, E; Nonaka, T; Yukawa, M; Higuchi, S; Kuroda, T; Goto, Y

    2003-12-01

Non-linear Mixed Effects Modeling (NONMEM) was used to estimate the effects of the clonazepam-valproic acid interaction on clearance values using 576 serum levels collected from 317 pediatric and adult epileptic patients (age range, 0.3-32.6 years) during their routine clinical care. Patients received clonazepam and/or valproic acid. The final model describing clonazepam clearance was CL = 144.0 × TBW^-0.172 × 1.14^VPA, where CL is total body clearance (mL/kg/h); TBW is total body weight (kg); VPA = 1 for concomitant administration of valproic acid and VPA = 0 otherwise. The final model describing valproic acid clearance was CL (mL/kg/h) = 17.2 × TBW^-0.264 × DOSE^0.159 × 0.821^CZP × 0.896^GEN, where DOSE is the daily dose of valproic acid (mg/kg/day); CZP = 1 for concomitant administration of clonazepam and CZP = 0 otherwise; GEN = 1 for female and GEN = 0 otherwise. Concomitant administration of clonazepam and valproic acid resulted in a 14% increase in clonazepam clearance, and a 17.9% decrease in valproic acid clearance.
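The two final covariate models can be transcribed directly as functions (exponents reconstructed from the abstract's superscript-stripped text; function names are ours). The multiplicative indicator terms make the reported interaction sizes explicit: 1.14^VPA is the 14% increase, 0.821^CZP the 17.9% decrease:

```python
# The final NONMEM covariate models from the abstract, transcribed as code.
# Units: CL in mL/kg/h, TBW in kg, DOSE in mg/kg/day; indicator covariates
# (vpa, czp, gen) are 1 when the condition holds and 0 otherwise.

def clonazepam_cl(tbw, vpa):
    return 144.0 * tbw ** -0.172 * 1.14 ** vpa

def valproic_acid_cl(tbw, dose, czp, gen):
    return 17.2 * tbw ** -0.264 * dose ** 0.159 * 0.821 ** czp * 0.896 ** gen
```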

  7. Cumulative sum control charts for monitoring geometrically inflated Poisson processes: An application to infectious disease counts data.

    PubMed

    Rakitzis, Athanasios C; Castagliola, Philippe; Maravelakis, Petros E

    2018-02-01

    In this work, we study upper-sided cumulative sum control charts that are suitable for monitoring geometrically inflated Poisson processes. We assume that a process is properly described by a two-parameter extension of the zero-inflated Poisson distribution, which can be used for modeling count data with an excessive number of zero and non-zero values. Two different upper-sided cumulative sum-type schemes are considered, both suitable for the detection of increasing shifts in the average of the process. Aspects of their statistical design are discussed and their performance is compared under various out-of-control situations. Changes in both parameters of the process are considered. Finally, the monitoring of the monthly cases of poliomyelitis in the USA is given as an illustrative example.
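Both schemes build on the standard upper-sided CUSUM recursion for counts, C_t = max(0, C_{t−1} + X_t − k), signalling when C_t reaches the decision limit h. A minimal sketch; the reference value k, limit h, and the restart-after-signal convention are hypothetical design choices, not the paper's chart designs:

```python
# Sketch: upper-sided CUSUM for count data, detecting increases in the mean.
# k (reference value) and h (decision limit) are illustrative choices.
def cusum_upper(counts, k=2.0, h=6.0):
    c, signals = 0.0, []
    for t, x in enumerate(counts):
        c = max(0.0, c + x - k)   # accumulate evidence of an upward shift
        if c >= h:
            signals.append(t)     # out-of-control signal at time t
            c = 0.0               # restart the statistic after a signal
    return signals

in_control = [1, 2, 0, 1, 2, 1, 0, 2]
shifted = in_control + [5, 6, 4, 5]   # upward shift in the last four periods
```

On the shifted series the statistic stays at zero through the in-control stretch and signals shortly after the shift, which is the behaviour the paper's average-run-length comparisons quantify.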

  8. Degenerate limit thermodynamics beyond leading order for models of dense matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantinou, Constantinos, E-mail: c.constantinou@fz-juelich.de; Muccioli, Brian, E-mail: bm956810@ohio.edu; Prakash, Madappa, E-mail: prakash@ohio.edu

    2015-12-15

Analytical formulas for next-to-leading order temperature corrections to the thermal state variables of interacting nucleons in bulk matter are derived in the degenerate limit. The formalism developed is applicable to a wide class of non-relativistic and relativistic models of hot and dense matter currently used in nuclear physics and astrophysics (supernovae, proto-neutron stars and neutron star mergers) as well as in condensed matter physics. We consider the general case of arbitrary dimensionality of momentum space and an arbitrary degree of relativity (for relativistic models). For non-relativistic zero-range interactions, knowledge of the Landau effective mass suffices to compute next-to-leading order effects, but for finite-range interactions, momentum derivatives of the Landau effective mass function up to second order are required. Results from our analytical formulas are compared with the exact results for zero- and finite-range potential and relativistic mean-field theoretical models. In all cases, inclusion of next-to-leading order temperature effects substantially extends the ranges of partial degeneracy for which the analytical treatment remains valid. Effects of many-body correlations that deserve further investigation are highlighted.

  9. Characterization and Modeling of Thoraco-Abdominal Response to Blast Waves. Volume 4. Biomechanical Model of Thorax Response to Blast Loading

    DTIC Science & Technology

    1985-05-01

non-zero Dirichlet boundary conditions and/or general mixed type boundary conditions. Note that Neumann type boundary conditions enter the problem by... human and various loading conditions for the definition of a generalized safety guideline of blast exposure. To model the response of a sheep torso

  10. Fluids, superfluids and supersolids: dynamics and cosmology of self-gravitating media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celoria, Marco; Comelli, Denis; Pilo, Luigi, E-mail: marco.celoria@gssi.infn.it, E-mail: comelli@fe.infn.it, E-mail: luigi.pilo@aquila.infn.it

We compute cosmological perturbations for a generic self-gravitating media described by four derivatively-coupled scalar fields. Depending on the internal symmetries of the action for the scalar fields, one can describe perfect fluid, superfluid, solid and supersolid media. Symmetries dictate both dynamical and thermodynamical properties of the media. Generically, scalar perturbations include, besides the gravitational potential, an additional non-adiabatic mode associated with the entropy per particle σ. While perfect fluids and solids are adiabatic with σ constant in time, superfluids and supersolids feature a non-trivial dynamics for σ. Special classes of isentropic media with zero σ can also be found. Tensor modes become massive for solids and supersolids. Such an effective approach can be used to give a very general and symmetry driven modelling of the dark sector.

  11. Small area estimation (SAE) model: Case study of poverty in West Java Province

    NASA Astrophysics Data System (ADS)

    Suhartini, Titin; Sadik, Kusman; Indahwati

    2016-02-01

    This paper compares direct estimation with indirect/Small Area Estimation (SAE) models. Model selection included resolving the multicollinearity problem in the auxiliary variables, either by retaining only non-collinear variables or by applying principal components (PC). The parameters of interest were the area-level proportions of agricultural-venture poor households and of agricultural poor households in West Java Province. These parameters can be estimated either directly or via SAE. Direct estimation was problematic: three areas had estimates of exactly zero and could not be estimated directly because of small sample sizes. The estimated proportion of agricultural-venture poor households was 19.22% and that of agricultural poor households 46.79%. The best model for agricultural-venture poor households retained only the non-collinear variables, while the best model for agricultural poor households used PC. SAE outperformed direct estimation for both area-level proportions in West Java Province. Small area estimation thus overcomes the small-sample-size problem and yields area-level estimates with higher accuracy and better precision than the direct estimator.

  12. Phase transition and field effect topological quantum transistor made of monolayer MoS2

    NASA Astrophysics Data System (ADS)

    Simchi, H.; Simchi, M.; Fardmanesh, M.; Peeters, F. M.

    2018-06-01

    We study topological phase transitions and a topological quantum field effect transistor in monolayer molybdenum disulfide (MoS2) using a two-band Hamiltonian model. Without considering the quadratic (q²) diagonal term in the Hamiltonian, we show that the phase diagram includes quantum anomalous Hall effect, quantum spin Hall effect, and spin quantum anomalous Hall effect regions such that the topological Kirchhoff law is satisfied in the plane. By considering the q² diagonal term and including one valley, it is shown that MoS2 has a non-trivial topology, and the valley Chern number is non-zero for each spin. We show that the wave function is (is not) localized at the edges when the q² diagonal term is added (deleted) to (from) the spin-valley Dirac mass equation. We calculate the quantum conductance of zigzag MoS2 nanoribbons by using the nonequilibrium Green function method and show how this device works as a field effect topological quantum transistor.

  13. Feature Selection Methods for Zero-Shot Learning of Neural Activity.

    PubMed

    Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
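The correlation-based stability idea described above can be sketched in a few lines. The half-split scheme, the averaging of absolute Pearson correlations, and all names below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def stability_scores(X, y, n_splits=5, seed=0):
    """Sketch of correlation-based feature stability: average each
    feature's absolute Pearson correlation with the attribute y over
    random half-splits of the trials, so features that correlate
    consistently rank highest."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    scores = np.zeros(p)
    for _ in range(n_splits):
        idx = rng.permutation(n)[: n // 2]
        Xc = X[idx] - X[idx].mean(axis=0)          # centered features
        yc = y[idx] - y[idx].mean()                # centered attribute
        r = (Xc * yc[:, None]).sum(axis=0) / np.sqrt(
            (Xc ** 2).sum(axis=0) * (yc ** 2).sum() + 1e-12)
        scores += np.abs(r)
    return scores / n_splits

# Toy data: 100 trials, 20 candidate neural features, feature 3 informative.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 20))
y = X[:, 3] + 0.1 * rng.normal(size=100)
top5 = np.argsort(stability_scores(X, y))[::-1][:5]
print(top5)
```

Selecting the top-k most stable features, rather than thresholding raw correlations from a single fit, is what gives the approach its robustness to sampling noise.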

  14. What does my patient's coronary artery calcium score mean? Combining information from the coronary artery calcium score with information from conventional risk factors to estimate coronary heart disease risk

    PubMed Central

    Pletcher, Mark J; Tice, Jeffrey A; Pignone, Michael; McCulloch, Charles; Callister, Tracy Q; Browner, Warren S

    2004-01-01

    Background The coronary artery calcium (CAC) score is an independent predictor of coronary heart disease. We sought to combine information from the CAC score with information from conventional cardiac risk factors to produce post-test risk estimates, and to determine whether the score may add clinically useful information. Methods We measured the independent cross-sectional associations between conventional cardiac risk factors and the CAC score among asymptomatic persons referred for non-contrast electron beam computed tomography. Using the resulting multivariable models and published CAC score-specific relative risk estimates, we estimated post-test coronary heart disease risk in a number of different scenarios. Results Among 9341 asymptomatic study participants (age 35–88 years, 40% female), we found that conventional coronary heart disease risk factors including age, male sex, self-reported hypertension, diabetes and high cholesterol were independent predictors of the CAC score, and we used the resulting multivariable models for predicting post-test risk in a variety of scenarios. Our models predicted, for example, that a 60-year-old non-smoking non-diabetic woman with hypertension and high cholesterol would have a 47% chance of having a CAC score of zero, reducing her 10-year risk estimate from 15% (per Framingham) to 6–9%; if her score were over 100, however (a 17% chance), her risk estimate would be markedly higher (25–51% in 10 years). In low risk scenarios, the CAC score is very likely to be zero or low, and unlikely to change management. Conclusion Combining information from the CAC score with information from conventional risk factors can change assessment of coronary heart disease risk to an extent that may be clinically important, especially when the pre-test 10-year risk estimate is intermediate. The attached spreadsheet makes these calculations easy. PMID:15327691
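The kind of pre-test/post-test update the abstract describes amounts to Bayes' rule on the odds scale. The helper name and the likelihood ratio of 0.4 for a zero CAC score below are hypothetical illustration values, not the paper's fitted estimates:

```python
def post_test_risk(pre_test_risk, likelihood_ratio):
    """Update a pre-test probability with a likelihood ratio via
    Bayes' rule: convert to odds, multiply, convert back."""
    pre_odds = pre_test_risk / (1.0 - pre_test_risk)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A 15% ten-year risk (per Framingham), updated with an assumed
# likelihood ratio of 0.4 for a CAC score of zero, drops to ~6.6%,
# consistent with the 6-9% range quoted in the abstract.
print(round(post_test_risk(0.15, 0.4), 3))
```

A likelihood ratio above 1 (e.g. for a score over 100) moves the estimate the other way, which is why the score is most informative at intermediate pre-test risk.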

  15. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but also is more robust than the traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
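A minimal sketch of the zero-inflated negative binomial likelihood that underlies the ZINB model, assuming the standard mean/dispersion parameterization; the EM algorithm and the LASSO/SCAD/MCP penalties themselves are not shown:

```python
from math import exp, lgamma, log

def zinb_logpmf(y, mu, theta, pi):
    """Log-pmf of a zero-inflated negative binomial: with probability
    pi the count is a structural zero; otherwise it follows NB(mu,
    theta) with mean mu and variance mu + mu**2/theta."""
    nb = (lgamma(y + theta) - lgamma(theta) - lgamma(y + 1)
          + theta * log(theta / (theta + mu))
          + y * log(mu / (theta + mu)))
    if y == 0:
        # zeros can come from either component: pi + (1-pi)*NB(0)
        return log(pi + (1.0 - pi) * exp(nb))
    return log(1.0 - pi) + nb

# The zero-inflated pmf puts extra mass at zero relative to the plain NB.
print(exp(zinb_logpmf(0, mu=2.0, theta=1.0, pi=0.3)))
```

Maximizing the sum of these log-pmf terms minus a penalty on the regression coefficients is the objective the paper's EM/coordinate-descent algorithm optimizes.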

  16. Droplet localization in the random XXZ model and its manifestations

    NASA Astrophysics Data System (ADS)

    Elgart, A.; Klein, A.; Stolz, G.

    2018-01-01

    We examine many-body localization properties for the eigenstates that lie in the droplet sector of the random-field spin-1/2 XXZ chain. These states satisfy a basic single cluster localization property (SCLP), derived in Elgart et al (2018 J. Funct. Anal. (in press)). This leads to many consequences, including dynamical exponential clustering, non-spreading of information under the time evolution, and a zero velocity Lieb-Robinson bound. Since SCLP is only applicable to the droplet sector, our definitions and proofs do not rely on knowledge of the spectral and dynamical characteristics of the model outside this regime. Rather, to allow for a possible mobility transition, we adapt the notion of restricting the Hamiltonian to an energy window from the single particle setting to the many body context.

  17. Bayesian spatiotemporal analysis of zero-inflated biological population density data by a delta-normal spatiotemporal additive model.

    PubMed

    Arcuti, Simona; Pollice, Alessio; Ribecco, Nunziata; D'Onghia, Gianfranco

    2016-03-01

    We evaluate the spatiotemporal changes in the density of a particular species of crustacean known as deep-water rose shrimp, Parapenaeus longirostris, based on biological sample data collected during trawl surveys carried out from 1995 to 2006 as part of the international project MEDITS (MEDiterranean International Trawl Surveys). As is the case for many biological variables, density data are continuous and characterized by unusually large amounts of zeros, accompanied by a skewed distribution of the remaining values. Here we analyze the normalized density data by a Bayesian delta-normal semiparametric additive model including the effects of covariates, using penalized regression with low-rank thin-plate splines for nonlinear spatial and temporal effects. Modeling the zero and nonzero values by two joint processes, as we propose in this work, allows great flexibility and easy handling of complex likelihood functions, avoiding inaccurate statistical inferences due to misclassification of the high proportion of exact zeros in the model. Bayesian model estimation is obtained by Markov chain Monte Carlo simulations, suitably specifying the complex likelihood function of the zero-inflated density data. The study highlights relevant nonlinear spatial and temporal effects and the influence of the annual Mediterranean oscillations index and of the sea surface temperature on the distribution of the deep-water rose shrimp density. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
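The delta (two-part) idea of modeling zeros and nonzero values by separate joint processes can be illustrated with a minimal log-likelihood. This sketch assumes a plain Bernoulli-plus-normal formulation with fixed parameters, not the paper's Bayesian spatiotemporal additive model with splines:

```python
from math import log, pi as PI

def delta_normal_loglik(y, p_zero, mu, sigma2):
    """Log-likelihood of one observation under a simple delta-normal
    two-part model: a Bernoulli component governs whether the density
    is exactly zero, and a normal density governs the (normalized)
    nonzero values."""
    if y == 0.0:
        return log(p_zero)
    normal = -0.5 * (log(2.0 * PI * sigma2) + (y - mu) ** 2 / sigma2)
    return log(1.0 - p_zero) + normal

# Mixed sample with exact zeros and continuous nonzero values:
data = [0.0, 0.0, 1.2, -0.3, 0.0, 0.8]
ll = sum(delta_normal_loglik(y, p_zero=0.5, mu=0.5, sigma2=1.0)
         for y in data)
print(ll)
```

Because the zero and nonzero parts factor into separate terms, each component can be given its own covariate effects and fit jointly, which is the flexibility the abstract refers to.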

  18. The role of anthropogenic aerosol emission reduction in achieving the Paris Agreement's objective

    NASA Astrophysics Data System (ADS)

    Hienola, Anca; Pietikäinen, Joni-Pekka; O'Donnell, Declan; Partanen, Antti-Ilari; Korhonen, Hannele; Laaksonen, Ari

    2017-04-01

    The Paris agreement reached in December 2015 under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC) aims at holding the global temperature increase to well below 2°C above preindustrial levels and "to pursue efforts to limit the temperature increase to 1.5°C above preindustrial levels". Limiting warming to any level implies that the total amount of carbon dioxide (CO2) - the dominant driver of long-term temperatures - that can ever be emitted into the atmosphere is finite. Essentially, this means that global CO2 emissions need to become net zero. CO2 is not the only pollutant causing warming, although it is the most persistent. Short-lived, non-CO2 climate forcers must also be considered. Whereas much effort has been put into defining a threshold for temperature increase and zero net carbon emissions, surprisingly little attention has been paid to the non-CO2 climate forcers, including not just the non-CO2 greenhouse gases (methane (CH4), nitrous oxide (N2O), halocarbons etc.) but also the anthropogenic aerosols like black carbon (BC), organic carbon (OC) and sulfate. This study investigates the possibility of limiting the temperature increase to 1.5°C by the end of the century under different future scenarios of anthropogenic aerosol emissions simulated with the very simple MAGICC climate carbon cycle model as well as with ECHAM6.1-HAM2.2-SALSA + UVic ESCM. The simulations include two different CO2 scenarios: RCP3PD as control and a CO2 reduction leading to 1.5°C (which translates into reaching net zero CO2 emissions by the mid-2040s, followed by negative emissions by the end of the century); each CO2 scenario also includes two aerosol pollution control cases, denoted CLE (current legislation) and MFR (maximum feasible reduction).
The main result of the above scenarios is that the stronger the anthropogenic aerosol emission reduction is, the larger the temperature increase by 2100 relative to pre-industrial temperature will be, making the 1.5°C temperature goal impossible to reach. Although the global reduction of anthropogenic aerosols can greatly amplify the warming effect of GHGs, all our simulations resulted in a temperature increase below (but not well below) 2°C above preindustrial levels - a slightly more realistic target compared to 1.5°C. The results of this study are based on simulations of only two climate models. As such, we do not regard these results as indisputable, but we consider that aerosols and their effect on climate deserve more attention when discussing future aerosol emissions.

  19. Kinematics, structural mechanics, and design of origami structures with smooth folds

    NASA Astrophysics Data System (ADS)

    Peraza Hernandez, Edwin Alexander

    Origami provides novel approaches to the fabrication, assembly, and functionality of engineering structures in various fields such as aerospace, robotics, etc. With the increase in complexity of the geometry and materials for origami structures that provide engineering utility, computational models and design methods for such structures have become essential. Currently available models and design methods for origami structures are generally limited to the idealization of the folds as creases of zeroth-order geometric continuity. Such an idealization is not proper for origami structures having non-negligible thickness or maximum curvature at the folds restricted by material limitations. Thus, for general structures, creased folds of merely zeroth-order geometric continuity are not appropriate representations of structural response and a new approach is needed. The first contribution of this dissertation is a model for the kinematics of origami structures having realistic folds of non-zero surface area and exhibiting higher-order geometric continuity, here termed smooth folds. The geometry of the smooth folds and the constraints on their associated kinematic variables are presented. A numerical implementation of the model allowing for kinematic simulation of structures having arbitrary fold patterns is also described. Examples illustrating the capability of the model to capture realistic structural folding response are provided. Subsequently, a method for solving the origami design problem of determining the geometry of a single planar sheet and its pattern of smooth folds that morphs into a given three-dimensional goal shape, discretized as a polygonal mesh, is presented. The design parameterization of the planar sheet and the constraints that allow for a valid pattern of smooth folds and approximation of the goal shape in a known folded configuration are presented. Various testing examples considering goal shapes of diverse geometries are provided. 
Afterwards, a model for the structural mechanics of origami continuum bodies with smooth folds is presented. Such a model entails the integration of the presented kinematic model and existing plate theories in order to obtain a structural representation for folds having non-zero thickness and comprised of arbitrary materials. The model is validated against finite element analysis. The last contribution addresses the design and analysis of active material-based self-folding structures that morph via simultaneous folding towards a given three-dimensional goal shape starting from a planar configuration. Implementation examples including shape memory alloy (SMA)-based self-folding structures are provided.

  20. Zero-static power radio-frequency switches based on MoS2 atomristors.

    PubMed

    Kim, Myungsoo; Ge, Ruijing; Wu, Xiaohan; Lan, Xing; Tice, Jesse; Lee, Jack C; Akinwande, Deji

    2018-06-28

    Recently, a non-volatile resistance switching or memristor (equivalently, atomristor in atomic layers) effect was discovered in transition metal dichalcogenide (TMD) vertical devices. Owing to the monolayer-thin transport and high crystalline quality, ON-state resistances below 10 Ω are achievable, making MoS2 atomristors suitable as energy-efficient radio-frequency (RF) switches. MoS2 RF switches afford zero hold voltage and, hence, zero static power dissipation, overcoming the limitation of transistor and mechanical switches. Furthermore, MoS2 switches are fully electronic and can be integrated on arbitrary substrates, unlike phase-change RF switches. High-frequency results reveal that a key figure of merit, the cutoff frequency (fc), is about 10 THz for sub-μm² switches, with favorable scaling that can afford fc above 100 THz for nanoscale devices, exceeding the performance of contemporary switches that suffer from area-invariant scaling. These results indicate a new electronic application of TMDs as non-volatile switches for communication platforms, including mobile systems, low-power internet-of-things, and THz beam steering.
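The cutoff frequency quoted in the abstract is the standard RF-switch figure of merit fc = 1/(2π·Ron·Coff). The component values below are illustrative assumptions chosen to land near the quoted ~10 THz, not the paper's measurements:

```python
from math import pi

def cutoff_frequency(r_on, c_off):
    """RF-switch figure of merit: fc = 1 / (2*pi*R_on*C_off),
    with r_on in ohms and c_off in farads."""
    return 1.0 / (2.0 * pi * r_on * c_off)

# E.g. an assumed R_on of 5 ohm and C_off of 3 fF give fc on the
# order of 10 THz; shrinking the device area lowers C_off and
# raises fc, which is the favorable scaling the abstract notes.
print(cutoff_frequency(5.0, 3e-15) / 1e12, "THz")
```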

  1. Quantum transport of two-species Dirac fermions in dual-gated three-dimensional topological insulators

    DOE PAGES

    Xu, Yang; Miotkowski, Ireneusz; Chen, Yong P.

    2016-05-04

    Topological insulators are a novel class of quantum matter with a gapped insulating bulk, yet gapless spin-helical Dirac fermion conducting surface states. Here, we report local and non-local electrical and magneto transport measurements in dual-gated BiSbTeSe2 thin film topological insulator devices, with conduction dominated by the spatially separated top and bottom surfaces, each hosting a single species of Dirac fermions with independent gate control over the carrier type and density. We observe many intriguing quantum transport phenomena in such a fully tunable two-species topological Dirac gas, including a zero-magnetic-field minimum conductivity close to twice the conductance quantum at the double Dirac point, a series of ambipolar two-component half-integer Dirac quantum Hall states and an electron-hole total filling factor zero state (with a zero-Hall plateau), exhibiting dissipationless (chiral) and dissipative (non-chiral) edge conduction, respectively. As a result, such a system paves the way to explore rich physics, ranging from topological magnetoelectric effects to exciton condensation.

  2. Holographic anisotropic background with confinement-deconfinement phase transition

    NASA Astrophysics Data System (ADS)

    Aref'eva, Irina; Rannu, Kristina

    2018-05-01

    We present new anisotropic black brane solutions in 5D Einstein-dilaton-two-Maxwell system. The anisotropic background is specified by an arbitrary dynamical exponent ν, a nontrivial warp factor, a non-zero dilaton field, a non-zero time component of the first Maxwell field and a non-zero longitudinal magnetic component of the second Maxwell field. The blackening function supports the Van der Waals-like phase transition between small and large black holes for a suitable first Maxwell field charge. The isotropic case corresponding to ν = 1 and zero magnetic field reproduces previously known solutions. We investigate the anisotropy influence on the thermodynamic properties of our background, in particular, on the small/large black holes phase transition diagram. We discuss applications of the model to the bottom-up holographic QCD. The RG flow interpolates between the UV section with two suppressed transversal coordinates and the IR section with the suppressed time and longitudinal coordinates due to anisotropic character of our solution. We study the temporal Wilson loops, extended in longitudinal and transversal directions, by calculating the minimal surfaces of the corresponding probing open string world-sheet in anisotropic backgrounds with various temperatures and chemical potentials. We find that dynamical wall locations depend on the orientation of the quark pairs, that gives a crossover transition line between confinement/deconfinement phases in the dual gauge theory. Instability of the background leads to the appearance of the critical points ( μ ϑ,b , T ϑ,b ) depending on the orientation ϑ of quark-antiquark pairs in respect to the heavy ions collision line.

  3. 40 CFR 86.140-94 - Exhaust sample analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-cycle and methanol-fueled, natural gas-fueled and liquefied petroleum gas-fueled (if non-heated FID option is used) diesel vehicle HC: (1) Zero the analyzers and obtain a stable zero reading. Recheck after...: (1) Zero HFID analyzer and obtain a stable zero reading. (2) Introduce span gas and set instrument...

  4. 40 CFR 86.140-94 - Exhaust sample analysis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-cycle and methanol-fueled, natural gas-fueled and liquefied petroleum gas-fueled (if non-heated FID option is used) diesel vehicle HC: (1) Zero the analyzers and obtain a stable zero reading. Recheck after...: (1) Zero HFID analyzer and obtain a stable zero reading. (2) Introduce span gas and set instrument...

  5. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu

    2014-01-15

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
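The analysis step of a stochastic (perturbed-observation) ensemble Kalman filter can be sketched as follows. This is the textbook update for a linear observation operator, not the paper's specific algorithm for jointly estimating state, model coefficients, and noise covariances:

```python
import numpy as np

def enkf_update(ensemble, obs, H, R, rng):
    """One stochastic EnKF analysis step.

    ensemble: (n_ens, n_state) forecast members
    obs:      (n_obs,) observation vector
    H:        (n_obs, n_state) linear observation operator
    R:        (n_obs, n_obs) observation error covariance
    """
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)      # state anomalies
    Y = X @ H.T                               # observed-space anomalies
    Pyy = Y.T @ Y / (n_ens - 1) + R           # innovation covariance
    Pxy = X.T @ Y / (n_ens - 1)               # cross covariance
    K = Pxy @ np.linalg.inv(Pyy)              # Kalman gain
    # Perturb observations so the analysis spread stays consistent.
    pert = rng.multivariate_normal(np.zeros(len(obs)), R, size=n_ens)
    innov = (obs + pert) - ensemble @ H.T
    return ensemble + innov @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(5.0, 2.0, size=(200, 1))     # prior: mean ~5, sd ~2
H = np.array([[1.0]])
R = np.array([[1.0]])
analysis = enkf_update(ens, np.array([0.0]), H, R, rng)
print(analysis.mean())  # pulled from the prior mean toward the observation
```

For joint state/parameter estimation of the kind described above, the parameters are typically appended to the state vector so the same update applies.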

  6. Probing the cross-effect of strains in non-linear elasticity of nearly regular polymer networks by pure shear deformation.

    PubMed

    Katashima, Takuya; Urayama, Kenji; Chung, Ung-il; Sakai, Takamasa

    2015-05-07

    The pure shear deformation of Tetra-polyethylene glycol gels reveals the presence of an explicit cross-effect of strains in the strain energy density function, even for polymer networks with a nearly regular structure including no appreciable amount of structural defects such as trapped entanglements. This result is in contrast to the expectation of the classical Gaussian network model (neo-Hookean model), i.e., the vanishing of the cross effect in regular networks with no trapped entanglement. The results show that (1) the cross effect of strains does not depend on the network-strand length; (2) the cross effect is not affected by the presence of non-network strands; (3) the cross effect is proportional to the network polymer concentration, including both elastically effective and ineffective strands; and (4) the cross effect is expected to vanish only in the limit of zero network concentration in real polymer networks. These features indicate that real polymer networks with regular network structures have an explicit cross-effect of strains, which originates from some interaction between network strands (other than the entanglement effect) such as nematic interaction, topological interaction, and excluded volume interaction.

  7. Modeling of switching regulator power stages with and without zero-inductor-current dwell time

    NASA Technical Reports Server (NTRS)

    Lee, F. C.; Yu, Y.; Triner, J. E.

    1976-01-01

    State space techniques are employed to derive accurate models for buck, boost, and buck/boost converter power stages operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without the dwell time as a special case of the discontinuous-current mode, when the dwell time vanishes. An abrupt change of system behavior including a reduction of the system order when the dwell time appears is shown both analytically and experimentally.

  8. Implementation and evaluation of PM2.5 source contribution ...

    EPA Pesticide Factsheets

    Source culpability assessments are useful for developing effective emissions control programs. The Integrated Source Apportionment Method (ISAM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to track contributions from source groups and regions to ambient levels and deposited amounts of primary and secondary inorganic PM2.5. Confidence in this approach is established by comparing ISAM source contribution estimates to emissions zero-out simulations recognizing that these approaches are not always expected to provide the same answer. The comparisons are expected to be most similar for more linear processes such as those involving primary emissions of PM2.5 and most different for non-linear systems like ammonium nitrate formation. Primarily emitted PM2.5 (e.g. elemental carbon), sulfur dioxide, ammonia, and nitrogen oxide contribution estimates compare well to zero-out estimates for ambient concentration and deposition. PM2.5 sulfate ion relationships are strong, but nonlinearity is evident and shown to be related to aqueous phase oxidation reactions in the host model. ISAM and zero-out contribution estimates are less strongly related for PM2.5 ammonium nitrate, resulting from instances of non-linear chemistry and negative responses (increases in PM2.5 due to decreases in emissions). ISAM is demonstrated in the context of an annual simulation tracking well characterized emissions source sectors and boundary conditions shows source contri

  9. Entanglement sum rules.

    PubMed

    Swingle, Brian

    2013-09-06

    We compute the entanglement entropy of a wide class of models that may be characterized as describing matter coupled to gauge fields. Our principal result is an entanglement sum rule that states that the entropy of the full system is the sum of the entropies of the two components. In the context of the models we consider, this result applies to the full entropy, but more generally it is a statement about the additivity of universal terms in the entropy. Our proof simultaneously extends and simplifies previous arguments, with extensions including new models at zero temperature as well as the ability to treat finite temperature crossovers. We emphasize that while the additivity is an exact statement, each term in the sum may still be difficult to compute. Our results apply to a wide variety of phases including Fermi liquids, spin liquids, and some non-Fermi liquid metals. For example, we prove that our model of an interacting Fermi liquid has exactly the log violation of the area law for entanglement entropy predicted by the Widom formula, in agreement with earlier arguments.

  10. A study on industrial accident rate forecasting and program development of estimated zero accident time in Korea.

    PubMed

    Kim, Tae-gu; Kang, Young-sig; Lee, Hyung-won

    2011-01-01

    To begin a zero accident campaign for industry, the first thing is to estimate the industrial accident rate and the zero accident time systematically. This paper considers the social and technical change of the business environment after beginning the zero accident campaign through quantitative time series analysis methods. These methods include sum of squared errors (SSE), regression analysis method (RAM), exponential smoothing method (ESM), double exponential smoothing method (DESM), auto-regressive integrated moving average (ARIMA) model, and the proposed analytic function method (AFM). The program is developed to estimate the accident rate, zero accident time and achievement probability of an efficient industrial environment. In this paper, MFC (Microsoft Foundation Class) software of Visual Studio 2008 was used to develop a zero accident program. The results of this paper will provide major information for industrial accident prevention and be an important part of stimulating the zero accident campaign within all industrial environments.
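Of the time-series methods the abstract lists, simple exponential smoothing is the easiest to sketch. The smoothing constant and the accident-rate series below are illustrative values, not the study's Korean industrial data:

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing (ESM): the smoothed level is a
    weighted average of the newest observation and the previous level,
    level_t = alpha*x_t + (1-alpha)*level_{t-1}. Returns the final
    level, which serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1.0 - alpha) * level
    return level

# A declining accident-rate series: the forecast tracks the recent
# level, which is how a projected "zero accident time" can be read
# off by extrapolating the fitted trend.
rates = [1.2, 1.1, 0.95, 0.9, 0.82, 0.78]
print(round(exponential_smoothing(rates, alpha=0.5), 3))
```

The double exponential smoothing (DESM) variant the abstract also mentions adds a second smoothed trend term, which is what allows extrapolation to the time the rate reaches a target.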

  11. Disassembling the clockwork mechanism

    NASA Astrophysics Data System (ADS)

    Craig, Nathaniel; Garcia Garcia, Isabel; Sutherland, Dave

    2017-10-01

    The clockwork mechanism is a means of naturally generating exponential hierarchies in theories without significant hierarchies among fundamental parameters. We emphasize the role of interactions in the clockwork mechanism, demonstrating that clockwork is an intrinsically abelian phenomenon precluded in non-abelian theories such as Yang-Mills, non-linear sigma models, and gravity. We also show that clockwork is not realized in extra-dimensional theories through purely geometric effects, but may be generated by appropriate localization of zero modes.

  12. Anomalous scaling of a passive scalar advected by the Navier-Stokes velocity field: two-loop approximation.

    PubMed

    Adzhemyan, L Ts; Antonov, N V; Honkonen, J; Kim, T L

    2005-01-01

    The field theoretic renormalization group and operator-product expansion are applied to the model of a passive scalar quantity advected by a non-Gaussian velocity field with finite correlation time. The velocity is governed by the Navier-Stokes equation, subject to an external random stirring force with the correlation function proportional to δ(t − t′) k^(4−d−2ε). It is shown that the scalar field is intermittent already for small ε, its structure functions display anomalous scaling behavior, and the corresponding exponents can be systematically calculated as series in ε. The practical calculation is accomplished to order ε² (two-loop approximation), including anisotropic sectors. As for the well-known Kraichnan rapid-change model, the anomalous scaling results from the existence in the model of composite fields (operators) with negative scaling dimensions, identified with the anomalous exponents. Thus the mechanism of the origin of anomalous scaling appears similar for the Gaussian model with zero correlation time and the non-Gaussian model with finite correlation time. It should be emphasized that, in contrast to Gaussian velocity ensembles with finite correlation time, the model and the perturbation theory discussed here are manifestly Galilean covariant. The relevance of these results for real passive advection and comparison with the Gaussian models and experiments are briefly discussed.

  13. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take value zero for a large amount of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part are assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Comparing statistical methods for analyzing skewed longitudinal count data with many zeros: an example of smoking cessation.

    PubMed

    Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E

    2013-07-01

    Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Analysis of capillary drainage from a flat solid strip

    NASA Astrophysics Data System (ADS)

    Ramé, Enrique; Zimmerli, Gregory A.

    2014-06-01

A long and narrow solid strip coated with a thin liquid layer is used as a model of a generic fluid mass probe in a spacecraft propellant tank just after a small thruster firing. The drainage dynamics of the initial coating layer into the settled bulk fluid affects the interpretation of probe measurements as the sensors' signal depends strongly on whether a sensor is in contact with vapor or with liquid. We analyze the drainage under various conditions of zero-gravity (i.e., capillary drainage) and with gravity aligned with the strip length, corresponding to the thruster acceleration. Long-time analytical solutions are found for zero and non-zero gravity. In the case with gravity, an approximate solution is found using matched asymptotics. Estimates show that a thrust of 10^-3 g0 significantly reduces drainage times.

  16. Zero-field quantum critical point in Ce0.91Yb0.09CoIn5

    NASA Astrophysics Data System (ADS)

    Singh, Y. P.; Adhikari, R. B.; Haney, D. J.; White, B. D.; Maple, M. B.; Dzero, M.; Almasan, C. C.

    2018-05-01

We present results of specific heat, electrical resistance, and magnetoresistivity measurements on single crystals of the heavy-fermion superconducting alloy Ce0.91Yb0.09CoIn5. Non-Fermi-liquid to Fermi-liquid crossovers are clearly observed in the temperature dependence of the Sommerfeld coefficient γ and resistivity data. Furthermore, we show that the Yb-doped sample with x = 0.09 exhibits universality due to an underlying quantum phase transition without an applied magnetic field by utilizing the scaling analysis of γ. Fitting of the heat capacity and resistivity data based on existing theoretical models indicates that the zero-field quantum critical point is of antiferromagnetic origin. Finally, we found that at zero magnetic field the system undergoes a third-order phase transition at the temperature T_c3 ≈ 7 K.

  17. 40 CFR 89.324 - Calibration of other equipment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...

  18. 40 CFR 89.324 - Calibration of other equipment.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...

  19. 40 CFR 89.324 - Calibration of other equipment.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...

  20. 40 CFR 89.324 - Calibration of other equipment.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... and operation. Adjust the analyzer to optimize performance. (2) Zero the methane analyzer with zero...-fit straight line is 2 percent or less of the value at each non-zero data point and within ± 0.3 percent of full scale on the zero, concentration values may be calculated by use of a single calibration...

  1. A Hedonic Approach to Estimating Software Cost Using Ordinary Least Squares Regression and Nominal Attribute Variables

    DTIC Science & Technology

    2006-03-01

included zero, there is insufficient evidence to indicate that the error mean is not zero. The Breusch-Pagan test was used to test the constant... Multicollinearity... Testing OLS Assumptions... programming styles used by developers (Stamelos and others, 2003:733). Kemerer tested to see how models utilizing SLOC as an independent variable

  2. Identifying microturbulence regimes in a TCV discharge making use of physical constraints on particle and heat fluxes

    DOE PAGES

    Mariani, Alberto; Brunner, S.; Dominski, J.; ...

    2018-01-17

Reducing the uncertainty on physical input parameters derived from experimental measurements is essential towards improving the reliability of gyrokinetic turbulence simulations. This can be achieved by introducing physical constraints; amongst them, the zero particle flux condition is considered here. A first attempt is also made to match the experimental ion/electron heat flux ratio as well. This procedure is applied to the analysis of a particular Tokamak à Configuration Variable discharge. A detailed reconstruction of the zero particle flux hyper-surface in the multi-dimensional physical parameter space at a fixed time of the discharge is presented, including the effect of carbon as the main impurity. Both collisionless and collisional regimes are considered. Hyper-surface points within the experimental error bars are found. In conclusion, the analysis is carried out by performing gyrokinetic simulations with the local version of the GENE code, computing the fluxes with a Quasi-Linear (QL) model and validating the QL results with non-linear simulations in a subset of cases.

  3. Identifying microturbulence regimes in a TCV discharge making use of physical constraints on particle and heat fluxes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariani, Alberto; Brunner, S.; Dominski, J.

Reducing the uncertainty on physical input parameters derived from experimental measurements is essential towards improving the reliability of gyrokinetic turbulence simulations. This can be achieved by introducing physical constraints; amongst them, the zero particle flux condition is considered here. A first attempt is also made to match the experimental ion/electron heat flux ratio as well. This procedure is applied to the analysis of a particular Tokamak à Configuration Variable discharge. A detailed reconstruction of the zero particle flux hyper-surface in the multi-dimensional physical parameter space at a fixed time of the discharge is presented, including the effect of carbon as the main impurity. Both collisionless and collisional regimes are considered. Hyper-surface points within the experimental error bars are found. In conclusion, the analysis is carried out by performing gyrokinetic simulations with the local version of the GENE code, computing the fluxes with a Quasi-Linear (QL) model and validating the QL results with non-linear simulations in a subset of cases.

  4. Random-anisotropy model: Monotonic dependence of the coercive field on D/J

    NASA Astrophysics Data System (ADS)

    Saslow, W. M.; Koon, N. C.

    1994-02-01

We present the results of a numerical study of the zero-temperature remanence and coercivity for the random anisotropy model (RAM), showing that, contrary to early calculations for this model, the coercive field increases monotonically with increases in the strength D of the random anisotropy relative to the strength J of the exchange interaction. Local-field adjustments with and without spin flips are considered. Convergence is difficult to obtain for small values of the anisotropy, suggesting that this is the likely source of the nonmonotonic behavior found in earlier studies. For both large and small anisotropy, each spin undergoes about one flip per hysteresis cycle, and about half of the spin flips occur in the vicinity of the coercive field. When only non-spin-flip adjustments are considered, at large anisotropy the coercivity is proportional to the anisotropy. At small anisotropy, the rate of convergence is comparable to that when spin flips are included.

  5. Nonlinear unitary quantum collapse model with self-generated noise

    NASA Astrophysics Data System (ADS)

    Geszti, Tamás

    2018-04-01

    Collapse models including some external noise of unknown origin are routinely used to describe phenomena on the quantum-classical border; in particular, quantum measurement. Although containing nonlinear dynamics and thereby exposed to the possibility of superluminal signaling in individual events, such models are widely accepted on the basis of fully reproducing the non-signaling statistical predictions of quantum mechanics. Here we present a deterministic nonlinear model without any external noise, in which randomness—instead of being universally present—emerges in the measurement process, from deterministic irregular dynamics of the detectors. The treatment is based on a minimally nonlinear von Neumann equation for a Stern–Gerlach or Bell-type measuring setup, containing coordinate and momentum operators in a self-adjoint skew-symmetric, split scalar product structure over the configuration space. The microscopic states of the detectors act as a nonlocal set of hidden parameters, controlling individual outcomes. The model is shown to display pumping of weights between setup-defined basis states, with a single winner randomly selected and the rest collapsing to zero. Environmental decoherence has no role in the scenario. Through stochastic modelling, based on Pearle’s ‘gambler’s ruin’ scheme, outcome probabilities are shown to obey Born’s rule under a no-drift or ‘fair-game’ condition. This fully reproduces quantum statistical predictions, implying that the proposed non-linear deterministic model satisfies the non-signaling requirement. Our treatment is still vulnerable to hidden signaling in individual events, which remains to be handled by future research.

  6. End of Life Disposal for Three Libration Point Missions through Manipulation of the Jacobi Constant and Zero Velocity Curves

    NASA Technical Reports Server (NTRS)

    Peterson, Jeremy D.; Brown, Jonathan M.

    2015-01-01

The aim of this investigation is to determine the feasibility of mission disposal by inserting the spacecraft into a heliocentric orbit along the unstable manifold and then manipulating the Jacobi constant to prevent the spacecraft from returning to the Earth-Moon system. This investigation focuses on L1 orbits representative of ACE, WIND, and SOHO. It models the impulsive delta-V necessary to close the zero velocity curves after escape through the L1 gateway in the circular restricted three-body model and also includes full ephemeris force models and higher-fidelity finite-maneuver models for the three spacecraft.
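The gateway-closing idea rests on the Jacobi constant of the circular restricted three-body problem: motion is forbidden wherever the effective potential term falls below the spacecraft's Jacobi constant, and a delta-V changes that constant. The sketch below computes it in standard normalized rotating-frame coordinates; the mass ratio value and the sample point are illustrative assumptions, not mission data:

```python
import math

def jacobi_constant(x, y, vx, vy, mu):
    """Planar CR3BP Jacobi constant in the rotating frame.

    Primaries sit at (-mu, 0) and (1 - mu, 0); units are normalized so
    the primary separation and the mean motion are both 1.
    """
    r1 = math.hypot(x + mu, y)          # distance to the larger primary
    r2 = math.hypot(x - 1 + mu, y)      # distance to the smaller primary
    return x * x + y * y + 2 * (1 - mu) / r1 + 2 * mu / r2 - (vx * vx + vy * vy)

mu_earth_moon = 0.01215  # approximate Earth-Moon mass ratio (assumption)

# Zero-velocity curves are the loci where jacobi_constant(x, y, 0, 0, mu)
# equals the spacecraft's C; regions with a smaller potential term are
# energetically forbidden.
c_rest = jacobi_constant(0.5, 0.0, 0.0, 0.0, mu_earth_moon)
c_moving = jacobi_constant(0.5, 0.0, 0.1, 0.0, mu_earth_moon)
```

An impulsive delta-V that adds kinetic energy lowers C, which is how a maneuver can open or, in reverse, close the gateways in the zero-velocity curves.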

  7. Modeling of switching regulator power stages with and without zero-inductor-current dwell time

    NASA Technical Reports Server (NTRS)

    Lee, F. C. Y.; Yu, Y.

    1979-01-01

    State-space techniques are employed to derive accurate models for the three basic switching converter power stages: buck, boost, and buck/boost operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without dwell time as a special case of the discontinuous-current mode when the dwell time vanishes. Abrupt changes of system behavior, including a reduction of the system order when the dwell time appears, are shown both analytically and experimentally. Merits resulting from the present modeling technique in comparison with existing modeling techniques are illustrated.

  8. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero

    NASA Astrophysics Data System (ADS)

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2016-11-01

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a “perfect glass”. A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite.

  9. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero.

    PubMed

    Zhang, G; Stillinger, F H; Torquato, S

    2016-11-28

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a "perfect glass". A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite.

  10. The Perfect Glass Paradigm: Disordered Hyperuniform Glasses Down to Absolute Zero

    PubMed Central

    Zhang, G.; Stillinger, F. H.; Torquato, S.

    2016-01-01

    Rapid cooling of liquids below a certain temperature range can result in a transition to glassy states. The traditional understanding of glasses includes their thermodynamic metastability with respect to crystals. However, here we present specific examples of interactions that eliminate the possibilities of crystalline and quasicrystalline phases, while creating mechanically stable amorphous glasses down to absolute zero temperature. We show that this can be accomplished by introducing a new ideal state of matter called a “perfect glass”. A perfect glass represents a soft-interaction analog of the maximally random jammed (MRJ) packings of hard particles. These latter states can be regarded as the epitome of a glass since they are out of equilibrium, maximally disordered, hyperuniform, mechanically rigid with infinite bulk and shear moduli, and can never crystallize due to configuration-space trapping. Our model perfect glass utilizes two-, three-, and four-body soft interactions while simultaneously retaining the salient attributes of the MRJ state. These models constitute a theoretical proof of concept for perfect glasses and broaden our fundamental understanding of glass physics. A novel feature of equilibrium systems of identical particles interacting with the perfect-glass potential at positive temperature is that they have a non-relativistic speed of sound that is infinite. PMID:27892452

  11. Characterization of binary string statistics for syntactic landmine detection

    NASA Astrophysics Data System (ADS)

    Nasif, Ahmed O.; Mark, Brian L.; Hintz, Kenneth J.

    2011-06-01

Syntactic landmine detection has been proposed to detect and classify non-metallic landmines using ground penetrating radar (GPR). In this approach, the GPR return is processed to extract characteristic binary strings for landmine and clutter discrimination. In our previous work, we discussed the preprocessing methodology by which the amplitude information of the GPR A-scan signal can be effectively converted into binary strings, which identify the impedance discontinuities in the signal. In this work, we study the statistical properties of the binary string space. In particular, we develop a Markov chain model to characterize the observed bit sequence of the binary strings. The state is defined as the number of consecutive zeros between two ones in the binarized A-scans. Since the strings are highly sparse (the number of zeros is much greater than the number of ones), defining the state this way leads to a smaller number of states compared to the case where each bit is defined as a state. The total number of states is further reduced by quantizing the number of consecutive zeros. In order to identify the correct order of the Markov model, the mean square difference (MSD) between the transition matrices of mine strings and non-mine strings is calculated up to order four using training data. The results show that order one or two maximizes this MSD. The specification of the transition probabilities of the chain can be used to compute the likelihood of any given string. Such a model can be used to identify characteristic landmine strings during the training phase. These developments in modeling and characterizing the string statistics can potentially be part of a real-time landmine detection algorithm that identifies landmines and clutter in an adaptive fashion.
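The run-length state construction and the counting estimate of the transition probabilities can be sketched directly. This is an illustrative reimplementation under assumed conventions (runs longer than `max_run` quantized into a single state), not the authors' code:

```python
from collections import Counter, defaultdict

def zero_run_states(bits, max_run=3):
    """Map a sparse binary string to a state sequence, where each state is
    the (quantized) number of consecutive zeros preceding a one."""
    runs, count = [], 0
    for b in bits:
        if b == 0:
            count += 1
        else:
            runs.append(min(count, max_run))  # quantize long runs together
            count = 0
    return runs

def transition_matrix(states):
    """First-order Markov transition probabilities estimated by counting."""
    counts = defaultdict(Counter)
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

bits = [0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1]
states = zero_run_states(bits)   # run lengths between ones: [2, 2, 1, 3]
P = transition_matrix(states)
```

With the transition probabilities in hand, the log-likelihood of any new string is a sum of log P[a][b] terms, which is the quantity the detection stage would threshold.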

  12. Nuclear sensor signal processing circuit

    DOEpatents

    Kallenbach, Gene A [Bosque Farms, NM; Noda, Frank T [Albuquerque, NM; Mitchell, Dean J [Tijeras, NM; Etzkin, Joshua L [Albuquerque, NM

    2007-02-20

    An apparatus and method are disclosed for a compact and temperature-insensitive nuclear sensor that can be calibrated with a non-hazardous radioactive sample. The nuclear sensor includes a gamma ray sensor that generates tail pulses from radioactive samples. An analog conditioning circuit conditions the tail-pulse signals from the gamma ray sensor, and a tail-pulse simulator circuit generates a plurality of simulated tail-pulse signals. A computer system processes the tail pulses from the gamma ray sensor and the simulated tail pulses from the tail-pulse simulator circuit. The nuclear sensor is calibrated under the control of the computer. The offset is adjusted using the simulated tail pulses. Since the offset is set to zero or near zero, the sensor gain can be adjusted with a non-hazardous radioactive source such as, for example, naturally occurring radiation and potassium chloride.

  13. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
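The quantity at stake can be illustrated with a crude diagnostic: compare the observed fraction of zeros with the fraction exp(-mean) that a Poisson model with the same mean would predict. This is only a back-of-the-envelope check, not the formal test developed in the paper:

```python
import math
import random

def zero_excess(counts):
    """Observed zero fraction minus the Poisson-predicted zero fraction
    exp(-mean). A large positive gap suggests zero inflation. Crude
    diagnostic only; it has no calibrated type I error."""
    n = len(counts)
    mean = sum(counts) / n
    observed = sum(1 for c in counts if c == 0) / n
    return observed - math.exp(-mean)

def rpois(lam):
    """Poisson sampling via Knuth's product-of-uniforms method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(0)
lam = 2.0
pure = [rpois(lam) for _ in range(5000)]
inflated = [0 if random.random() < 0.3 else rpois(lam) for _ in range(5000)]
```

On the pure Poisson sample the gap hovers near zero, while the 30%-inflated sample shows a clearly positive gap, which is the pattern a formal test would quantify.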

  14. Competition between Chaotic and Nonchaotic Phases in a Quadratically Coupled Sachdev-Ye-Kitaev Model.

    PubMed

    Chen, Xin; Fan, Ruihua; Chen, Yiming; Zhai, Hui; Zhang, Pengfei

    2017-11-17

    The Sachdev-Ye-Kitaev (SYK) model is a concrete solvable model to study non-Fermi liquid properties, holographic duality, and maximally chaotic behavior. In this work, we consider a generalization of the SYK model that contains two SYK models with a different number of Majorana modes coupled by quadratic terms. This model is also solvable, and the solution shows a zero-temperature quantum phase transition between two non-Fermi liquid chaotic phases. This phase transition is driven by tuning the ratio of two mode numbers, and a nonchaotic Fermi liquid sits at the critical point with an equal number of modes. At a finite temperature, the Fermi liquid phase expands to a finite regime. More intriguingly, a different non-Fermi liquid phase emerges at a finite temperature. We characterize the phase diagram in terms of the spectral function, the Lyapunov exponent, and the entropy. Our results illustrate a concrete example of the quantum phase transition and critical behavior between two non-Fermi liquid phases.

  15. Estimating zero strain states of very soft tissue under gravity loading using digital image correlation

    PubMed Central

    Gao, Zhan; Desai, Jaydev P.

    2009-01-01

This paper presents several experimental techniques and concepts in the process of measuring mechanical properties of very soft tissue in an ex vivo tensile test. Gravitational body force on very soft tissue causes pre-compression and results in a non-uniform initial deformation. The global Digital Image Correlation technique is used to measure the full field deformation behavior of liver tissue in uniaxial tension testing. A maximum stretching band is observed in the incremental strain field when a region of tissue passes from compression and enters a state of tension. A new method for estimating the zero strain state is proposed: the zero strain position is close to, but ahead of, the position of the maximum stretching band, or in other words, the tangent of a nominal stress-stretch curve reaches its minimum at λ ≳ 1. The approach of identifying zero strain using the maximum incremental strain can be implemented in other types of image-based soft tissue analysis. The experimental results of ten samples from seven porcine livers are presented and material parameters for the Ogden model fit are obtained. The finite element simulation based on the fitted model confirms the effect of gravity on the deformation of very soft tissue and validates our approach. PMID:20015676

  16. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    PubMed Central

    Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.

    2017-01-01

Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. Most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, with one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
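The correlation-based stability baseline can be sketched as a split-half scheme: score each feature by how consistently it responds to the same stimuli across two repeated presentations. The details here (exactly two repetitions, Pearson correlation across stimuli, synthetic data) are assumptions for illustration, not the study's pipeline:

```python
import random

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def stability(rep_a, rep_b):
    """Score each feature by the correlation, across stimuli, of its
    responses in two repeated presentations; stable (stimulus-driven)
    features respond consistently to the same stimuli."""
    n_feat = len(rep_a[0])
    return [pearson([row[f] for row in rep_a], [row[f] for row in rep_b])
            for f in range(n_feat)]

random.seed(2)
stimuli = list(range(30))

def record(s):
    # feature 0: stimulus-driven (stable); feature 1: pure noise (unstable)
    return [s + random.gauss(0, 1), random.gauss(0, 5)]

rep_a = [record(s) for s in stimuli]
rep_b = [record(s) for s in stimuli]
scores = stability(rep_a, rep_b)
```

Selection then keeps the top-scoring features; the point of the study is that alternatives to this stability ranking can reach the same accuracy with fewer features.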

  17. Zero-inflated Poisson model based likelihood ratio test for drug safety signal detection.

    PubMed

    Huang, Lan; Zheng, Dan; Zalkikar, Jyoti; Tiwari, Ram

    2017-02-01

In recent decades, numerous methods have been developed for data mining of large drug safety databases, such as the Food and Drug Administration's (FDA's) Adverse Event Reporting System, where data matrices are formed with drugs as columns and adverse events as rows. Often, a large number of cells in these data matrices have zero counts. Some of these are "true zeros", indicating that the drug-adverse event pair cannot occur; the rest are modeled zero counts, which simply indicate that the pair has not occurred, or has not been reported, yet. In this paper, a zero-inflated Poisson model based likelihood ratio test method is proposed to identify drug-adverse event pairs that have disproportionately high reporting rates, which are also called signals. The maximum likelihood estimates of the model parameters of the zero-inflated Poisson model based likelihood ratio test are obtained using the expectation and maximization algorithm. The zero-inflated Poisson model based likelihood ratio test is also modified to handle stratified analyses for binary and categorical covariates (e.g. gender and age) in the data. The proposed zero-inflated Poisson model based likelihood ratio test method is shown to asymptotically control the type I error and false discovery rate, and its finite sample performance for signal detection is evaluated through a simulation study. The simulation results show that the zero-inflated Poisson model based likelihood ratio test method performs similarly to the Poisson model based likelihood ratio test method when the estimated percentage of true zeros in the database is small. Both the zero-inflated Poisson model based likelihood ratio test and likelihood ratio test methods are applied to six selected drugs, from the 2006 to 2011 Adverse Event Reporting System database, with varying percentages of observed zero-count cells.
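The expectation-maximization step for a zero-inflated Poisson can be illustrated in a bare-bones form: with probability pi a count is a structural zero, otherwise it is Poisson(lam). This minimal sketch omits covariates, stratification, and the likelihood ratio test itself, all of which the paper adds on top:

```python
import math

def zip_em(counts, iters=200):
    """EM fit of a zero-inflated Poisson mixture (no covariates).

    Returns (pi, lam): the structural-zero probability and the Poisson
    mean of the count component.
    """
    n = len(counts)
    n0 = sum(1 for c in counts if c == 0)
    total = sum(counts)
    pi, lam = 0.5, max(total / n, 1e-9)  # crude starting values
    for _ in range(iters):
        # E-step: posterior probability that an observed zero is structural
        # (identical for every zero in this covariate-free model)
        z = pi / (pi + (1 - pi) * math.exp(-lam))
        # M-step: update the mixing weight and the Poisson mean
        pi = n0 * z / n
        lam = total / (n - n0 * z)
    return pi, lam

# Synthetic data: 300 structural zeros plus 700 counts whose frequencies
# approximate Poisson(2), so the fit should recover pi ~ 0.3, lam ~ 2
counts = ([0] * 300 + [0] * 95 + [1] * 189 + [2] * 189
          + [3] * 126 + [4] * 63 + [5] * 25 + [6] * 8 + [7] * 3 + [8] * 2)
```

In the paper's setting, this estimation would be run per drug-adverse event stratum and the fitted likelihoods compared in a likelihood ratio statistic.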

  18. The Witness-Voting System

    NASA Astrophysics Data System (ADS)

    Gerck, Ed

    We present a new, comprehensive framework to qualitatively improve election outcome trustworthiness, where voting is modeled as an information transfer process. Although voting is deterministic (all ballots are counted), information is treated stochastically using Information Theory. Error considerations, including faults, attacks, and threats by adversaries, are explicitly included. The influence of errors may be corrected to achieve an election outcome error as close to zero as desired (error-free), with a provably optimal design that is applicable to any type of voting, with or without ballots. Sixteen voting system requirements, including functional, performance, environmental and non-functional considerations, are derived and rated, meeting or exceeding current public-election requirements. The voter and the vote are unlinkable (secret ballot) although each is identifiable. The Witness-Voting System (Gerck, 2001) is extended as a conforming implementation of the provably optimal design that is error-free, transparent, simple, scalable, robust, receipt-free, universally-verifiable, 100% voter-verified, and end-to-end audited.

  19. Inverse scattering transform and soliton solutions for square matrix nonlinear Schrödinger equations with non-zero boundary conditions

    NASA Astrophysics Data System (ADS)

    Prinari, Barbara; Demontis, Francesco; Li, Sitai; Horikis, Theodoros P.

    2018-04-01

The inverse scattering transform (IST) with non-zero boundary conditions at infinity is developed for an m × m matrix nonlinear Schrödinger-type equation which, in the case m = 2, has been proposed as a model to describe hyperfine spin F = 1 spinor Bose-Einstein condensates with either repulsive interatomic interactions and anti-ferromagnetic spin-exchange interactions (self-defocusing case), or attractive interatomic interactions and ferromagnetic spin-exchange interactions (self-focusing case). The IST for this system was first presented by Ieda et al. (2007), using a different approach. In our formulation, both the direct and the inverse problems are posed in terms of a suitable uniformization variable which allows the IST to be developed on the standard complex plane, instead of a two-sheeted Riemann surface or the cut plane with discontinuities along the cuts. Analyticity of the scattering eigenfunctions and scattering data, symmetries, properties of the discrete spectrum, and asymptotics are derived. The inverse problem is posed as a Riemann-Hilbert problem for the eigenfunctions, and the reconstruction formula of the potential in terms of eigenfunctions and scattering data is provided. In addition, the general behavior of the soliton solutions is analyzed in detail in the 2 × 2 self-focusing case, including some special solutions not previously discussed in the literature.

  20. Reflection and Non-Reflection of Particle Wavepackets

    ERIC Educational Resources Information Center

    Cox, Timothy; Lekner, John

    2008-01-01

Exact closed-form solutions of the time-dependent Schrodinger equation are obtained, describing the propagation of wavepackets in the neighbourhood of a potential. Examples given include zero reflection, total reflection and partial reflection of the wavepacket, for the sech²(x/a), 1/x² and delta(x) potentials,…

  1. Symmetron and de Sitter attractor in a teleparallel model of cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadjadi, H. Mohseni, E-mail: mohsenisad@ut.ac.ir

In the teleparallel framework of cosmology, a quintessence with non-minimal couplings to the scalar torsion and a boundary term is considered. A conformal coupling to the matter density is also taken into account. It is shown that the model can describe the onset of cosmic acceleration after a matter-dominated era, in which dark energy is negligible, via Z₂ symmetry breaking. While the conformal coupling holds the Universe in a state with zero dark energy density in the early epoch, the non-minimal couplings lead the Universe to a stable state with de Sitter expansion at late times.

  2. Rational group decision making: A random field Ising model at T = 0

    NASA Astrophysics Data System (ADS)

    Galam, Serge

    1997-02-01

A modified version of a finite random field Ising ferromagnetic model in an external magnetic field at zero temperature is presented to describe group decision making. Fields may have a non-zero average. A postulate of minimum inter-individual conflict is assumed. Interactions then produce a group polarization along one particular choice, which is, however, randomly selected. A small external social pressure is shown to have a drastic effect on the polarization. Individual biases related to personal backgrounds, cultural values and past experiences are introduced via quenched local competing fields. They are shown to be instrumental in generating a larger spectrum of collective new choices beyond the initial ones. In particular, compromise is found to result from the existence of competing individual biases. Conflict is shown to weaken group polarization. The model yields new psychosociological insights about consensus and compromise in groups.
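A minimal sketch of this model class (illustrative parameters; not Galam's exact formulation): at zero temperature each agent repeatedly aligns with its local field, composed of the mean-field coupling to the group, a quenched personal bias, and the external social pressure, until a conflict-free configuration is reached.

```python
import numpy as np

rng = np.random.default_rng(2)
N, J, H = 50, 1.0, 0.05        # group size, coupling strength, small social pressure
h = rng.normal(0.0, 0.5, N)    # quenched individual biases (random fields)

def relax(s):
    """Zero-temperature dynamics: sequentially flip agents misaligned with their local field."""
    s = s.copy()
    while True:
        local = J * (s.sum() - s) / N + h + H   # mean-field local field on each agent
        mis = s * local < 0                     # agents in conflict with their local field
        if not mis.any():
            return s                            # fixed point: minimal inter-individual conflict
        s[np.flatnonzero(mis)[0]] *= -1         # each flip strictly lowers the energy

s = relax(rng.choice([-1, 1], N))
m = s.mean()                    # group polarization
```

The fixed point reached depends on the initial configuration and on the quenched fields, which is how such a model produces a spectrum of collective choices rather than a single deterministic outcome.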

  3. 40 CFR 1065.362 - Non-stoichiometric raw exhaust FID O2 interference verification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... air source during testing, use zero air as the FID burner's air source for this verification. (4) Zero the FID analyzer using the zero gas used during emission testing. (5) Span the FID analyzer using a span gas that you use during emission testing. (6) Check the zero response of the FID analyzer using...

  4. A new modelling and identification scheme for time-delay systems with experimental investigation: a relay feedback approach

    NASA Astrophysics Data System (ADS)

    Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit

    2017-07-01

In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to produce sustained oscillations in the output around a non-zero setpoint. Thereafter, the obtained limit cycle information is substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped and critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for validation of the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn through the integral absolute error criterion and frequency response plots. Finally, the obtained output responses through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 set up.

  5. Chirp-free optical return-to-zero modulation based on a single microring resonator.

    PubMed

    Sun, Lili; Ye, Tong; Wang, Xiaowen; Zhou, Linjie; Chen, Jianping

    2012-03-26

This paper proposes a chirp-free optical return-to-zero (RZ) modulator using a double coupled microring resonator. Optical RZ modulation is achieved by applying a clock (CLK) driving signal to the input coupling region and a non-return-to-zero (NRZ) driving signal to the output coupling region. Static and time-domain coupled-mode theory (CMT) based dynamic analyses are performed to theoretically investigate its performance in RZ modulation. The criteria to realize RZ modulation are deduced. Various RZ modulation formats, including RZ phase-shift-keying (RZ-PSK), carrier-suppressed RZ (CSRZ), and RZ intensity modulation formats, can be implemented by using CLK and NRZ signals with different combinations of polarities. Numerical simulations are performed and the feasibility of our modulator at 10 Gbit/s for the multiple RZ modulation formats is verified.
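At the waveform level (leaving aside the microring physics and the CMT analysis), RZ formation amounts to gating an NRZ data stream with a clock pulse train; a minimal sketch with illustrative parameters:

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 1])   # illustrative data pattern
spb = 8                               # samples per bit slot

t = np.arange(len(bits) * spb)
nrz = np.repeat(bits, spb)            # NRZ drive: holds its level over the whole slot
clk = (t % spb) < spb // 2            # 50%-duty clock pulse train
rz = nrz * clk                        # gated output: RZ stream

print(rz.reshape(len(bits), -1)[:, -1].max())   # → 0 (RZ returns to zero in every slot)
print(nrz.reshape(len(bits), -1)[:, -1].max())  # → 1 (NRZ does not)
```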

  6. Non-driving intersegmental knee moments in cycling computed using a model that includes three-dimensional kinematics of the shank/foot and the effect of simplifying assumptions.

    PubMed

    Gregersen, Colin S; Hull, M L

    2003-06-01

    Assessing the importance of non-driving intersegmental knee moments (i.e. varus/valgus and internal/external axial moments) on over-use knee injuries in cycling requires the use of a three-dimensional (3-D) model to compute these loads. The objectives of this study were: (1) to develop a complete, 3-D model of the lower limb to calculate the 3-D knee loads during pedaling for a sample of the competitive cycling population, and (2) to examine the effects of simplifying assumptions on the calculations of the non-driving knee moments. The non-driving knee moments were computed using a complete 3-D model that allowed three rotational degrees of freedom at the knee joint, included the 3-D inertial loads of the shank/foot, and computed knee loads in a shank-fixed coordinate system. All input data, which included the 3-D segment kinematics and the six pedal load components, were collected from the right limb of 15 competitive cyclists while pedaling at 225 W and 90 rpm. On average, the peak varus and internal axial moments of 7.8 and 1.5 N m respectively occurred during the power stroke whereas the peak valgus and external axial moments of 8.1 and 2.5 N m respectively occurred during the recovery stroke. However, the non-driving knee moments were highly variable between subjects; the coefficients of variability in the peak values ranged from 38.7% to 72.6%. When it was assumed that the inertial loads of the shank/foot for motion out of the sagittal plane were zero, the root-mean-squared difference (RMSD) in the non-driving knee moments relative to those for the complete model was 12% of the peak varus/valgus moment and 25% of the peak axial moment. When it was also assumed that the knee joint was revolute with the flexion/extension axis perpendicular to the sagittal plane, the RMSD increased to 24% of the peak varus/valgus moment and 204% of the peak axial moment. 
Thus, the 3-D orientation of the shank segment has a major effect on the computation of the non-driving knee moments, while the inertial contributions to these loads for motions out of the sagittal plane are less important.

  7. Targeting zero non-attendance in healthcare clinics.

    PubMed

    Chan, Ka C; Chan, David B

    2012-01-01

Non-attendance represents a significant cost to many health systems, resulting in inefficiency, wasted resources, poorer service delivery and lengthened waiting queues. Past studies have considered extensively the reasons for non-attendance and have generally concluded that the use of reminder systems is effective. Despite this, there will always be a certain level of non-attendance arising from unforeseeable and unpreventable circumstances, such as illness or accidents, leading to unfilled appointments. This paper reviews current approaches to the non-attendance problem, and presents a high-level approach to fill last minute appointments arising out of unforeseeable non-attendance. However, no single approach will work for all clinics and implementation of these ideas must occur at a local level. These approaches include use of social networks, such as Twitter and Facebook, as a communication tool in order to notify prospective patients when last-minute appointments become available. In addition, teleconsultation using video-conferencing technologies would be suitable for certain last-minute appointments where travel time would otherwise be inhibiting. The development of new and innovative technologies and the increasing power of social media mean that zero non-attendance is now an achievable target. We hope that this will lead to more evidence-based evaluations from the implementation of these strategies in various settings at a local level.

  8. Extra compressibility terms for Favre-averaged two-equation models of inhomogeneous turbulent flows

    NASA Technical Reports Server (NTRS)

    Rubesin, Morris W.

    1990-01-01

Forms of extra-compressibility terms that result from use of Favre averaging of the turbulence transport equations for kinetic energy and dissipation are derived. These forms introduce three new modeling constants: a polytropic coefficient that defines the interrelationships of the pressure, density, and enthalpy fluctuations, and two constants in the dissipation equation that account for the non-zero pressure-dilatation and mean pressure gradients.

  9. Wave fluctuations in the system with some Yang-Mills condensates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prokhorov, G., E-mail: zhoraprox@yandex.ru; Pasechnik, R., E-mail: Roman.Pasechnik@thep.lu.se; Vereshkov, G., E-mail: gveresh@gmail.com

    2016-12-15

The self-consistent dynamics of non-homogeneous fluctuations and a homogeneous, isotropic condensate of Yang-Mills fields was investigated in the zero, linear and quasilinear approximations over the wave modes, in the framework of the N = 4 supersymmetric model in the Hamilton gauge within quasiclassical theory. The models with SU(2), SU(3) and SU(4) gauge groups were considered. A particle production effect and the generation of longitudinal oscillations were obtained.

  10. Disease Mapping of Zero-excessive Mesothelioma Data in Flanders

    PubMed Central

    Neyens, Thomas; Lawson, Andrew B.; Kirby, Russell S.; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S.; Faes, Christel

    2016-01-01

Purpose: To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. Methods: The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero-inflation and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. Results: The results indicate that hurdle models with a random effects term accounting for extra-variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra-variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra-variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Conclusions: Models taking into account zero-inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. PMID:27908590

  11. Disease mapping of zero-excessive mesothelioma data in Flanders.

    PubMed

    Neyens, Thomas; Lawson, Andrew B; Kirby, Russell S; Nuyts, Valerie; Watjou, Kevin; Aregay, Mehreteab; Carroll, Rachel; Nawrot, Tim S; Faes, Christel

    2017-01-01

    To investigate the distribution of mesothelioma in Flanders using Bayesian disease mapping models that account for both an excess of zeros and overdispersion. The numbers of newly diagnosed mesothelioma cases within all Flemish municipalities between 1999 and 2008 were obtained from the Belgian Cancer Registry. To deal with overdispersion, zero inflation, and geographical association, the hurdle combined model was proposed, which has three components: a Bernoulli zero-inflation mixture component to account for excess zeros, a gamma random effect to adjust for overdispersion, and a normal conditional autoregressive random effect to attribute spatial association. This model was compared with other existing methods in literature. The results indicate that hurdle models with a random effects term accounting for extra variance in the Bernoulli zero-inflation component fit the data better than hurdle models that do not take overdispersion in the occurrence of zeros into account. Furthermore, traditional models that do not take into account excessive zeros but contain at least one random effects term that models extra variance in the counts have better fits compared to their hurdle counterparts. In other words, the extra variability, due to an excess of zeros, can be accommodated by spatially structured and/or unstructured random effects in a Poisson model such that the hurdle mixture model is not necessary. Models taking into account zero inflation do not always provide better fits to data with excessive zeros than less complex models. In this study, a simple conditional autoregressive model identified a cluster in mesothelioma cases near a former asbestos processing plant (Kapelle-op-den-Bos). This observation is likely linked with historical local asbestos exposures. Future research will clarify this. Copyright © 2016 Elsevier Inc. All rights reserved.
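For concreteness, the zero-inflated Poisson data-generating process that such comparisons revolve around can be sketched in a few lines (a minimal simulation with illustrative parameters, not the models fitted in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, pi = 10_000, 2.5, 0.3   # sample size, Poisson mean, structural-zero probability

# Zero-inflated Poisson: with probability pi an observation is a structural zero,
# otherwise it is drawn from the Poisson(lam) count component.
structural = rng.random(n) < pi
counts = np.where(structural, 0, rng.poisson(lam, n))

obs_zero_frac = (counts == 0).mean()
exp_zero_frac = pi + (1 - pi) * np.exp(-lam)   # structural zeros + sampling zeros
print(obs_zero_frac, exp_zero_frac)            # both ≈ 0.36
```

A hurdle model differs only in the count component: it pairs the zero mass with a zero-truncated count distribution, so all zeros are attributed to the Bernoulli part.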

  12. Silverton Conference on Applications of the Zero Gravity Space Shuttle Environment to Problems in Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Eisner, M. (Editor)

    1974-01-01

    The possible utilization of the zero gravity resource for studies in a variety of fluid dynamics and fluid-dynamic related problems was investigated. A group of experiments are discussed and described in detail; these include experiments in the areas of geophysical fluid models, fluid dynamics, mass transfer processes, electrokinetic separation of large particles, and biophysical and physiological areas.

  13. Optimal damping profile ratios for stabilization of perfectly matched layers in general anisotropic media

    DOE PAGES

    Gao, Kai; Huang, Lianjie

    2017-11-13

Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using non-zero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.
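The stabilization criterion underlying such an algorithm can be illustrated on a toy system matrix (hypothetical, not the paper's PML operator): sweep the damping-profile ratio upward until no eigenvalue of the damped system lies in the right half-plane, and keep the smallest such ratio, since larger ratios add artificial reflections.

```python
import numpy as np

# Toy stand-in for an unstable PML-type system matrix: oscillatory dynamics
# with a weak exponential instability (eigenvalues 0.2 ± 1j).
A0 = np.array([[0.2, -1.0],
               [1.0,  0.2]])
D = -np.eye(2)                        # extra damping, scaled by the ratio beta

def stable(beta, tol=1e-9):
    """Stability test: no eigenvalue of A0 + beta*D in the right half-plane."""
    return np.linalg.eigvals(A0 + beta * D).real.max() <= tol

# Smallest damping-profile ratio on a grid that stabilizes the toy system:
betas = np.linspace(0.0, 1.0, 101)
beta_min = next(b for b in betas if stable(b))
print(beta_min)   # → 0.2 on this grid
```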

  14. Optimal damping profile ratios for stabilization of perfectly matched layers in general anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Huang, Lianjie

Conventional perfectly matched layers (PML) can be unstable for certain kinds of anisotropic media. Multi-axial PML removes such instability using non-zero damping coefficients in the directions tangential to the PML interface. While using non-zero damping profile ratios can stabilize PML, it is important to obtain the smallest possible damping profile ratios to minimize artificial reflections caused by these non-zero ratios, particularly for 3D general anisotropic media. Using the eigenvectors of the PML system matrix, we develop a straightforward and efficient numerical algorithm to determine the optimal damping profile ratios to stabilize PML in 2D and 3D general anisotropic media. Numerical examples show that our algorithm provides optimal damping profile ratios to ensure the stability of PML and complex-frequency-shifted PML for elastic-wave modeling in 2D and 3D general anisotropic media.

  15. Zero-Field Ambient-Pressure Quantum Criticality in the Stoichiometric Non-Fermi Liquid System CeRhBi

    NASA Astrophysics Data System (ADS)

    Anand, Vivek K.; Adroja, Devashibhai T.; Hillier, Adrian D.; Shigetoh, Keisuke; Takabatake, Toshiro; Park, Je-Geun; McEwen, Keith A.; Pixley, Jedediah H.; Si, Qimiao

    2018-06-01

We present a spin dynamics study of the stoichiometric non-Fermi liquid (NFL) system CeRhBi, using low-energy inelastic neutron scattering (INS) and muon spin relaxation (μSR) measurements. It shows evidence for an energy-temperature (E/T) scaling in the INS dynamic response and a time-field (t/Hη) scaling of the μSR asymmetry function, indicating quantum critical behavior in this compound. The E/T scaling reveals a local character of the quantum criticality, consistent with the power-law divergence of the magnetic susceptibility, the logarithmic divergence of the magnetic heat capacity and the T-linear resistivity at low temperature. The occurrence of NFL behavior and local criticality over a very wide dynamical range at zero field and ambient pressure, without any tuning, in this stoichiometric heavy fermion compound is striking, making CeRhBi a model system amenable to in-depth studies of quantum criticality.

  16. A rigorous approach to the formulation of extended Born-Oppenheimer equation for a three-state system

    NASA Astrophysics Data System (ADS)

    Sarkar, Biplab; Adhikari, Satrajit

If a coupled three-state electronic manifold forms a sub-Hilbert space, it is possible to express the non-adiabatic coupling (NAC) elements in terms of adiabatic-diabatic transformation (ADT) angles. Consequently, we demonstrate that: (a) those explicit forms of the NAC terms satisfy the curl conditions with non-zero divergences; (b) the formulation of an extended Born-Oppenheimer (EBO) equation for any three-state BO system is possible only when there exists a coordinate-independent ratio of the gradients for each pair of ADT angles, leading to zero curls at and around the conical intersection(s). With these analytic advancements, we formulate a rigorous EBO equation and explore its validity as well as its necessity with respect to the approximate one (Sarkar and Adhikari, J Chem Phys 2006, 124, 074101) by performing numerical calculations on two different models constructed with different chosen forms of the NAC elements.

  17. Some Decays of Neutral Higgs Bosons in the NMSSM

    NASA Astrophysics Data System (ADS)

    Chinh Cuong, Nguyen; Thi Thu Trang, Do; Thi Phuong Thuy, Nguyen

    2014-09-01

To solve the μ problem of the Minimal Supersymmetric Standard Model (MSSM), a singlet field S is added to build the Next-to-Minimal Supersymmetric Standard Model (NMSSM). The enlarged vacuum is a combination of Hu, Hd and S, whose CP-even neutral components acquire non-zero vevs. In the NMSSM, the Higgs sector is enlarged to 7 Higgs bosons (compared with 5 in the MSSM): three CP-even Higgs h1,2,3 (mh1 < mh2 < mh3), two CP-odd Higgs a1,2 (ma1 < ma2) and a pair of charged Higgs H±. Higgs-to-Higgs decays are one of the remarkable new features of the NMSSM. In this paper we study some decays of neutral Higgs bosons. The numerical results are also presented together with evaluations.

  18. A Pearson Effective Potential for Monte Carlo Simulation of Quantum Confinement Effects in nMOSFETs

    NASA Astrophysics Data System (ADS)

    Jaud, Marie-Anne; Barraud, Sylvain; Saint-Martin, Jérôme; Bournel, Arnaud; Dollfus, Philippe; Jaouen, Hervé

    2008-12-01

A Pearson Effective Potential model for including quantization effects in the simulation of nanoscale nMOSFETs has been developed. This model, based on a realistic description of the function representing the non-zero size of the electron wave packet, has been used in a Monte Carlo simulator for bulk, single-gate SOI and double-gate SOI devices. In the case of SOI capacitors, the electron density has been computed for a large range of effective fields (between 0.1 MV/cm and 1 MV/cm) and for various silicon film thicknesses (between 5 nm and 20 nm). A good agreement with the Schroedinger-Poisson results is obtained both on the total inversion charge and on the electron density profiles. The ability of an Effective Potential approach to accurately reproduce electrostatic quantum confinement effects is clearly demonstrated.

  19. Blind identification of nonlinear models with non-Gaussian inputs

    NASA Astrophysics Data System (ADS)

    Prakriya, Shankar; Pasupathy, Subbarayan; Hatzinakos, Dimitrios

    1995-12-01

Some methods are proposed for the blind identification of finite-order discrete-time nonlinear models with non-Gaussian circular inputs. The nonlinear models consist of two finite-memory linear time-invariant (LTI) filters separated by a zero-memory nonlinearity (ZMNL) of the polynomial type (the LTI-ZMNL-LTI models). The linear subsystems are allowed to be of non-minimum phase (NMP). The methods base their estimates of the impulse responses on slices of the (N+1)th-order polyspectra of the output sequence. It is shown that the identification of LTI-ZMNL systems requires only a 1-D moment or polyspectral slice. The coefficients of the ZMNL are not estimated, and need not be known. The order of the nonlinearity can, in theory, be estimated from the received signal. These methods possess several noise and interference suppression characteristics, and have applications in modeling nonlinearly amplified QAM/QPSK signals in digital satellite and microwave communications.

  20. GMD Coupling to Power Systems and Disturbance Mitigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivera, Michael Kelly; Bent, Russell Whitford

Presentation includes slides on Geomagnetic Disturbance: Ground Fields; Geomagnetic Disturbance: Coupling to Bulk Electric System; Geomagnetic Disturbance: Transformers; GMD Assessment Workflow (TPL-007-1); FERC Order 830; Goals; SuperMag (1 min data) Nov. 20-21, 2003 Storm (DST = -422); Spherical Harmonics; Spherical Harmonics Nov. 20-21, 2003 Storm (DST = -422); DST vs HN0,0; Fluctuations vs. DST; Fluctuations; Conclusions and Next Steps; GMD Assessment Workflow (TPL-007-1); EMP E3 Coupling to Texas 2000 Bus Model; E3 Coupling Comparison (total GIC) Varying Ground Zero; E3 Coupling Comparison (total MVAR) Varying Ground Zero; E3 Coupling Comparison (GIC) at Peak Ground Zero; and Conclusion.

  1. Effect of solution non-ideality on erythrocyte volume regulation.

    PubMed

    Levin, R L; Cravalho, E G; Huggins, C E

    1977-03-01

A non-ideal, hydrated, non-dilute pseudo-binary salt-protein-water solution model of the erythrocyte intracellular solution is presented to describe the osmotic behavior of human erythrocytes. Existing experimental activity data for salts and proteins in aqueous solutions are used to formulate van Laar type expressions for the solvent and solute activity coefficients. Reasonable estimates can therefore be made of the non-ideality of the erythrocyte intracellular solution over a wide range of osmolalities. Solution non-ideality is shown to significantly affect the degree of solute polarization within the erythrocyte intracellular solution during freezing. However, the non-ideality has very little effect upon the amount of water retained within erythrocytes cooled to sub-zero temperatures.

  2. Frequency distribution of Echinococcus multilocularis and other helminths of foxes in Kyrgyzstan

    PubMed Central

Ziadinov, I.; Deplazes, P.; Mathis, A.; Mutunova, B.; Abdykerimov, K.; Nurgaziev, R.; Torgerson, P.R.

    2010-01-01

Echinococcosis is a major emerging zoonosis in central Asia. A study of the helminth fauna of foxes from Naryn Oblast in central Kyrgyzstan was undertaken to investigate the abundance of Echinococcus multilocularis in a district where a high prevalence of this parasite had previously been detected in dogs. A total of 151 foxes (Vulpes vulpes) were investigated in a necropsy study. Of these, 96 (64%) were infected with E. multilocularis with a mean abundance of 8669 parasites per fox. This indicates that red foxes are a major definitive host of E. multilocularis in this country. This also demonstrates that the abundance and prevalence of E. multilocularis in the natural definitive host are likely to be high in geographical regions where there is a concomitant high prevalence in alternative definitive hosts such as dogs. In addition, Mesocestoides spp., Dipylidium caninum, Taenia spp., Toxocara canis, Toxascaris leonina, Capillaria and Acanthocephala spp. were found in 99 (66%), 50 (33%), 48 (32%), 46 (30%), 9 (6%), 34 (23%) and 2 (1%) of foxes, respectively. The prevalence but not the abundance of E. multilocularis decreased with age. The abundance of Dipylidium caninum also decreased with age. The frequency distribution of E. multilocularis and Mesocestoides spp. followed a zero-inflated negative binomial distribution, whilst all other helminths had a negative binomial distribution. This demonstrates that the frequency distribution of positive counts, and not just the frequency of zeros in the data set, can determine whether a zero-inflated or non-zero-inflated model is more appropriate. This is because the prevalences of E. multilocularis and Mesocestoides spp. were the highest (and hence had the fewest zero counts), yet the parasite distribution nevertheless gave a better fit to the zero-inflated models. PMID:20434845
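The role of zero counts can be made concrete with a small scipy-based simulation (parameters are illustrative, not fitted to the fox data): a zero-inflated negative binomial mixes a point mass at zero with an ordinary NB count distribution, so the observed zero fraction exceeds what the NB component alone implies.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, r, p, pi = 20_000, 0.5, 0.1, 0.25   # sample size, NB shape/prob, structural-zero prob.

# Zero-inflated negative binomial: structural zeros mixed with NB counts.
nb_draws = stats.nbinom.rvs(r, p, size=n, random_state=rng)
counts = np.where(rng.random(n) < pi, 0, nb_draws)

nb_zero = stats.nbinom.pmf(0, r, p)        # zeros predicted by the NB component alone
zinb_zero = pi + (1 - pi) * nb_zero        # zeros predicted by the mixture
print(nb_zero, zinb_zero, (counts == 0).mean())
```

As the abstract notes, whether the zero-inflated variant fits better is ultimately judged from the whole frequency distribution of counts, not from the raw zero fraction alone.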

  3. Unbounded number of channel uses may be required to detect quantum capacity.

    PubMed

    Cubitt, Toby; Elkouss, David; Matthews, William; Ozols, Maris; Pérez-García, David; Strelchuk, Sergii

    2015-03-31

Transmitting data reliably over noisy communication channels is one of the most important applications of information theory, and is well understood for channels modelled by classical physics. However, when quantum effects are involved, we do not know how to compute channel capacities. This is because the formula for the quantum capacity involves maximizing the coherent information over an unbounded number of channel uses. In fact, entanglement across channel uses can even increase the coherent information from zero to non-zero. Here we study the number of channel uses necessary to detect positive coherent information. In all previously known examples, two channel uses already sufficed. It might be that only a finite number of channel uses is always sufficient. We show that this is not the case: for any number of uses, there are channels for which the coherent information is zero, but which nonetheless have capacity.

  4. Frequency Preference Response to Oscillatory Inputs in Two-dimensional Neural Models: A Geometric Approach to Subthreshold Amplitude and Phase Resonance.

    PubMed

    Rotstein, Horacio G

    2014-01-01

    We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. 
The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
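A minimal numerical illustration of both phenomena in a generic two-variable linear model (hypothetical parameters, not tied to a specific neuron): the impedance amplitude peaks at a non-zero frequency (amplitude resonance), and the phase crosses zero at a non-zero frequency (phase resonance).

```python
import numpy as np

# Generic linearized two-variable model:  v' = -a*v - b*w + I(t),  tau*w' = v - w
a, b, tau = 0.1, 2.0, 10.0      # illustrative conductance-like and time-scale parameters

def impedance(omega):
    """Complex impedance V/I for sinusoidal input exp(i*omega*t)."""
    return 1.0 / ((1j * omega + a) + b / (1.0 + 1j * omega * tau))

omega = np.linspace(0.0, 3.0, 3001)
Z = impedance(omega)

f_res = omega[np.argmax(np.abs(Z))]                        # amplitude-resonant frequency
f_phase = omega[np.argmin(np.abs(np.angle(Z[1:]))) + 1]    # non-zero zero-phase frequency
print(f_res, f_phase)   # both strictly positive: resonance and phase resonance
```

Note that the matrix of this system has complex eigenvalues, but, as the abstract emphasizes, the resonant peak is shaped by the interplay of the intrinsic time scales (here a, b, tau) with the input frequency rather than by the eigenvalues alone.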

  5. Switchable and non-switchable zero backscattering of dielectric nano-resonators

    DOE PAGES

    Wang, Feng; Wei, Qi -Huo; Htoon, Han

    2015-02-27

Previous studies have shown that two-dimensional (2D) arrays of high-permittivity dielectric nanoparticles are capable of fully suppressing backward light scattering when the resonant frequencies of the electric and magnetic dipolar modes are coincident. In this paper, we numerically demonstrate that the zero-backscattering of 2D Si nanocuboid arrays can be engineered to be switchable or non-switchable in response to a variation in the environmental refractive index. For each cuboid width/length, there exist certain cuboid heights and orthogonal periodicity ratios for which the electric and magnetic resonances exhibit similar spectral widths and equivalent sensitivities to environmental index changes, so that the zero-backscattering is non-switchable upon environmental change. For some other cuboid heights and certain anisotropic periodicity ratios, the electric and magnetic modes exhibit different sensitivities to environmental index changes, making the zero-backscattering sensitive to environmental changes. We also show that by using two different types of nano-resonators in the unit cell, Fano resonances can be introduced to greatly enhance the switching sensitivity of the zero-backscattering.

  6. Analytic model for ultrasound energy receivers and their optimal electric loads

    NASA Astrophysics Data System (ADS)

    Gorostiaga, M.; Wapler, M. C.; Wallrabe, U.

    2017-08-01

    In this paper, we present an analytic model for thickness-resonating plate ultrasound energy receivers, which we have derived from the piezoelectric and wave equations and in which we have included dielectric, viscous, and acoustic attenuation losses. We then explore the optimal electric load predictions of the zero-reflection and power-maximization approaches present in the literature under different acoustic boundary conditions, and discuss their limitations. To validate our model, we compared our expressions with the KLM model solved numerically, with very good agreement. Finally, we discuss the differences between the zero-reflection and power-maximization optimal electric loads, which start to differ as losses in the receiver increase.
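
    The distinction between zero-reflection and power-maximization loads has a simple lumped-circuit analogue: for a source with complex internal impedance, the conjugate-matched load maximizes delivered power, while the equal (reflectionless) load does not once the reactive part is non-zero. A sketch with assumed impedance values, not taken from the paper's receiver model:

```python
def load_power(Zs, ZL, V=1.0):
    """Average power delivered to load ZL by a source of amplitude V
    with internal impedance Zs (simple voltage-divider circuit)."""
    I = V / (Zs + ZL)
    return 0.5 * abs(I) ** 2 * ZL.real

Zs = complex(50.0, 30.0)                       # hypothetical source impedance
p_conjugate = load_power(Zs, Zs.conjugate())   # power-maximization load
p_reflectionless = load_power(Zs, Zs)          # equal ('zero reflection') load
```

The conjugate match recovers the textbook maximum P = V²/(8·Re Zs); the two loads coincide only when the source impedance is purely real, mirroring the paper's observation that the two criteria separate as losses grow.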

  7. Negative inductance circuits for metamaterial bandwidth enhancement

    NASA Astrophysics Data System (ADS)

    Avignon-Meseldzija, Emilie; Lepetit, Thomas; Ferreira, Pietro Maris; Boust, Fabrice

    2017-12-01

    Passive metamaterials have yet to be translated into applications on a large scale due in large part to their limited bandwidth. To overcome this limitation many authors have suggested coupling metamaterials to non-Foster circuits. However, up to now, the number of convincing demonstrations based on non-Foster metamaterials has been very limited. This paper intends to clarify why progress has been so slow, i.e., the fundamental difficulty in making a truly broadband and efficient non-Foster metamaterial. To this end, we consider two families of metamaterials, namely Artificial Magnetic Media and Artificial Magnetic Conductors. In both cases, it turns out that bandwidth enhancement requires negative inductance with almost zero resistance. To estimate bandwidth enhancement with actual non-Foster circuits, we consider two classes of such circuits, namely Linvill and gyrator. The issue of stability being critical, both metamaterial families are studied with equivalent circuits that include advanced models of these non-Foster circuits. Conclusions are different for Artificial Magnetic Media coupled to Linvill circuits and Artificial Magnetic Conductors coupled to gyrator circuits. In the first case, requirements for bandwidth enhancement and stability are very hard to meet simultaneously whereas, in the second case, an adjustment of the transistor gain does significantly increase bandwidth.

  8. Yarkovsky-O'Keefe-Radzievskii-Paddack effect on tumbling objects

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Rożek, A.; Vokrouhlický, D.

    2011-11-01

    A semi-analytical model of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect on an asteroid spin in a non-principal axis rotation state is developed. The model describes the spin-state evolution in Deprit-Elipe variables, first-order averaged with respect to rotation and Keplerian orbital motion. Assuming zero conductivity, the YORP torque is represented by spherical harmonic series with vectorial coefficients, allowing us to use any degree and order of approximation. Within the quadrupole approximation of the illumination function we find the same first integrals involving rotational momentum, obliquity and dynamical inertia that were obtained by Cicaló & Scheeres. The integrals do not exist when higher degree terms of the illumination function are included, and then the asymptotic states known from Vokrouhlický et al. appear. This resolves an apparent contradiction between earlier results. Averaged equations of motion admit stable and unstable limit cycle solutions that were not previously detected. Non-averaged numerical integration by the Taylor series method for an exemplary shape of 3103 Eger is in good agreement with the semi-analytical theory.

  9. Comparing different Ultraviolet Imaging Spectrograph (UVIS) occultation observations using modeling of water vapor jets

    NASA Astrophysics Data System (ADS)

    Portyankina, Ganna; Esposito, Larry W.; Hansen, Candice; Aye, Klaus-Michael

    2016-10-01

    Motivation: On March 11, 2016, the Cassini UVIS observed its 6th stellar occultation by Enceladus' plume. This observation was aimed at determining the variability in the total gas flux from Enceladus' southern polar region. Analysis of the received data suggests that the total gas flux is moderately increased compared to the average gas flux observed by UVIS from 2005 to 2011 [1]. However, UVIS detected variability in individual jets. In particular, Baghdad 1 is more collimated in 2016 than in 2005, meaning its gas escapes at higher velocity. Model and fits: We use a 3D DSMC model for water vapor jets to compare different UVIS occultation observations from 2005 to 2016. The model traces test particles from the jets' sources [2] into space and yields coordinates and velocities for a set of test particles. We convert particle positions into particle number density and integrate along the UVIS line of sight (LoS) for each time step of the UVIS observation, using precise observational geometry derived from SPICE [3]. We integrate over all jets that are crossed by the LoS and perform a constrained least-squares fit of the resulting modeled opacities to the observed data to solve for the relative strengths of the jets. The geometry of each occultation is unique: for example, during the 2010 solar occultation the UVIS LoS was almost parallel to the tiger stripes, which made it possible to distinguish jets venting from different tiger stripes. In the 2011 Eps Orionis occultation the LoS was perpendicular to the tiger stripes, and thus many of the jets overlapped geometrically. The solar occultation provided us with the largest inventory of active jets - our model fit detects at least 43 non-zero jet contributions. Stellar occultations generally have lower temporal resolution and observe only a subset of these jets: the 2011 Eps Orionis occultation requires a minimum of 25 non-zero jets to fit the UVIS data.
We will discuss the different occultations and model fits, including the most recent Epsilon Orionis occultation of 2016. [1] Hansen et al., DPS 48, 2016; [2] Porco et al., 2014, The Astronomical Journal, 148, 4; [3] Acton, C.H., 1996, PSS, 44, 65-70

  10. FAST TRACK COMMUNICATION Single-charge rotating black holes in four-dimensional gauged supergravity

    NASA Astrophysics Data System (ADS)

    Chow, David D. K.

    2011-02-01

    We consider four-dimensional U(1)4 gauged supergravity, and obtain asymptotically AdS4, non-extremal, charged, rotating black holes with one non-zero U(1) charge. The thermodynamic quantities are computed. We obtain a generalization that includes a NUT parameter. The general solution has a discrete symmetry involving inversion of the rotation parameter, and has a string frame metric that admits a rank-2 Killing-Stäckel tensor.

  11. Broken Ergodicity in Ideal, Homogeneous, Incompressible Turbulence

    NASA Technical Reports Server (NTRS)

    Morin, Lee; Shebalin, John; Fu, Terry; Nguyen, Phu; Shum, Victor

    2010-01-01

    We discuss the statistical mechanics of numerical models of ideal homogeneous, incompressible turbulence and their relevance for dissipative fluids and magnetofluids. These numerical models are based on Fourier series, and the relevant statistical theory predicts that the Fourier coefficients of fluid velocity and magnetic fields (if present) are zero-mean random variables. However, numerical simulations clearly show that certain coefficients have a non-zero mean value that can be very large compared to the associated standard deviation. We explain this phenomenon in terms of 'broken ergodicity', which is defined to occur when dynamical behavior does not match ensemble predictions on very long time-scales. We review the theoretical basis of broken ergodicity, apply it to 2-D and 3-D fluid and magnetohydrodynamic simulations of homogeneous turbulence, and show new results from simulations using GPU (graphical processing unit) computers.
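
    The signature described here (a coefficient whose time average locks onto a non-zero value even though its ensemble mean is zero) can be mimicked by a toy process in which each realization freezes a random mean and then fluctuates around it. This is only an illustrative caricature of broken ergodicity, not the MHD statistics themselves:

```python
import random

random.seed(1)

def realization(n_steps):
    """One realization: a 'frozen' random coefficient m plus
    small fluctuations around it."""
    m = random.gauss(0.0, 1.0)
    return m, [m + random.gauss(0.0, 0.1) for _ in range(n_steps)]

# The ensemble average of the coefficient over many realizations is ~0 ...
ens_mean = sum(realization(1)[0] for _ in range(4000)) / 4000
# ... but within one long realization the time average locks onto m != 0,
# so dynamical behavior fails to match the ensemble prediction.
m, series = realization(10000)
time_mean = sum(series) / len(series)
```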

  12. Application of Emulsified Zero-Valent Iron to Marine Environments

    NASA Technical Reports Server (NTRS)

    Quinn, Jacqueline W.; Brooks, Kathleen B.; Geiger, Cherie L.; Clausen, Christian A.; Milum, Kristen M.

    2006-01-01

    Contamination of marine waters and sediments with heavy metals and dense non-aqueous phase liquids (DNAPLs), including chlorinated solvents, pesticides, and PCBs, poses ecological and human health risks through the potential of the contaminants to bioaccumulate in fish, shellfish, and avian populations. The contaminants enter marine environments through improper disposal techniques and storm water runoff. Current remediation technologies for marine environments include costly dredging and off-site treatment of the contaminated media. Emulsified zero-valent iron (EZVI) has been proven to effectively degrade dissolved-phase and DNAPL-phase contaminants in freshwater environments at both the laboratory and field scale. Emulsified zero-valent metal (EZVM) using metals such as iron and/or magnesium has been shown in the laboratory and on the bench scale to be effective at removing metals contamination in freshwater environments. The application to marine environments, however, is only just being explored. This paper discusses the potential use of EZVI or EZVM in brackish and saltwater environments, with supporting laboratory data detailing its effectiveness on trichloroethylene, lead, copper, nickel and cadmium.

  13. Comparison of zero-profile anchored spacer versus plate-cage construct in treatment of cervical spondylosis with regard to clinical outcomes and incidence of major complications: a meta-analysis

    PubMed Central

    Liu, Weijun; Hu, Ling; Wang, Junwen; Liu, Ming; Wang, Xiaomei

    2015-01-01

    Purpose: Meta-analysis was conducted to evaluate whether zero-profile anchored spacer (Zero-P) could reduce complication rates, while maintaining similar clinical outcomes compared to plate-cage construct (PCC) in the treatment of cervical spondylosis. Methods: All prospective and retrospective comparative studies published up to May 2015 that compared the clinical outcomes of Zero-P versus PCC in the treatment of cervical spondylosis were acquired by a comprehensive search in PubMed and EMBASE. Exclusion criteria were non-English studies, noncomparative studies, hybrid surgeries, revision surgeries, and surgeries with less than a 12-month follow-up period. The main end points including Japanese Orthopedic Association (JOA) and Neck Disability Index (NDI) scores, cervical lordosis, fusion rate, subsidence, and dysphagia were analyzed. All studies were analyzed with the RevMan 5.2.0 software. Publication biases of main results were examined using Stata 12.0. Results: A total of 12 studies were included in the meta-analysis. No statistical difference was observed with regard to preoperative or postoperative JOA and NDI scores, cervical lordosis, and fusion rate. The Zero-P group had a higher subsidence rate than the PCC group (P<0.05, risk difference =0.13, 95% confidence interval [CI] 0.00–0.26). However, the Zero-P group had a significantly lower postoperative dysphagia rate than the PCC group within the first 2 weeks (P<0.05, odds ratio [OR] =0.64, 95% CI 0.45–0.91), at the 6th month (P<0.05, OR =0.20, 95% CI 0.04–0.90), and at the final follow-up time (P<0.05, OR =0.13, 95% CI 0.04–0.45). Conclusion: Our meta-analysis suggested that surgical treatments of single or multiple levels of cervical spondylosis using Zero-P and PCC were similar in terms of JOA score, NDI score, cervical lordosis, and fusion rate.
Although the Zero-P group had a higher subsidence rate than the PCC group, Zero-P had a lower postoperative dysphagia rate and might have a lower adjacent-level ossification rate. PMID:26445543

  14. Comparison of zero-profile anchored spacer versus plate-cage construct in treatment of cervical spondylosis with regard to clinical outcomes and incidence of major complications: a meta-analysis.

    PubMed

    Liu, Weijun; Hu, Ling; Wang, Junwen; Liu, Ming; Wang, Xiaomei

    2015-01-01

    Meta-analysis was conducted to evaluate whether zero-profile anchored spacer (Zero-P) could reduce complication rates, while maintaining similar clinical outcomes compared to plate-cage construct (PCC) in the treatment of cervical spondylosis. All prospective and retrospective comparative studies published up to May 2015 that compared the clinical outcomes of Zero-P versus PCC in the treatment of cervical spondylosis were acquired by a comprehensive search in PubMed and EMBASE. Exclusion criteria were non-English studies, noncomparative studies, hybrid surgeries, revision surgeries, and surgeries with less than a 12-month follow-up period. The main end points including Japanese Orthopedic Association (JOA) and Neck Disability Index (NDI) scores, cervical lordosis, fusion rate, subsidence, and dysphagia were analyzed. All studies were analyzed with the RevMan 5.2.0 software. Publication biases of main results were examined using Stata 12.0. A total of 12 studies were included in the meta-analysis. No statistical difference was observed with regard to preoperative or postoperative JOA and NDI scores, cervical lordosis, and fusion rate. The Zero-P group had a higher subsidence rate than the PCC group (P<0.05, risk difference =0.13, 95% confidence interval [CI] 0.00-0.26). However, the Zero-P group had a significantly lower postoperative dysphagia rate than the PCC group within the first 2 weeks (P<0.05, odds ratio [OR] =0.64, 95% CI 0.45-0.91), at the 6th month (P<0.05, OR =0.20, 95% CI 0.04-0.90), and at the final follow-up time (P<0.05, OR =0.13, 95% CI 0.04-0.45). Our meta-analysis suggested that surgical treatments of single or multiple levels of cervical spondylosis using Zero-P and PCC were similar in terms of JOA score, NDI score, cervical lordosis, and fusion rate. Although the Zero-P group had a higher subsidence rate than the PCC group, Zero-P had a lower postoperative dysphagia rate and might have a lower adjacent-level ossification rate.
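
    The odds ratios and confidence intervals quoted above are computed on the log scale; for a single 2x2 table the Woolf method gives OR = ad/bc with SE(ln OR) = sqrt(1/a + 1/b + 1/c + 1/d). A sketch with made-up counts (RevMan's pooled estimates additionally weight the individual studies, e.g. by inverse variance):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf confidence interval from a 2x2 table:
    a, b = events / non-events in group 1; c, d = in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/90 with dysphagia in Zero-P, 20/80 in PCC.
estimate, lower, upper = odds_ratio_ci(10, 90, 20, 80)
```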

  15. Joint Test Project Report of Combat Air Support Target Acquisition Program. SEEKVAL. Project IA2. Direct Visual Imagery Experiments.

    DTIC Science & Technology

    1975-01-01

    Mission Zero Briefing Information (1-A-8); Mission Zero Preflight Taped Comments (1-A-10); Mission Zero Inflight Events and Commentary (1-A-...). ... acquisitions between MAR and the target, and zero range for non-acquisitions. ... target from 35,000 feet to zero feet at nadir. If the inter-target interval was less than 35,000 feet, the device started counting on the new target

  16. Installation noise measurements of model SR and CR propellers

    NASA Technical Reports Server (NTRS)

    Block, P. J. W.

    1984-01-01

    Noise measurements on a 0.1 scale SR-2 propeller in a single and counter rotation mode, in a pusher and tractor configuration, and operating at non-zero angles of attack are summarized. A measurement scheme which permitted 143 measurements of each of these configurations in the Langley 4- by 7-meter low speed tunnel is also described.

  17. Duels with Continuous Firing,

    DTIC Science & Technology

    A game-theoretic model is proposed for the generalization of a discrete-fire silent duel to a silent duel with continuous firing. This zero-sum two-person game is solved in the symmetric case. It is shown that pure optimal strategies exist and hence also solve a noisy duel with continuous firing. A solution for the general non-symmetric duel is conjectured. (Author)

  18. Quasi-exact solvability and entropies of the one-dimensional regularised Calogero model

    NASA Astrophysics Data System (ADS)

    Pont, Federico M.; Osenda, Omar; Serra, Pablo

    2018-05-01

    The Calogero model can be regularised through the introduction of a cutoff parameter which removes the divergence in the interaction term. In this work we show that the one-dimensional two-particle regularised Calogero model is quasi-exactly solvable and that for certain values of the Hamiltonian parameters the eigenfunctions can be written in terms of Heun’s confluent polynomials. These eigenfunctions are such that the reduced density matrix of the two-particle density operator can be obtained exactly as well as its entanglement spectrum. We found that the number of non-zero eigenvalues of the reduced density matrix is finite in these cases. The limits for the cutoff distance going to zero (Calogero) and infinity are analysed and all the previously obtained results for the Calogero model are reproduced. Once the exact eigenfunctions are obtained, the exact von Neumann and Rényi entanglement entropies are studied to characterise the physical traits of the model. The quasi-exactly solvable character of the model is assessed studying the numerically calculated Rényi entropy and entanglement spectrum for the whole parameter space.
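
    The entanglement measures used here follow directly from the finite non-zero spectrum {p_i} of the reduced density matrix: S_vN = -Σ p_i ln p_i and the Rényi entropy S_α = (1-α)⁻¹ ln Σ p_i^α. A sketch evaluated on a hypothetical spectrum (the actual Calogero spectra come from the Heun-polynomial eigenfunctions in the paper):

```python
import math

def von_neumann_entropy(p):
    """S = -sum_i p_i ln p_i over the non-zero entanglement spectrum."""
    return -sum(x * math.log(x) for x in p if x > 0)

def renyi_entropy(p, alpha):
    """S_alpha = ln(sum_i p_i**alpha) / (1 - alpha), for alpha != 1."""
    return math.log(sum(x ** alpha for x in p)) / (1 - alpha)
```

For a maximally mixed spectrum of d equal eigenvalues both entropies reduce to ln d, while a spectrum dominated by one eigenvalue drives both toward zero.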

  19. Non-perturbative reheating and Nnaturalness

    NASA Astrophysics Data System (ADS)

    Hardy, Edward

    2017-11-01

    We study models in which reheating happens only through non-perturbative processes. The energy transferred can be exponentially suppressed unless the inflaton is coupled to a particle with a parametrically small mass. Additionally, in some models a light scalar with a negative mass squared parameter leads to much more efficient reheating than one with a positive mass squared of the same magnitude. If a theory contains many sectors similar to the Standard Model coupled to the inflaton via their Higgses, such dynamics can realise the Nnaturalness solution to the hierarchy problem. A sector containing a light Higgs with a non-zero vacuum expectation value is dominantly reheated and there is little energy transferred to the other sectors, consistent with cosmological constraints. The inflaton must decouple from other particles and have a flat potential at large field values, in which case the visible sector UV cutoff can be raised to 10 TeV in a simple model.

  20. Spatiotemporal hurdle models for zero-inflated count data: Exploring trends in emergency department visits.

    PubMed

    Neelon, Brian; Chang, Howard H; Ling, Qiang; Hastings, Nicole S

    2016-12-01

    Motivated by a study exploring spatiotemporal trends in emergency department use, we develop a class of two-part hurdle models for the analysis of zero-inflated areal count data. The models consist of two components-one for the probability of any emergency department use and one for the number of emergency department visits given use. Through a hierarchical structure, the models incorporate both patient- and region-level predictors, as well as spatially and temporally correlated random effects for each model component. The random effects are assigned multivariate conditionally autoregressive priors, which induce dependence between the components and provide spatial and temporal smoothing across adjacent spatial units and time periods, resulting in improved inferences. To accommodate potential overdispersion, we consider a range of parametric specifications for the positive counts, including truncated negative binomial and generalized Poisson distributions. We adopt a Bayesian inferential approach, and posterior computation is handled conveniently within standard Bayesian software. Our results indicate that the negative binomial and generalized Poisson hurdle models vastly outperform the Poisson hurdle model, demonstrating that overdispersed hurdle models provide a useful approach to analyzing zero-inflated spatiotemporal data. © The Author(s) 2014.
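
    A property used implicitly above is that the two parts of a hurdle model separate cleanly: the zero probability is estimated from the zero fraction alone, and the positive counts follow a zero-truncated distribution whose mean determines the count parameter. A small simulation sketch with a truncated Poisson count part (the paper's models use truncated negative binomial and generalized Poisson; plain Poisson keeps the illustration short):

```python
import math, random

random.seed(42)

def sample_poisson(lam):
    """Knuth's Poisson sampler (the stdlib has none)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_hurdle(pi_zero, lam):
    """Zero with probability pi_zero, else zero-truncated Poisson."""
    if random.random() < pi_zero:
        return 0
    while True:
        y = sample_poisson(lam)
        if y > 0:
            return y

data = [sample_hurdle(0.4, 3.0) for _ in range(20000)]

# The two hurdle components are estimated independently:
pi_hat = sum(y == 0 for y in data) / len(data)
pos = [y for y in data if y > 0]
m = sum(pos) / len(pos)
# The truncated-Poisson mean is lam/(1 - exp(-lam)); invert by bisection.
lo, hi = 1e-6, 50.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mid / (1 - math.exp(-mid)) < m:
        lo = mid
    else:
        hi = mid
lam_hat = (lo + hi) / 2
```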

  1. A simplified model to evaluate the effect of fluid rheology on non-Newtonian flow in variable aperture fractures

    NASA Astrophysics Data System (ADS)

    Felisa, Giada; Ciriello, Valentina; Longo, Sandro; Di Federico, Vittorio

    2017-04-01

    Modeling of non-Newtonian flow in fractured media is essential in hydraulic fracturing operations, widely used for optimal exploitation of oil, gas and thermal reservoirs. Complex fluids also interact with pre-existing rock fractures during drilling operations, enhanced oil recovery, and environmental remediation, as well as in natural phenomena such as magma and sand intrusions and mud volcanoes. A first step in the modeling effort is a detailed understanding of flow in a single fracture, as the fracture aperture is typically spatially variable. A large bibliography exists on Newtonian flow in single, variable aperture fractures. Ultimately, stochastic modeling of aperture variability at the single fracture scale leads to determination of the flowrate under a given pressure gradient as a function of the parameters describing the variability of the aperture field and the fluid rheological behaviour. From the flowrate, a flow, or 'hydraulic', aperture can then be derived. The equivalent flow aperture for non-Newtonian fluids of power-law nature in single, variable aperture fractures has been obtained in the past both for deterministic and stochastic variations. Detailed numerical modeling of power-law fluid flow in a variable aperture fracture demonstrated that pronounced channelization effects are associated with a nonlinear fluid rheology. The availability of an equivalent flow aperture as a function of the parameters describing the fluid rheology and the aperture variability is enticing, as it allows taking their interaction into account when modeling flow in fracture networks at a larger scale. A relevant issue in non-Newtonian fracture flow is the rheological nature of the fluid. The constitutive model routinely used for hydro-fracturing modeling is the simple, two-parameter power-law.
Yet this model does not characterize real fluids at low and high shear rates, as it implies, for shear-thinning fluids, an apparent viscosity which becomes unbounded for zero shear rate and tends to zero for infinite shear rate. On the contrary, the four-parameter Carreau constitutive equation includes asymptotic values of the apparent viscosity at those limits; in turn, the Carreau rheological equation is well approximated by the more tractable truncated power-law model. Results for flow of such fluids between parallel walls are already available. This study extends the adoption of the truncated power-law model to variable aperture fractures, with the aim of understanding the joint influence of rheology and aperture spatial variability. The aperture variation, modeled within a stochastic or deterministic framework, is taken to be one-dimensional and perpendicular to the flow direction; for stochastic modeling, the influence of different distribution functions is examined. Results are then compared with those obtained for pure power-law fluids for different combinations of model parameters. It is seen that the adoption of the pure power law model leads to significant overestimation of the flowrate with respect to the truncated model, more so for large external pressure gradient and/or aperture variability.
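
    The low- and high-shear pathology of the pure power law, and the truncated model's remedy, can be stated in two lines: for a shear-thinning fluid (n < 1) the apparent viscosity m·γ̇^(n-1) blows up as γ̇ → 0 and vanishes as γ̇ → ∞, whereas the truncated model clamps it between plateau values. A sketch with hypothetical parameter values:

```python
def powerlaw_viscosity(rate, m=1.0, n=0.5):
    """Pure power-law apparent viscosity mu = m * rate**(n-1).
    For shear-thinning fluids (n < 1) this diverges as rate -> 0."""
    return m * rate ** (n - 1)

def truncated_viscosity(rate, m=1.0, n=0.5, mu0=100.0, mu_inf=1e-3):
    """Truncated power law: clamp between the low- and high-shear
    plateaus mu0 and mu_inf (Carreau-like asymptotic behaviour)."""
    return min(mu0, max(mu_inf, powerlaw_viscosity(rate, m, n)))
```

At very low shear rates the pure power law vastly overestimates the resistance to flow, which is consistent with the overestimation of flowrate differences reported above when the pure model replaces the truncated one.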

  2. IRT-ZIP Modeling for Multivariate Zero-Inflated Count Data

    ERIC Educational Resources Information Center

    Wang, Lijuan

    2010-01-01

    This study introduces an item response theory-zero-inflated Poisson (IRT-ZIP) model to investigate psychometric properties of multiple items and predict individuals' latent trait scores for multivariate zero-inflated count data. In the model, two link functions are used to capture two processes of the zero-inflated count data. Item parameters are…

  3. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts.

    PubMed

    Preisser, John S; Long, D Leann; Stamm, John W

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.
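
    The two kinds of incidence rate ratio discussed here differ whenever the zero-inflation probability differs between groups: a ZIP model's count part gives the latent-mean IRR λ1/λ0, while a marginalized model targets the ratio of marginal means (1-π1)λ1 / ((1-π0)λ0). A numeric sketch with hypothetical group parameters:

```python
# Hypothetical group-level ZIP parameters:
pi0, lam0 = 0.50, 2.0   # group 0: P(structural zero), latent Poisson mean
pi1, lam1 = 0.30, 2.0   # group 1

latent_irr = lam1 / lam0                  # count-part IRR of a ZIP model
mu0, mu1 = (1 - pi0) * lam0, (1 - pi1) * lam1
marginal_irr = mu1 / mu0                  # IRR targeted by marginalized models
```

Here the latent IRR is exactly 1 (no effect among the "susceptible"), yet the marginal IRR is 1.4 because the treatment shrinks the structural-zero probability, illustrating why the two model classes answer different research questions.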

  4. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts

    PubMed Central

    Preisser, John S.; Long, D. Leann; Stamm, John W.

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two datasets, one consisting of fictional dmft counts in two groups and the other on DMFS among schoolchildren from a randomized clinical trial (RCT) comparing three toothpaste formulations to prevent incident dental caries, are analysed with negative binomial hurdle (NBH), zero-inflated negative binomial (ZINB), and marginalized zero-inflated negative binomial (MZINB) models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the RCT were similar despite their distinctive interpretations. Choice of statistical model class should match the study’s purpose, while accounting for the broad decline in children’s caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. PMID:28291962

  5. Two-Point Resistance of a Non-Regular Cylindrical Network with a Zero Resistor Axis and Two Arbitrary Boundaries

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-Zhong

    2017-03-01

    We study the two-point resistance of a non-regular m × n cylindrical network with a zero-resistor axis and two arbitrary boundaries by means of the Recursion-Transform method. This is a new problem, never solved before: the Green's function technique and the Laplacian matrix approach are invalid in this case. A disordered network with arbitrary boundaries is a basic model in many physical and real-world systems; however, the exact calculation of the resistance of a binary resistor network is important but difficult in the case of arbitrary boundaries, since a boundary acts like a wall or trap that affects the behavior of the finite network. In this paper we obtain a general resistance formula for a non-regular m × n cylindrical network, which is composed of a single summation. Further, the current distribution is given explicitly as a byproduct of the method. As applications, several interesting results are derived by making special cases of the general formula. Supported by the Natural Science Foundation of Jiangsu Province under Grant No. BK20161278
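
    For regular networks (where, unlike the boundary cases treated here, the Laplacian matrix approach does apply), two-point resistance can be computed by injecting a unit current and solving the Kirchhoff system with one node grounded. A minimal exact-arithmetic sketch, verified on a 4-node cycle of unit resistors, where the resistance between adjacent nodes is 1 ∥ 3 = 3/4:

```python
from fractions import Fraction

def two_point_resistance(n_nodes, edges, s, t):
    """R_st of a resistor network: inject unit current at s, extract
    at t, ground node t, and solve the Kirchhoff/Laplacian system
    exactly with fractions. edges = [(i, j, resistance), ...]."""
    L = [[Fraction(0)] * n_nodes for _ in range(n_nodes)]
    for i, j, r in edges:
        g = Fraction(1, r)                       # conductance
        L[i][i] += g; L[j][j] += g
        L[i][j] -= g; L[j][i] -= g
    idx = [k for k in range(n_nodes) if k != t]  # drop the grounded node
    A = [[L[i][j] for j in idx] for i in idx]
    b = [Fraction(1) if i == s else Fraction(0) for i in idx]
    m = len(idx)
    for col in range(m):                         # Gauss-Jordan elimination
        piv = next(r for r in range(col, m) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for row in range(m):
            if row != col and A[row][col] != 0:
                f = A[row][col] / A[col][col]
                b[row] -= f * b[col]
                for c2 in range(col, m):
                    A[row][c2] -= f * A[col][c2]
    v = [b[r] / A[r][r] for r in range(m)]
    return v[idx.index(s)]                       # node-s potential = R_st
```

This direct solve scales poorly and says nothing about the arbitrary-boundary cylindrical case above; the Recursion-Transform method's single-summation formula is precisely what replaces it there.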

  6. Some findings on zero-inflated and hurdle poisson models for disease mapping.

    PubMed

    Corpas-Burgos, Francisca; García-Donato, Gonzalo; Martinez-Beneito, Miguel A

    2018-05-27

    Zero excess in the study of geographically referenced mortality data sets has been the focus of considerable attention in the literature, with zero-inflation being the most common procedure to handle this lack of fit. Although hurdle models have also been used in disease mapping studies, their use is rarer. We show in this paper that models using particular treatments of zero excess are often required for achieving appropriate fits in regular mortality studies since, otherwise, geographical units with low expected counts are oversmoothed. However, as also shown, an indiscriminate treatment of zero excess may be unnecessary and has a problematic implementation. In this regard, we find that naive zero-inflated and hurdle models, without an explicit modeling of the probabilities of zeroes, do not fix zero-excess problems well enough and are clearly unsatisfactory. Results sharply suggest the need for an explicit modeling of probabilities that should vary across areal units. Unfortunately, these more flexible modeling strategies can easily lead to improper posterior distributions, as we prove in several theoretical results. Those procedures have been repeatedly used in the disease mapping literature, and one should bear these issues in mind in order to propose valid models. We finally propose several valid modeling alternatives, in line with the results mentioned, that are suitable for fitting zero excesses. We show that those proposals fix zero-excess problems and correct the mentioned oversmoothing of risks in sparsely populated units, depicting geographic patterns better suited to the data. Copyright © 2018 John Wiley & Sons, Ltd.

  7. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  8. Marginal regression models for clustered count data based on zero-inflated Conway-Maxwell-Poisson distribution with applications.

    PubMed

    Choo-Wosoba, Hyoyoung; Levy, Steven M; Datta, Somnath

    2016-06-01

    Community water fluoridation is an important public health measure to prevent dental caries, but it continues to be somewhat controversial. The Iowa Fluoride Study (IFS) is a longitudinal study on a cohort of Iowa children that began in 1991. The main purposes of this study (http://www.dentistry.uiowa.edu/preventive-fluoride-study) were to quantify fluoride exposures from both dietary and nondietary sources and to associate longitudinal fluoride exposures with dental fluorosis (spots on teeth) and dental caries (cavities). We analyze a subset of the IFS data by a marginal regression model with a zero-inflated version of the Conway-Maxwell-Poisson (ZICMP) distribution for count data exhibiting excessive zeros and a wide range of dispersion patterns. In general, we introduce two estimation methods for fitting a ZICMP marginal regression model. Finite sample behaviors of the estimators and the resulting confidence intervals are studied using extensive simulation studies. We apply our methodologies to the dental caries data. Our novel modeling incorporating zero inflation, clustering, and overdispersion sheds some new light on the effect of community water fluoridation and other factors. We also include a second application of our methodology to a genomic (next-generation sequencing) dataset that exhibits underdispersion. © 2015, The International Biometric Society.
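
    A minimal sketch of the zero-inflated Conway-Maxwell-Poisson (ZICMP) mass function the abstract builds on (illustrative code, not the authors' implementation; the log-space evaluation and series truncation are our choices):

```python
import math

def _log_w(j, lam, nu):
    # log of the j-th CMP series term: j*log(lam) - nu*log(j!)
    return j * math.log(lam) - nu * math.lgamma(j + 1)

def cmp_z(lam, nu, terms=200):
    """Truncated CMP normalizing constant Z(lam, nu) = sum_j lam^j / (j!)^nu."""
    return sum(math.exp(_log_w(j, lam, nu)) for j in range(terms))

def zicmp_pmf(y, lam, nu, pi, terms=200):
    """Zero-inflated CMP: point mass pi at zero mixed with CMP(lam, nu).
    nu = 1 recovers the Poisson; nu < 1 gives overdispersion and
    nu > 1 underdispersion, the flexibility the paper exploits."""
    p = math.exp(_log_w(y, lam, nu)) / cmp_z(lam, nu, terms)
    return pi + (1.0 - pi) * p if y == 0 else (1.0 - pi) * p
```

    Working with log-weights avoids overflow from the factorial powers in the denominator.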

  9. 75 FR 70814 - Securities Held in Treasury Direct

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-19

    ... indebtedness.) * * * * * Zero-Percent Certificate of Indebtedness is a one-day, non- interest-bearing security... purchase a zero-percent certificate of indebtedness through one or more of the following four methods: (1... employer or a financial institution, to credit funds on a recurring basis to purchase a payroll zero...

  10. 39 CFR 121.4 - Package Services.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Center Facility (SCF) turnaround Package Services mail accepted at the origin SCF before the day zero... origin before the day-zero Critical Entry Time is 3 days, for each remaining (non-intra-SCF) 3-digit ZIP... intra-Network Distribution Center (NDC) Package Services mail accepted at origin before the day-zero...

  11. 39 CFR 121.4 - Package Services.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Center Facility (SCF) turnaround Package Services mail accepted at the origin SCF before the day zero... origin before the day-zero Critical Entry Time is 3 days, for each remaining (non-intra-SCF) 3-digit ZIP... intra-Network Distribution Center (NDC) Package Services mail accepted at origin before the day-zero...

  12. The Sivers effect and the Single Spin Asymmetry A_N in p(transv. pol.) p --> h X processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anselmino, Mauro; Boglione, Mariaelena; D'Alesio, Umberto

    2013-09-01

    The single spin asymmetry A_N, for large P_T single inclusive particle production in p(transv. pol.) p collisions, is considered within a generalised parton model and a transverse momentum dependent factorisation scheme. The focus is on the Sivers effect and the study of its potential contribution to A_N, based on a careful analysis of the Sivers functions extracted from azimuthal asymmetries in semi-inclusive deep inelastic scattering processes. It is found that such Sivers functions could explain most features of the A_N data, including some recent STAR results which show the persistence of a non-zero A_N up to surprisingly large P_T values.

  13. YORP on tumbling asteroids

    NASA Astrophysics Data System (ADS)

    Rozek, A.; Breiter, S.; Vokrouhlicky, D.

    2011-10-01

    A semi-analytical model of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect on the spin of an asteroid in a non-principal-axis rotation state is presented. Assuming zero conductivity, the YORP torque is represented by a spherical harmonics series with vector coefficients, allowing the use of any degree and order of approximation. Within the quadrupole approximation of the illumination function we find the same first integrals involving rotational momentum, obliquity and dynamical inertia that were obtained by Cicaló and Scheeres [1]. The integrals do not exist when higher degree terms of the illumination function are included, and then the asymptotic states known from Vokrouhlický et al. [2] appear. This resolves an apparent contradiction between earlier results. Averaged equations of motion admit stable and unstable limit cycle solutions that were not detected previously.

  14. Fate and Transport of Nanoparticles in Porous Media: A Numerical Study

    NASA Astrophysics Data System (ADS)

    Taghavy, Amir

    Understanding the transport characteristics of NPs in natural soil systems is essential to revealing their potential impact on the food chain and groundwater. In addition, many nanotechnology-based remedial measures require effective transport of NPs through soil, which necessitates accurate understanding of their transport and retention behavior. Based upon the conceptual knowledge of environmental behavior of NPs, mathematical models can be developed to represent the coupling of processes that govern the fate of NPs in the subsurface, serving as effective tools for risk assessment and/or design of remedial strategies. This work presents an innovative hybrid Eulerian-Lagrangian modeling technique for simulating the simultaneous reactive transport of nanoparticles (NPs) and dissolved constituents in porous media. Governing mechanisms considered in the conceptual model include particle-soil grain, particle-particle, particle-dissolved constituents, and particle-oil/water interface interactions. The main advantage of this technique, compared to conventional Eulerian models, lies in its ability to address non-uniformity in physicochemical particle characteristics. The developed numerical simulator was applied to investigate the fate and transport of NPs in a number of practical problems relevant to the subsurface environment. These problems included: (1) reductive dechlorination of chlorinated solvents by zero-valent iron nanoparticles (nZVI) in dense non-aqueous phase liquid (DNAPL) source zones; (2) reactive transport of dissolving silver nanoparticles (nAg) and the dissolved silver ions; (3) particle-particle interactions and their effects on the particle-soil grain interactions; and (4) influence of particle-oil/water interface interactions on NP transport in porous media.

  15. Simulation and analysis of OOK-to-BPSK format conversion based on gain-transparent SOA used as optical phase-modulator.

    PubMed

    Hong, Wei; Huang, Dexiu; Zhang, Xinliang; Zhu, Guangxi

    2007-12-24

    All-optical on-off keying (OOK) to binary phase-shift keying (BPSK) modulation format conversion based on gain-transparent semiconductor optical amplifier (GT-SOA) is simulated and analyzed, where GT-SOA is used as an all-optical phase-modulator (PM). Numerical simulation of the phase modulation effect of GT-SOA is performed using a wideband dynamic model of GT-SOA and the quality of the BPSK signal is evaluated using the differential-phase-Q factor. Performance improvement by holding light injection is analyzed and non-return-to-zero (NRZ) and return-to-zero (RZ) modulation formats of the OOK signal are considered.

  16. A self-learning camera for the validation of highly variable and pseudorandom patterns

    NASA Astrophysics Data System (ADS)

    Kelley, Michael

    2004-05-01

    Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.

  17. Modeling of a Turbofan Engine with Ice Crystal Ingestion in the NASA Propulsion System Laboratory

    NASA Technical Reports Server (NTRS)

    Veres, Joseph P.; Jorgenson, Philip C. E.; Jones, Scott M.; Nili, Samaun

    2017-01-01

    The main focus of this study is to apply a computational tool for the flow analysis of the turbine engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) at NASA Glenn Research Center. The PSL has been used to test a highly instrumented Honeywell ALF502R-5A (LF11) turbofan engine at simulated altitude operating conditions. Test data analysis with an engine cycle code and a compressor flow code was conducted to determine the values of key icing parameters that can indicate the risk of ice accretion, which can lead to engine rollback (un-commanded loss of engine thrust). The full engine aerothermodynamic performance was modeled with the Honeywell Customer Deck specifically created for the ALF502R-5A engine. The mean-line compressor flow analysis code, which includes a code that models the state of the ice crystal, was used to model the air flow through the fan-core and low pressure compressor. The results of the compressor flow analyses included calculations of the ice-water flow rate to air flow rate ratio (IWAR), the local static wet bulb temperature, and the particle melt ratio throughout the flow field. It was found that the assumed particle size had a large effect on the particle melt ratio and on the local wet bulb temperature. In this study, the particle size was varied parametrically to produce a non-zero calculated melt ratio in the exit guide vane (EGV) region of the low pressure compressor (LPC) for the data points that experienced a growth of blockage there, and a subsequent engine called rollback (CRB). At data points where the engine experienced a CRB having the lowest wet bulb temperature of 492 degrees Rankine at the EGV trailing edge, the smallest particle size that produced a non-zero melt ratio (between 3 percent - 4 percent) was on the order of 1 micron.
This value of melt ratio was utilized as the target for all other subsequent data points analyzed, while the particle size was varied from 1 micron - 9.5 microns to achieve the target melt ratio. For data points that did not experience a CRB, which had static wet bulb temperatures in the EGV region below 492 degrees Rankine, a non-zero melt ratio could not be achieved even with a 1 micron ice particle size. The highest value of static wet bulb temperature for data points that experienced engine CRB was 498 degrees Rankine with a particle size of 9.5 microns. Based on this study of the LF11 engine test data, the range of static wet bulb temperature at the EGV exit for engine CRB was in the narrow range of 492 to 498 degrees Rankine, while the minimum value of IWAR was 0.002. The rate of blockage growth due to ice accretion and boundary layer growth was estimated by scaling from a known blockage growth rate that was determined in a previous study. These results obtained from the LF11 engine analysis formed the basis of a unique “icing wedge.”
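
    The quoted thresholds can be restated as a toy screening rule (purely illustrative; this is not the study's model, and the function names are ours):

```python
def iwar(ice_water_flow, air_flow):
    """Ice-water flow rate to air flow rate ratio (IWAR), as defined above."""
    return ice_water_flow / air_flow

def rollback_screen(t_wet_bulb_rankine, iwar_value):
    """Toy restatement of the LF11 findings: called-rollback (CRB) points
    fell in the narrow 492-498 R band of EGV static wet bulb temperature
    with IWAR of at least 0.002."""
    return 492.0 <= t_wet_bulb_rankine <= 498.0 and iwar_value >= 0.002
```

    The actual "icing wedge" was derived from the full engine and compressor flow analyses, not from a threshold rule like this.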

  18. The Green's functions for peridynamic non-local diffusion.

    PubMed

    Wang, L J; Xu, J F; Wang, J X

    2016-09-01

    In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
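
    For reference, the classical local limit to which the peridynamic solutions are shown to converge is the standard free-space diffusion Green's function (a textbook result, stated here for context):

```latex
% Free-space Green's function of the classical diffusion equation
% u_t = D \nabla^2 u in d dimensions, recovered as the peridynamic
% horizon (non-local length) tends to zero:
G(\mathbf{x}, t) = \frac{1}{(4\pi D t)^{d/2}}
  \exp\!\left(-\frac{|\mathbf{x}|^2}{4 D t}\right), \qquad t > 0
```

    The peridynamic counterpart replaces the Laplacian with an integral over the horizon, which smooths the kernel and yields the slower variation of field quantities noted in the abstract.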

  19. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the measures that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, the standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of the two models, the estimators of the zero-inflated Poisson mixed model had the minimum standard errors, indicating that it provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in longitudinal count data.

  20. Improving xylem hydraulic conductivity measurements by correcting the error caused by passive water uptake.

    PubMed

    Torres-Ruiz, José M; Sperry, John S; Fernández, José E

    2012-10-01

    Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F(0)) of zero. While linearity is typically the case, there is often a non-zero F(0) that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F(0) for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K(ref) as the slope of a four-point F vs P/L relationship. The K(ref) was compared with three more rapid ways of estimating K. When F(0) was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K(ref) was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F(0) was taken into account, whether it was measured or estimated, more accurate K values (relative to K(ref)) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F(0) for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
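
    A small numerical sketch of the correction described above (hypothetical numbers; the sign of F0 here is chosen to match the direction of bias reported in the abstract):

```python
# Hypothetical four-point measurement: flows F at applied pressure
# gradients P/L. true_k and f0 are made-up illustrative values.
grads = [0.01, 0.02, 0.03, 0.04]          # P/L
true_k, f0 = 2.0, -0.005                  # slope K and flow intercept F0
flows = [true_k * g + f0 for g in grads]  # F = K*(P/L) + F0

n = len(grads)
g_bar = sum(grads) / n
f_bar = sum(flows) / n
# K as the least-squares slope of F vs P/L: insensitive to F0,
# analogous to the four-point K(ref) used in the study.
k_slope = sum((g - g_bar) * (f - f_bar) for g, f in zip(grads, flows)) \
          / sum((g - g_bar) ** 2 for g in grads)
# The naive single-point estimate K = F/(P/L) silently assumes F0 = 0
# and, with a negative F0, is biased low, most severely when K is small.
k_naive = flows[0] / grads[0]
```

    Measuring (or estimating) F0 and subtracting it from F before dividing by P/L gives the same correction as the slope fit.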

  1. Adaptive Sniping for Volatile and Stable Continuous Double Auction Markets

    NASA Astrophysics Data System (ADS)

    Toft, I. E.; Bagnall, A. J.

    This paper introduces a new adaptive sniping agent for the Continuous Double Auction (CDA). We begin by analysing the performance of the well known Kaplan sniper in two extremes of market conditions. We generate volatile and stable market conditions using the well known Zero Intelligence-Constrained (ZI-C) agent and a new zero-intelligence agent, Small Increment (SI). ZI-C agents submit random but profitable bids/offers and cause high volatility in prices and individual trader performance. Our new zero-intelligence agent, SI, makes small random adjustments to the outstanding bid/offer and hence is more cautious than ZI-C. We present results for SI in self-play and then analyse Kaplan in volatile and stable markets. We demonstrate that the non-adaptive Kaplan sniper can be configured to suit either market condition, but no single configuration performs well across both market types. We believe that in a dynamic auction environment, where current or future market conditions cannot be predicted, a viable sniping strategy should adapt its behaviour to suit prevailing market conditions. To this end, we propose the Adaptive Sniper (AS) agent for the CDA. AS traders classify sniping opportunities using a statistical model of market activity and adjust their classification thresholds using a Widrow-Hoff adapted search. Our AS agent requires little configuration and outperforms the original Kaplan sniper in volatile and stable markets, and in a mixed trader type scenario that includes adaptive strategies from the literature.
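
    The two zero-intelligence quoting rules described above can be sketched as follows (our interpretation with illustrative parameters, not the authors' code; the integer price grid and SI step size are assumptions):

```python
import random

def zic_bid(limit_price, min_price=1):
    """Zero-Intelligence-Constrained (ZI-C) buyer: a uniformly random
    bid, constrained only to be profitable (at or below the trader's
    private limit price). This randomness drives the volatile markets."""
    return random.randint(min_price, limit_price)

def si_bid(outstanding_bid, limit_price, max_step=3):
    """Small Increment (SI) buyer: a small random improvement on the
    outstanding bid, capped at the limit price, hence far more cautious
    than ZI-C and a source of stable markets."""
    return min(outstanding_bid + random.randint(1, max_step), limit_price)
```

    Seller versions mirror these rules on the ask side, decrementing toward a cost-based limit instead.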

  2. The structure and properties of a simple model mixture of amphiphilic molecules and ions at a solid surface

    NASA Astrophysics Data System (ADS)

    Pizio, O.; Sokołowski, S.; Sokołowska, Z.

    2014-05-01

    We investigate microscopic structure, adsorption, and electric properties of a mixture that consists of amphiphilic molecules and charged hard spheres in contact with uncharged or charged solid surfaces. The amphiphilic molecules are modeled as spheres composed of attractive and repulsive parts. The electrolyte component of the mixture is considered in the framework of the restricted primitive model (RPM). The system is studied using a density functional theory that combines fundamental measure theory for hard sphere mixtures, weighted density approach for inhomogeneous charged hard spheres, and a mean-field approximation to describe anisotropic interactions. Our principal focus is in exploring the effects brought by the presence of ions on the distribution of amphiphilic particles at the wall, as well as the effects of amphiphilic molecules on the electric double layer formed at solid surface. In particular, we have found that under certain thermodynamic conditions a long-range translational and orientational order can develop. The presence of amphiphiles produces changes of the shape of the differential capacitance from symmetric or non-symmetric bell-like to camel-like. Moreover, for some systems the value of the potential of the zero charge is non-zero, in contrast to the RPM at a charged surface.

  3. Determinants of The Grade A Embryos in Infertile Women; Zero-Inflated Regression Model.

    PubMed

    Almasi-Hashiani, Amir; Ghaheri, Azadeh; Omani Samani, Reza

    2017-10-01

    In assisted reproductive technology, it is important to choose high quality embryos for embryo transfer. The aim of the present study was to determine the grade A embryo count and factors related to it in infertile women. This historical cohort study included 996 infertile women. The main outcome was the number of grade A embryos. Zero-Inflated Poisson (ZIP) regression and Zero-Inflated Negative Binomial (ZINB) regression were used to model the count data, as it contained excessive zeros. Stata software, version 13 (Stata Corp, College Station, TX, USA) was used for all statistical analyses. After adjusting for potential confounders, results from the ZINB model show that for each unit increase in the number of 2 pronuclear (2PN) zygotes, the expected grade A embryo count increases by a factor of 1.45 (incidence rate ratio; 95% confidence interval (CI): 1.23-1.69, P=0.001), and for each one-day increase in cleavage day it decreases by a factor of 0.35 (95% CI: 0.20-0.61, P=0.001). There is a significant association between both the number of 2PN zygotes and cleavage day and the number of grade A embryos in both the ZINB and ZIP regression models. The estimated coefficients are more plausible than values found in earlier studies using less relevant models. Copyright© by Royan Institute. All rights reserved.
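
    Since the reported effects are incidence rate ratios (the exponential of the count-model coefficients), they combine multiplicatively on the expected-count scale; a small worked example using the figures quoted above:

```python
import math

# Incidence rate ratios (IRR = exp(beta)) quoted in the abstract.
irr_2pn = 1.45   # per additional 2PN zygote
irr_day = 0.35   # per one-day increase in cleavage day

# Recover the underlying count-model coefficient for 2PN zygotes.
beta_2pn = math.log(irr_2pn)

# Effects multiply: two extra 2PN zygotes scale the expected
# grade A embryo count by 1.45 squared.
two_more = math.exp(2 * beta_2pn)
```

    The same multiplicative reading applies to the cleavage-day effect, with factors below one shrinking the expected count.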

  4. Zero Thermal Noise in Resistors at Zero Temperature

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
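
    The two noise formulas being contrasted can be written down directly (standard expressions for the Callen-Welton and Nyquist one-sided voltage-noise power spectral densities; the function names are ours):

```python
import math

H = 6.62607015e-34    # Planck constant, J s
KB = 1.380649e-23     # Boltzmann constant, J/K

def callen_welton_psd(f, r, t):
    """One-sided voltage-noise PSD 4R[hf/2 + hf/(exp(hf/kT) - 1)] in
    V^2/Hz; the hf/2 term is the disputed zero-point contribution."""
    zero_point = H * f / 2.0
    planck = H * f / math.expm1(H * f / (KB * t)) if t > 0 else 0.0
    return 4.0 * r * (zero_point + planck)

def nyquist_psd(r, t):
    """Classical Johnson-Nyquist PSD 4kTR, which vanishes as T -> 0."""
    return 4.0 * KB * t * r
```

    In the classical regime hf << kT the two expressions agree, while at T = 0 only the zero-point term survives in the Callen-Welton formula; it is exactly this surviving term that the paper argues is inconsistent with Fermi-Dirac statistics.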

  5. Physical Modeling Techniques for Missile and Other Protective Structures

    DTIC Science & Technology

    1983-06-29

    uniaxial load only. In general, axial thrust was applied with an initial eccentricity of zero on the specimen end. Sixteen different combinations of Pa...conditioning electronics and cabling schemes is included. The techniques described generally represent current approaches at the Civil Engineering Research...at T-zero and stopping when a pulse is generated by the piezoelectric disc on arrival of the detonation wave front. All elapsed time data is stored

  6. Effects of coccidiosis vaccination administered by in ovo injection on Ross 708 broiler performance through 14 days of post-hatch age.

    PubMed

    Sokale, A O; Zhai, W; Pote, L M; Williams, C J; Peebles, E D

    2017-08-01

    Effects of the in ovo injection of a commercial coccidiosis vaccine on various hatching chick quality variables and 14 d post-hatch (dph) oocyst shedding have been previously examined. The current study was designed to examine the performance of Ross 708 broilers during the 14 dph period of oocyst shedding following the application of the coccidiosis vaccine. On each of 7 replicate tray levels of a single-stage incubator, a total of 4 treatment groups was randomly represented, with each treatment group containing 63 eggs. Treatments were administered using a commercial multi-egg injector on d 18.5 of incubation. The treatments included 3 control groups (non-injected, dry-punch, and diluent-injected) and one treatment group (injected with diluent containing Inovocox EM1 vaccine). On d 21 of incubation, 20 chicks from each of the 28 treatment-replicate groups were placed in corresponding wire-floored battery cages. Mortality, feed intake (FI), BW gain (BWG), and feed conversion ratio (FCR) were determined for the zero to 7, 7 to 14, and cumulative zero to 14 dph intervals. There were no significant treatment effects on mortality in any interval or on BW at zero dph. There were significant treatment effects on BW at 7 and 14 dph, on BWG and FI in the zero to 7, 7 to 14, and zero to 14 dph intervals, and on FCR in the 7 to 14 and zero to 14 dph intervals. Although the performance variables of birds belonging to the diluent-injected and vaccine-injected groups were not significantly different, the 14 dph BW, 7 to 14 dph FI, and zero to 14 dph BWG and FI of birds belonging to the vaccine treatment group were significantly higher than those in birds belonging to the non-injected control group. It was concluded that use of the Inovocox EM1 vaccine in commercial diluent has no detrimental effect on the overall post-hatch performance of broilers through 14 dph. © 2017 Poultry Science Association Inc.

  7. Stress studies in EFG

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Experimental work in support of stress studies in high speed silicon sheet growth has been emphasized in this quarter. Creep experiments utilizing four-point bending have been made in the temperature range from 1000 C to 1360 C in CZ silicon as well as on EFG ribbon. A method to measure residual stress over large areas using laser interferometry to map strain distributions under load is under development. A fiber optics sensor to measure ribbon temperature profiles has been constructed and is being tested in a ribbon growth furnace environment. Stress and temperature field modeling work has been directed toward improving various aspects of the finite element computing schemes. Difficulties in computing stress distributions with a very high creep intensity and with non-zero interface stress have been encountered and additional development of the numerical schemes to cope with these problems is required. Temperature field modeling has been extended to include the study of heat transfer effects in the die and meniscus regions.

  8. Three axis electronic flight motion simulator real time control system design and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Zhiyuan; Miao, Zhonghua, E-mail: zhonghua-miao@163.com; Wang, Xiaohua

    2014-12-15

    A three axis electronic flight motion simulator is reported in this paper, including the modelling, the controller design, and the hardware implementation. This flight motion simulator could be used for inertial navigation tests and for high precision inertial navigation systems with good dynamic and static performance. A real time control system is designed, and several implementation problems are solved, including time unification with a parallel port interrupt, a high speed zero-finding method for the rotary inductosyn, zero-crossing management with continuous rotation, etc. Tests were carried out to show the effectiveness of the proposed real time control system.

  9. Three axis electronic flight motion simulator real time control system design and implementation.

    PubMed

    Gao, Zhiyuan; Miao, Zhonghua; Wang, Xuyong; Wang, Xiaohua

    2014-12-01

    A three axis electronic flight motion simulator is reported in this paper, including the modelling, the controller design, and the hardware implementation. This flight motion simulator could be used for inertial navigation tests and for high precision inertial navigation systems with good dynamic and static performance. A real time control system is designed, and several implementation problems are solved, including time unification with a parallel port interrupt, a high speed zero-finding method for the rotary inductosyn, zero-crossing management with continuous rotation, etc. Tests were carried out to show the effectiveness of the proposed real time control system.

  10. The place-value of a digit in multi-digit numbers is processed automatically.

    PubMed

    Kallai, Arava Y; Tzelgov, Joseph

    2012-09-01

    The automatic processing of the place-value of digits in a multi-digit number was investigated in 4 experiments. Experiment 1 and two control experiments employed a numerical comparison task in which the place-value of a non-zero digit was varied in a string composed of zeros. Experiment 2 employed a physical comparison task in which strings of digits varied in their physical sizes. In both types of tasks, the place-value of the non-zero digit in the string was irrelevant to the task performed. Interference of the place-value information was found in both tasks. When the non-zero digit occupied a lower place-value, it was recognized more slowly as the larger digit or as the one written in a larger font size. We concluded that place-value in a multi-digit number is processed automatically. These results support the notion of a decomposed representation of multi-digit numbers in memory. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  11. On the Uniqueness and Consistency of Scattering Amplitudes

    NASA Astrophysics Data System (ADS)

    Rodina, Laurentiu

    In this dissertation, we study constraints imposed by locality, unitarity, gauge invariance, the Adler zero, and constructability (scaling under BCFW shifts). In the first part we study scattering amplitudes as the unique mathematical objects which can satisfy various combinations of such principles. In all cases we find that locality and unitarity may be derived from gauge invariance (for Yang-Mills and General Relativity) or from the Adler zero (for the non-linear sigma model and the Dirac-Born-Infeld model), together with mild assumptions on the singularity structure and mass dimension. We also conjecture that constructability and locality together imply gauge invariance, hence also unitarity. All claims are proved through a soft expansion, and in the process we end up re-deriving the well-known leading soft theorems for all four theories. Unlike other proofs of these theorems, we do not assume any form of factorization (unitarity). In the second part we show how tensions arising between gauge invariance (as encoded by spinor helicity variables in four dimensions), locality, unitarity and constructability give rise to various physical properties. These include high-spin no-go theorems, the equivalence principle, and the emergence of supersymmetry from spin 3/2 particles. We also complete the fully on-shell constructability proof of gravity amplitudes, by showing that the improved "bonus" behavior of gravity under BCFW shifts is a simple consequence of Bose symmetry.

  12. Is place-value processing in four-digit numbers fully automatic? Yes, but not always.

    PubMed

    García-Orza, Javier; Estudillo, Alejandro J; Calleja, Marina; Rodríguez, José Miguel

    2017-12-01

    Knowing the place-value of digits in multi-digit numbers allows us to identify, understand and distinguish between numbers with the same digits (e.g., 1492 vs. 1942). Research using the size congruency task has shown that the place-value in a string of three zeros and a non-zero digit (e.g., 0090) is processed automatically. In the present study, we explored whether place-value is also automatically activated when more complex numbers (e.g., 2795) are presented. Twenty-five participants were exposed to pairs of four-digit numbers that differed regarding the position of some digits and their physical size. Participants had to decide which of the two numbers was presented in a larger font size. In the congruent condition, the number shown in a bigger font size was numerically larger. In the incongruent condition, the number shown in a smaller font size was numerically larger. Two types of numbers were employed: numbers composed of three zeros and one non-zero digit (e.g., 0040-0400) and numbers composed of four non-zero digits (e.g., 2795-2759). Results showed larger congruency effects in more distant pairs in both types of numbers. Interestingly, this effect was considerably stronger in the strings composed of zeros. These results indicate that place-value coding is partially automatic, as it depends on the perceptual and numerical properties of the numbers to be processed.

  13. An Improved Zero Potential Circuit for Readout of a Two-Dimensional Resistive Sensor Array

    PubMed Central

    Wu, Jian-Feng; Wang, Feng; Wang, Qi; Li, Jian-Qing; Song, Ai-Guo

    2016-01-01

    With one operational amplifier (op-amp) in negative feedback, the traditional zero potential circuit could access one element in the two-dimensional (2-D) resistive sensor array in the shared row-column fashion, but it suffered from crosstalk caused by the non-scanned elements’ bypass currents, which were injected into the array’s non-scanned electrodes from zero potential. Firstly, to suppress the crosstalk problem, we designed a novel improved zero potential circuit with one more op-amp in negative feedback to sample the total bypass current and use it to calculate the precise resistance of the element being tested (EBT). The improved setting non-scanned-electrode zero potential circuit (S-NSE-ZPC) was given as an example for analyzing and verifying the performance of the improved zero potential circuit. Secondly, in the S-NSE-ZPC and the improved S-NSE-ZPC, the effects of different parameters of the resistive sensor arrays and their readout circuits on the EBT’s measurement accuracy were simulated with NI Multisim 12. Thirdly, some features of the improved circuit were verified in experiments with a prototype circuit. Finally, the results were discussed and conclusions drawn. The experiment results show that the improved circuit, though it requires one more op-amp, one more resistor and one more sampling channel, can access the EBT in the 2-D resistive sensor array more accurately. PMID:27929410
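The correction idea can be sketched numerically: subtract the sampled total bypass current from the measured current before applying Ohm's law to the element being tested. The function name and all numeric values below are hypothetical illustrations, not the paper's circuit equations or data:

```python
# Hypothetical sketch of the bypass-current correction: recover the EBT
# resistance from the driving voltage, the total measured current, and
# the sampled total bypass current. Values are invented for illustration.

def ebt_resistance(v_drive, i_total, i_bypass):
    """Ohm's law for the element being tested (EBT), with the
    non-scanned elements' bypass current subtracted first."""
    i_ebt = i_total - i_bypass   # current actually flowing through the EBT
    if i_ebt <= 0:
        raise ValueError("non-physical current split")
    return v_drive / i_ebt

# Example: 1 V drive, 1.5 mA measured in total, 0.5 mA bypass
r = ebt_resistance(1.0, 1.5e-3, 0.5e-3)
print(r)  # -> 1000.0 (ohms)
```

Without the subtraction, the same measurement would report 1 V / 1.5 mA ≈ 667 Ω, illustrating how the bypass currents bias the naive estimate.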

  15. Application of the θ-method to a telegraphic model of fluid flow in a dual-porosity medium

    NASA Astrophysics Data System (ADS)

    González-Calderón, Alfredo; Vivas-Cruz, Luis X.; Herrera-Hernández, Erik César

    2018-01-01

    This work focuses mainly on the study of numerical solutions, which are obtained using the θ-method, of a generalized Warren and Root model that includes a second-order wave-like equation in its formulation. The solutions approximately describe the single-phase hydraulic head in fractures by considering the finite velocity of propagation by means of a Cattaneo-like equation. The corresponding discretized model is obtained by utilizing a non-uniform grid and a non-uniform time step. A simple relationship is proposed to give the time-step distribution. Convergence is analyzed by comparing results from explicit, fully implicit, and Crank-Nicolson schemes with exact solutions: a telegraphic model of fluid flow in a single-porosity reservoir with relaxation dynamics, the Warren and Root model, and our studied model, which is solved with the inverse Laplace transform. We find that the flux and the hydraulic head have spurious oscillations that most often appear in small-time solutions but are attenuated as the solution time progresses. Furthermore, we show that the finite difference method is unable to reproduce the exact flux at time zero. Obtaining results for oilfield production times, which are on the order of months in real units, is only feasible using parallel implicit schemes. In addition, we propose simple parallel algorithms for the memory flux and for the explicit scheme.
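The scheme family named above, the θ-method (explicit for θ = 0, Crank-Nicolson for θ = 1/2, fully implicit for θ = 1), can be sketched on the scalar test equation u′ = λu rather than the paper's dual-porosity system; all parameter values are illustrative:

```python
# Minimal sketch of the theta-method on u'(t) = lam*u(t), u(0) = 1,
# whose exact solution is exp(lam*t). theta = 0 gives explicit Euler,
# theta = 1 fully implicit Euler, theta = 0.5 Crank-Nicolson.
import math

def theta_step(u, lam, dt, theta):
    # (1 - theta*lam*dt) * u_next = (1 + (1 - theta)*lam*dt) * u
    return u * (1.0 + (1.0 - theta) * lam * dt) / (1.0 - theta * lam * dt)

def integrate(lam=-2.0, t_end=1.0, n=1000, theta=0.5):
    dt = t_end / n
    u = 1.0
    for _ in range(n):
        u = theta_step(u, lam, dt, theta)
    return u

exact = math.exp(-2.0)
print(abs(integrate(theta=0.5) - exact))  # Crank-Nicolson: O(dt^2) error
print(abs(integrate(theta=1.0) - exact))  # implicit Euler: O(dt) error
```

The second-order accuracy of θ = 1/2 versus the first-order accuracy of θ = 1 shows up directly in the two printed errors.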

  16. Symbolic, Nonsymbolic and Conceptual: An Across-Notation Study on the Space Mapping of Numerals.

    PubMed

    Zhang, Yu; You, Xuqun; Zhu, Rongjuan

    2016-07-01

    Previous studies suggested that there are interconnections between the two numeral modalities of symbolic notation and nonsymbolic notation (arrays of dots); both differences and similarities in the processing and representation of the two modalities have been reported. However, whether the spatial representation and numeral-space mapping differ between the two modalities had not been investigated. The present study examines whether such differences exist; in particular, how zero, as both a symbolic magnitude numeral and a nonsymbolic conceptual numeral, maps onto space, and whether the mapping occurs automatically at an early stage of numeral information processing. Results of the two experiments demonstrate that the low-level processing of symbolic numerals, including zero, and of nonsymbolic numerals other than zero can map onto space, whereas the low-level processing of nonsymbolic zero as a semantic conceptual numeral cannot, indicating the special status of zero in the numeral domain. The study thus suggests that the processing of non-semantic numerals can map onto space, whereas that of semantic conceptual numerals cannot. © The Author(s) 2016.

  17. On splice site prediction using weight array models: a comparison of smoothing techniques

    NASA Astrophysics Data System (ADS)

    Taher, Leila; Meinicke, Peter; Morgenstern, Burkhard

    2007-11-01

    In most eukaryotic genes, protein-coding exons are separated by non-coding introns which are removed from the primary transcript by a process called "splicing". The positions where introns are cut and exons are spliced together are called "splice sites". Thus, computational prediction of splice sites is crucial for gene finding in eukaryotes. Weight array models are a powerful probabilistic approach to splice site detection. Parameters for these models are usually derived from m-tuple frequencies in trusted training data and subsequently smoothed to avoid zero probabilities. In this study we compare three different ways of parameter estimation for m-tuple frequencies, namely (a) non-smoothed probability estimation, (b) standard pseudo counts and (c) a Gaussian smoothing procedure that we recently developed.
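The pseudo-count idea in (b) can be sketched for m-tuple frequencies; the toy sequences, DNA alphabet and pseudo-count of 1 are assumptions for illustration, not the study's training data or exact estimator:

```python
# Sketch of m-tuple probability estimation with pseudo-count (additive)
# smoothing, so that tuples unseen in the training data still get a
# non-zero probability. Toy inputs only.
from collections import Counter
from itertools import product

def tuple_probs(sequences, m, alphabet="ACGT", pseudo=1.0):
    """Estimate m-tuple probabilities with additive smoothing."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - m + 1):
            counts[seq[i:i + m]] += 1
    n_tuples = len(alphabet) ** m
    total = sum(counts.values()) + pseudo * n_tuples
    # every possible tuple receives at least pseudo/total probability mass
    return {"".join(t): (counts["".join(t)] + pseudo) / total
            for t in product(alphabet, repeat=m)}

probs = tuple_probs(["ACGTAC", "GTACGG"], m=2)
print(sum(probs.values()))   # sums to 1 (up to rounding)
print(probs["TT"] > 0)       # unseen tuple, still non-zero
```

Setting pseudo=0 recovers the non-smoothed estimator (a), in which "TT" would get probability zero and break a log-likelihood scoring of splice-site candidates.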

  18. Investigation of Propellant Sloshing and Zero Gravity Equilibrium for the Orion Service Module Propellant Tanks

    NASA Astrophysics Data System (ADS)

    Kreppel, Samantha

    A scaled model of the downstream Orion service module propellant tank was constructed to assess the propellant dynamics under reduced- and zero-gravity conditions. Flight and ground data from the experiment are currently being used to validate computational models of propellant dynamics in Orion-class propellant tanks. The high fidelity model includes the internal structures of the propellant management device (PMD) and the mass-gauging probe. Qualitative differences between experimental and CFD data are understood in terms of fluid dynamical scaling of inertial effects in the scaled system. Propellant configurations in zero gravity were studied at a range of fill-fractions, and the settling time for various docking maneuvers was determined. A clear understanding of the fluid dynamics within the tank is necessary to ensure proper control of the spacecraft's flight and to maintain safe operation of this and future service modules. Understanding slosh dynamics in partially-filled propellant tanks is essential to assessing spacecraft stability.

  19. Proceedings of the Annual Precise Time and Time Interval (PTTI) applications and Planning Meeting (20th) Held in Vienna, Virginia on 29 November-1 December 1988

    DTIC Science & Technology

    1988-12-01

    PERFORMANCE IN REAL TIME* Dr. James A. Barnes Austron Boulder, Co. Abstract Kalman filters and ARIMA models provide optimum control and evaluation tech...estimates of the model parameters (e.g., the phi's and theta's for an ARIMA model). These model parameters are often evaluated in a batch mode on a...random walk FM, and linear frequency drift. In ARIMA models, this is equivalent to an ARIMA (0,2,2) with a non-zero average second difference. Using
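The snippet's claim that linear frequency drift corresponds to an ARIMA(0,2,2) process with a non-zero average second difference can be checked on noiseless toy data: under a pure linear drift, the second difference of the accumulated phase is exactly the constant drift rate. The drift value below is illustrative only:

```python
# Second differencing of a quadratic phase trend (linear frequency
# drift d): x(t) = 0.5*d*t**2, so the second difference is constant and
# equal to d, i.e. a non-zero mean after twice differencing. Toy values.

def second_difference(x):
    return [x[i] - 2 * x[i - 1] + x[i - 2] for i in range(2, len(x))]

d = 1e-3                                   # illustrative drift rate
phase = [0.5 * d * t * t for t in range(10)]
dd = second_difference(phase)
print(dd)  # each entry equals d = 1e-3 (up to rounding)
```

With measurement noise added, the second-differenced series would keep this non-zero mean while the noise is what the two MA terms of the ARIMA(0,2,2) absorb.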

  20. Collins and Sivers asymmetries in muonproduction of pions and kaons off transversely polarised protons

    DOE PAGES

    Adolph, C.; Akhunzyanov, R.; Alexeev, M. G.; ...

    2015-05-01

    Measurements of the Collins and Sivers asymmetries for charged pions and charged and neutral kaons produced in semi-inclusive deep-inelastic scattering of high energy muons off transversely polarised protons are presented. The results were obtained using all the available COMPASS proton data, which were taken in the years 2007 and 2010. In the valence region, the Collins asymmetries exhibit a non-zero signal for pions, and there are hints of a non-zero signal for kaons as well. The Sivers asymmetries are found to be positive for positive pions and kaons and compatible with zero otherwise.

  1. An Overview of NASA Efforts on Zero Boiloff Storage of Cryogenic Propellants

    NASA Technical Reports Server (NTRS)

    Hastings, Leon J.; Plachta, D. W.; Salerno, L.; Kittel, P.; Haynes, Davy (Technical Monitor)

    2001-01-01

    Future mission planning within NASA has increasingly motivated consideration of cryogenic propellant storage durations on the order of years as opposed to a few weeks or months. Furthermore, the advancement of cryocooler and passive insulation technologies in recent years has substantially improved the prospects for zero boiloff storage of cryogenics. Accordingly, a cooperative effort by NASA's Ames Research Center (ARC), Glenn Research Center (GRC), and Marshall Space Flight Center (MSFC) has been implemented to develop and demonstrate "zero boiloff" concepts for in-space storage of cryogenic propellants, particularly liquid hydrogen and oxygen. ARC is leading the development of flight-type cryocoolers, GRC the subsystem development and small scale testing, and MSFC the large scale and integrated system level testing. Thermal and fluid modeling involves a combined effort by the three Centers. Recent accomplishments include: 1) development of "zero boiloff" analytical modeling techniques for sizing the storage tankage, passive insulation, cryocooler, power source mass, and radiators; 2) an early subscale demonstration with liquid hydrogen; 3) procurement of a flight-type 10 watt, 95 K pulse tube cryocooler for liquid oxygen storage; and 4) assembly of a large-scale test article for an early demonstration of the integrated operation of passive insulation, destratification/pressure control, and cryocooler (commercial unit) subsystems to achieve zero boiloff storage of liquid hydrogen. Near term plans include the large-scale integrated system demonstration testing this summer, subsystem testing of the flight-type pulse-tube cryocooler with liquid nitrogen (oxygen simulant), and continued development of a flight-type liquid hydrogen pulse tube cryocooler.

  2. Thermal Entanglement in XXZ Heisenberg Model for Coupled Spin-Half and Spin-One Triangular Cell

    NASA Astrophysics Data System (ADS)

    Najarbashi, Ghader; Balazadeh, Leila; Tavana, Ali

    2018-01-01

    In this paper, we investigate the thermal entanglement of two-spin subsystems in an ensemble of coupled spin-half and spin-one triangular cells, (1/2, 1/2, 1/2), (1/2, 1, 1/2), (1, 1/2, 1) and (1, 1, 1), with the XXZ anisotropic Heisenberg model subjected to an external homogeneous magnetic field. We adopt the generalized concurrence as the measure of entanglement, which is a good indicator of thermal entanglement and of the critical points in mixed higher-dimensional spin systems. We observe that in the near vicinity of absolute zero, the concurrence measure is symmetric with respect to zero magnetic field and changes abruptly from a non-null to a null value at a critical magnetic field, which can be a signature of a quantum phase transition at finite temperature. The analysis of concurrence versus temperature shows that there exists a critical temperature that depends on the type of the interaction, i.e. ferromagnetic or antiferromagnetic, the anisotropy parameter and the strength of the magnetic field. Results show that the pairwise thermal entanglement depends on the third spin, which affects the maximum value of the concurrence at absolute zero and at quantum critical points.

  3. Lattice vibrations in the Frenkel-Kontorova model. I. Phonon dispersion, number density, and energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Qingping; Wu, Lijun; Welch, David O.

    2015-06-17

    We studied the lattice vibrations of two inter-penetrating atomic sublattices via the Frenkel-Kontorova (FK) model of a linear chain of harmonically interacting atoms subjected to an on-site potential, using the technique of thermodynamic Green's functions based on quantum field-theoretical methods. General expressions were deduced for the phonon frequency-wave-vector dispersion relations, number density, and energy of the FK model system. In addition, as an application of the theory, we investigated in detail cases of linear chains with various periods of the on-site potential of the FK model. Some unusual but interesting features for different amplitudes of the on-site potential of the FK model are discussed. In the commensurate structure, the phonon spectrum always starts at a finite frequency, and the gaps of the spectrum are true ones with a zero density of modes. In the incommensurate structure, the phonon spectrum starts from zero frequency, but at a non-zero wave vector; there are some modes inside these gap regions, but their density is very low. In our approximation, the energy of a higher-order commensurate state of the one-dimensional system at a finite temperature may become indefinitely close to the energy of an incommensurate state. This finding implies that the higher-order incommensurate-commensurate transitions are continuous ones and that the phase transition may exhibit a “devil's staircase” behavior at a finite temperature.
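The commensurate-case behavior described here, a phonon spectrum that starts at a finite frequency, can be illustrated with the textbook dispersion relation for a harmonic chain in a harmonic on-site well; the formula and parameter values below are a standard sketch, not the paper's Green's-function results:

```python
# Dispersion sketch for a harmonic chain with on-site restoring
# frequency w0: w(k)^2 = w0^2 + (4K/m)*sin^2(k*a/2). A non-zero w0
# opens a gap, so the spectrum starts at a finite frequency at k = 0.
# All parameters (w0, K, m, a) are illustrative units.
import math

def omega(k, w0=1.0, K=1.0, m=1.0, a=1.0):
    return math.sqrt(w0**2 + (4.0 * K / m) * math.sin(k * a / 2.0) ** 2)

print(omega(0.0))      # -> 1.0 : finite frequency at k = 0 (the gap)
print(omega(math.pi))  # band edge: sqrt(w0^2 + 4K/m)
```

Setting w0 = 0 (no on-site potential) recovers the gapless acoustic branch ω(k) = 2·sqrt(K/m)·|sin(ka/2)|, which starts at zero frequency.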

  4. Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.

    PubMed

    He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L

    2015-10-01

    Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, as alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or zero-inflated binomial (ZIB) distribution, rather than a Poisson or ZIP distribution, in the presence of structural zeros. In this paper, we develop a new semiparametric approach for modeling ZIB-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
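As a sketch of the distributional point being made, a minimal zero-inflated binomial pmf (a structural-zero mass mixed with a bounded binomial count) can be written directly; the parameter values are invented, and the paper's actual approach is semiparametric rather than this fully parametric form:

```python
# Zero-inflated binomial (ZIB) pmf: with structural-zero probability pi,
# P(0) = pi + (1-pi)*(1-p)^n and P(k) = (1-pi)*C(n,k)*p^k*(1-p)^(n-k)
# for k >= 1. The count is bounded above by n (e.g. days in the period).
from math import comb

def zib_pmf(k, n, p, pi):
    binom = comb(n, k) * p**k * (1.0 - p) ** (n - k)
    if k == 0:
        return pi + (1.0 - pi) * binom
    return (1.0 - pi) * binom

# e.g. a 30-day period, drinking probability 0.2, 40% structural zeros
total = sum(zib_pmf(k, 30, 0.2, 0.4) for k in range(31))
print(total)  # the 31 probabilities sum to 1
```

The support stops at n = 30, which is exactly what a ZIP model would get wrong for "days of drinking in a month."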

  5. Evaluation of stochastic differential equation approximation of ion channel gating models.

    PubMed

    Bruce, Ian C

    2009-04-01

    Fox and Lu derived an algorithm based on stochastic differential equations for approximating the kinetics of ion channel gating that is simpler and faster than "exact" algorithms for simulating Markov process models of channel gating. However, the approximation may not be sufficiently accurate to predict statistics of action potential generation in some cases. The objective of this study was to develop a framework for analyzing the inaccuracies and determining their origin. Simulations of a patch of membrane with voltage-gated sodium and potassium channels were performed using an exact algorithm for the kinetics of channel gating and the approximate algorithm of Fox & Lu. The Fox & Lu algorithm assumes that channel gating particle dynamics have a stochastic term that is uncorrelated, zero-mean Gaussian noise, whereas the results of this study demonstrate that in many cases the stochastic term in the Fox & Lu algorithm should be correlated and non-Gaussian noise with a non-zero mean. The results indicate that: (i) the source of the inaccuracy is that the Fox & Lu algorithm does not adequately describe the combined behavior of the multiple activation particles in each sodium and potassium channel, and (ii) the accuracy does not improve with increasing numbers of channels.

  6. Modeling electron fractionalization with unconventional Fock spaces.

    PubMed

    Cobanera, Emilio

    2017-08-02

    It is shown that certain fractionally-charged quasiparticles can be modeled on D-dimensional lattices in terms of unconventional yet simple Fock algebras of creation and annihilation operators. These unconventional Fock algebras are derived from the usual fermionic algebra by taking roots (the square root, cubic root, etc.) of the usual fermionic creation and annihilation operators. If the fermions carry non-Abelian charges, then this approach fractionalizes the Abelian charges only. In particular, the mth root of a spinful fermion carries charge e/m and spin 1/2. Just like taking a root of a complex number, taking a root of a fermion yields a mildly non-unique result. As a consequence, there are several possible choices of quantum exchange statistics for fermion-root quasiparticles. These choices are tied to the dimensionality D of the lattice by basic physical considerations. One particular family of fermion-root quasiparticles is directly connected to the parafermion zero-energy modes expected to emerge in certain mesoscopic devices involving fractional quantum Hall states. Hence, as an application of potential mesoscopic interest, I investigate numerically the hybridization of Majorana and parafermion zero-energy edge modes caused by fractionalizing but charge-conserving tunneling.

  7. Theory of point contact spectroscopy in correlated materials

    DOE PAGES

    Lee, Wei-Cheng; Park, Wan Kyu; Arham, Hamood Z.; ...

    2015-01-05

    Here, we developed a microscopic theory for the point-contact conductance between a metallic electrode and a strongly correlated material using the nonequilibrium Schwinger-Kadanoff-Baym-Keldysh formalism. We explicitly show that, in the classical limit (contact size shorter than the scattering length of the system), the microscopic model can be reduced to an effective model with transfer matrix elements that conserve in-plane momentum. We found that the conductance dI/dV is proportional to the effective density of states, that is, the integrated single-particle spectral function A(ω = eV) over the whole Brillouin zone. From this conclusion, we are able to establish the conditions under which a non-Fermi liquid metal exhibits a zero-bias peak in the conductance. Lastly, this finding is discussed in the context of recent point-contact spectroscopy on the iron pnictides and chalcogenides, which has exhibited a zero-bias conductance peak.

  8. Frustration in Condensed Matter and Protein Folding

    NASA Astrophysics Data System (ADS)

    Lorelli, S.; Cabot, A.; Sundarprasad, N.; Boekema, C.

    Using computer modeling we study frustration in condensed matter and protein folding. Frustration is due to random and/or competing interactions. One definition of frustration is the sum of squares of the differences between actual and expected distances between characters. If this sum is non-zero, then the system is said to have frustration. A simulation tracks the movement of characters to lower their frustration. Our research is conducted on frustration as a function of temperature using a logarithmic scale. At absolute zero, the relaxation for frustration is a power function for randomly assigned patterns or an exponential function for regular patterns like Thomson figures. These findings have implications for protein folding; we attempt to apply our frustration modeling to protein folding and dynamics. We use coding in Python to simulate different ways a protein can fold. An algorithm is being developed to find the lowest frustration (and thus energy) states possible. Research supported by SJSU & AFC.
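The frustration definition quoted above can be computed directly; the 1D positions and expected distances below are toy values for illustration, not the study's simulation data:

```python
# Frustration as defined above: the sum of squared differences between
# actual and expected pairwise distances between characters. Characters
# are points on a line here purely for illustration.
from itertools import combinations

def frustration(actual_positions, expected_dist):
    """Sum over pairs (i, j) of (|x_i - x_j| - expected)^2."""
    total = 0.0
    for i, j in combinations(range(len(actual_positions)), 2):
        d = abs(actual_positions[i] - actual_positions[j])
        total += (d - expected_dist[(i, j)]) ** 2
    return total

positions = [0.0, 1.0, 2.5]
expected = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 1.0}
print(frustration(positions, expected))  # -> 0.5  (0.0 + 0.25 + 0.25)
```

A relaxation step would move characters to reduce this sum; a configuration matching all expected distances has frustration zero.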

  9. The digital step edge

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.

    1982-01-01

    The facet model was used to accomplish step edge detection. The essence of the facet model is that any analysis made on the basis of the pixel values in some neighborhood has its final authoritative interpretation relative to the underlying grey tone intensity surface of which the neighborhood pixel values are observed noisy samples. Pixels which are part of regions have simple grey tone intensity surfaces over their areas. Pixels which have an edge in them have complex grey tone intensity surfaces over their areas. Specifically, an edge moves through a pixel only if there is some point in the pixel's area having a zero crossing of the second directional derivative taken in the direction of a non-zero gradient at the pixel's center. To determine whether or not a pixel should be marked as a step edge pixel, its underlying grey tone intensity surface was estimated on the basis of the pixels in its neighborhood.
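A 1D sketch of the stated criterion (a zero crossing of the second derivative where the gradient is non-zero) can use plain finite differences in place of the paper's fitted facet surfaces; the toy signal is a smoothed step and everything here is illustrative:

```python
# 1D analogue of the facet-model edge criterion: flag sample i if the
# discrete second derivative changes sign across i while the gradient
# at i is non-zero. Toy noiseless signal; the paper works on 2D pixel
# neighborhoods with fitted intensity surfaces.

def is_step_edge(f, i, grad_eps=1e-6):
    g = (f[i + 1] - f[i - 1]) / 2.0            # central first derivative
    d2_left = f[i] - 2 * f[i - 1] + f[i - 2]   # second derivative, left side
    d2_right = f[i + 2] - 2 * f[i + 1] + f[i]  # second derivative, right side
    return abs(g) > grad_eps and d2_left * d2_right < 0

signal = [0, 0, 0, 1, 5, 9, 10, 10, 10]  # smoothed step centered near index 4
edges = [i for i in range(2, len(signal) - 2) if is_step_edge(signal, i)]
print(edges)  # -> [4]
```

The gradient test matters: flat regions can also show second-derivative sign flips from noise, but they are rejected because the gradient there is (near) zero.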

  10. Local tuning of the order parameter in superconducting weak links: A zero-inductance nanodevice

    NASA Astrophysics Data System (ADS)

    Winik, Roni; Holzman, Itamar; Dalla Torre, Emanuele G.; Buks, Eyal; Ivry, Yachin

    2018-03-01

    Controlling both the amplitude and the phase of the superconducting quantum order parameter (ψ) in nanostructures is important for next-generation information and communication technologies. The lack of electric resistance in superconductors, which may be advantageous for some technologies, hinders convenient voltage-bias tuning and hence limits the tunability of ψ at the microscopic scale. Here, we demonstrate the local tunability of the phase and amplitude of ψ, obtained by patterning with a single lithography step a Nb nano-superconducting quantum interference device (nano-SQUID) that is biased at its nanobridges. We accompany our experimental results by a semi-classical linearized model that is valid for generic nano-SQUIDs with multiple ports and helps simplify the modelling of non-linear couplings among the Josephson junctions. Our design helped us reveal unusual electric characteristics with effective zero inductance, which is promising for nanoscale magnetic sensing and quantum technologies.

  11. Control of a flexible bracing manipulator: Integration of current research work to realize the bracing manipulator

    NASA Technical Reports Server (NTRS)

    Kwon, Dong-Soo

    1991-01-01

    All research results about flexible manipulator control were integrated to show a control scenario of a bracing manipulator. First, dynamic analysis of a flexible manipulator was done for modeling. Second, from the dynamic model, the inverse dynamic equation was derived, and the time domain inverse dynamic method was proposed for the calculation of the feedforward torque and the desired flexible coordinate trajectories. Third, a tracking controller was designed by combining the inverse dynamic feedforward control with the joint feedback control. The control scheme was applied to the tip position control of a single link flexible manipulator for zero and non-zero initial condition cases. Finally, the contact control scheme was added to the position tracking control. A control scenario of a bracing manipulator is provided and evaluated through simulation and experiment on a single link flexible manipulator.

  12. Energetics of slope flows: linear and weakly nonlinear solutions of the extended Prandtl model

    NASA Astrophysics Data System (ADS)

    Güttler, Ivan; Marinović, Ivana; Večenaj, Željko; Grisogono, Branko

    2016-07-01

    The Prandtl model succinctly combines the 1D stationary boundary-layer dynamics and thermodynamics of simple anabatic and katabatic flows over uniformly inclined surfaces. It assumes a balance between the along-the-slope buoyancy component and adiabatic warming/cooling, and the turbulent mixing of momentum and heat. In this study, the energetics of the Prandtl model is addressed in terms of the total energy (TE) concept. Furthermore, since the authors recently developed a weakly nonlinear version of the Prandtl model, the TE approach is also exercised on this extended model version, which includes an additional nonlinear term in the thermodynamic equation. Hence, the interplay among diffusion, dissipation and temperature-wind interaction of the mean slope flow is further explored. The TE of the nonlinear Prandtl model is assessed in an ensemble of solutions where the Prandtl number, the slope angle and the nonlinearity parameter are perturbed. It is shown that nonlinear effects have the lowest impact on variability in the ensemble of solutions of the weakly nonlinear Prandtl model when compared to the other two governing parameters. The general behavior of the nonlinear solution is similar to the linear solution, except that the maximum of the along-the-slope wind speed in the nonlinear solution reduces for larger slopes. Also, the dominance of PE near the sloped surface and the elevated maximum of KE in the linear and nonlinear energetics of the extended Prandtl model are found in the PASTEX-94 measurements. The corresponding level where KE>PE most likely marks the bottom of the sublayer subject to shear-driven instabilities. Finally, possible limitations of the weakly nonlinear solutions of the extended Prandtl model are raised. In linear solutions, the local storage of TE is zero, reflecting the stationarity of solutions by definition. However, in nonlinear solutions, the diffusion, dissipation and interaction terms (where the height of the maximum interaction is proportional to the height of the low-level jet by the factor ≈4/9) do not balance, and the local storage of TE attains non-zero values. In order to examine the issue of non-stationarity, the inclusion of velocity-pressure covariance in the momentum equation is suggested for future development of the extended Prandtl model.

  13. Understanding the contribution of non-carbon dioxide gases in deep mitigation scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gernaat, David; Calvin, Katherine V.; Lucas, Paul

    2015-07-01

    The combined 2010 emissions of methane (CH4), nitrous oxide (N2O) and the fluorinated gases (F-gases) account for about 20-30% of total emissions and about 30% of radiative forcing. At the moment, most studies looking at reaching ambitious climate targets project the emission of carbon dioxide (CO2) to be reduced to zero (or less) by the end of the century. As the mitigation potential for non-CO2 gases seems to be more constrained, we find that in the current deep mitigation scenarios non-CO2 emissions could form the lion's share of the greenhouse gas emissions remaining at the end of the century. In order to support effective climate policy strategies, in this paper we provide a more in-depth look at the role of non-CO2 emission sources (CH4, N2O and F-gases) in achieving deep mitigation targets (a radiative forcing target of 2.8 W/m2 in 2100). Specifically, we look at the sectoral mitigation potential and the remaining non-CO2 emissions. By including a set of different models, we provide some insights into the associated uncertainty. Most of the remaining methane emissions in 2100 in the climate mitigation scenario come from the livestock sector. Strong reductions are seen in the energy supply sector across all models. For N2O, less reduction potential is seen compared to methane, and the sectoral differences between the models are larger. The paper shows that the assumptions on remaining non-CO2 emissions are critical for the feasibility of reaching ambitious climate targets and for the associated costs.

  14. Map scale effects on estimating the number of undiscovered mineral deposits

    USGS Publications Warehouse

    Singer, D.A.; Menzie, W.D.

    2008-01-01

    Estimates of numbers of undiscovered mineral deposits, fundamental to assessing mineral resources, are affected by map scale. Where consistently defined deposits of a particular type are estimated, spatial and frequency distributions of deposits are linked in that some frequency distributions can be generated by processes randomly in space whereas others are generated by processes suggesting clustering in space. Possible spatial distributions of mineral deposits and their related frequency distributions are affected by map scale and associated inclusions of non-permissive or covered geological settings. More generalized map scales are more likely to cause inclusion of geologic settings that are not really permissive for the deposit type, or that include unreported cover over permissive areas, resulting in the appearance of deposit clustering. Thus, overly generalized map scales can cause deposits to appear clustered. We propose a model that captures the effects of map scale and the related inclusion of non-permissive geologic settings on numbers of deposits estimates, the zero-inflated Poisson distribution. Effects of map scale as represented by the zero-inflated Poisson distribution suggest that the appearance of deposit clustering should diminish as mapping becomes more detailed because the number of inflated zeros would decrease with more detailed maps. Based on observed worldwide relationships between map scale and areas permissive for deposit types, mapping at a scale with twice the detail should cut permissive area size of a porphyry copper tract to 29% and a volcanic-hosted massive sulfide tract to 50% of their original sizes. Thus some direct benefits of mapping an area at a more detailed scale are indicated by significant reductions in areas permissive for deposit types, increased deposit density and, as a consequence, reduced uncertainty in the estimate of number of undiscovered deposits. 
Exploration enterprises benefit from reduced areas requiring detailed and expensive exploration, and land-use planners benefit from reduced areas of concern. © 2008 International Association for Mathematical Geology.
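
    The zero-inflated Poisson distribution proposed above can be sketched in a few lines: the mixing weight pi plays the role of the probability that a tract is an unrecognized non-permissive (structural-zero) setting, while lam is the Poisson rate of deposits in truly permissive ground. Both values below are illustrative, not estimates from the paper:

    ```python
    import math

    def zip_pmf(k, lam, pi):
        """Zero-inflated Poisson pmf: with probability pi the count is a
        structural zero; otherwise it is Poisson(lam)."""
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

    # A coarser map adds non-permissive ground, raising the structural-zero
    # share pi and inflating the probability of observing zero deposits:
    print(zip_pmf(0, lam=2.0, pi=0.4))
    ```

    With pi = 0 the pmf reduces to the ordinary Poisson, matching the claim that more detailed mapping should shrink the inflated-zero component.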

  15. Drug Delivery and Transport into the Central Circulation: An Example of Zero-Order In vivo Absorption of Rotigotine from a Transdermal Patch Formulation.

    PubMed

    Cawello, Willi; Braun, Marina; Andreas, Jens-Otto

    2018-01-13

    Pharmacokinetic studies using deconvolution methods and non-compartmental analysis to model clinical absorption of drugs are not well represented in the literature. The purpose of this research was (1) to define the system of equations for description of rotigotine (a dopamine receptor agonist delivered via a transdermal patch) absorption based on a pharmacokinetic model and (2) to describe the kinetics of rotigotine disposition after single and multiple dosing. The kinetics of drug disposition were evaluated based on rotigotine plasma concentration data from three phase 1 trials. In two trials, rotigotine was administered via a single patch over 24 h in healthy subjects. In a third trial, rotigotine was administered once daily over 1 month in subjects with early-stage Parkinson's disease (PD). A pharmacokinetic model utilizing deconvolution methods was developed to describe the relationship between drug release from the patch and plasma concentrations. Plasma concentration-time profiles were modeled based on a one-compartment model with a time lag, a zero-order input (describing constant absorption through the skin into the central circulation) and first-order elimination. Corresponding mathematical models for single- and multiple-dose administration were developed. After single-dose administration of rotigotine patches (2, 4 or 8 mg/day) in healthy subjects, constant in vivo absorption was present after a minor time lag (2-3 h). On days 27 and 30 of the multiple-dose study in patients with PD, absorption was constant during patch-on periods and resembled zero-order kinetics. Deconvolution based on rotigotine pharmacokinetic profiles after single- or multiple-dose administration of the once-daily patch demonstrated that in vivo absorption of rotigotine showed constant input through the skin into the central circulation (resembling zero-order kinetics). Continuous absorption through the skin is a basis for stable drug exposure.
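
    The model structure described here (one compartment, lag time, zero-order input over the patch-on period, first-order elimination) can be sketched directly; the rate, volume, lag, and elimination constants below are illustrative placeholders, not the fitted rotigotine estimates:

    ```python
    import math

    def conc(t, k0=0.2, V=4000.0, ke=0.12, tlag=2.5, T=24.0):
        """Plasma concentration for a one-compartment model with lag tlag (h),
        zero-order input of rate k0 (mg/h) over T hours, and first-order
        elimination constant ke (1/h). All parameter values are hypothetical."""
        if t <= tlag:
            return 0.0                              # nothing absorbed yet
        tau = min(t - tlag, T)                      # duration of input so far
        c = k0 / (V * ke) * (1.0 - math.exp(-ke * tau))
        if t > tlag + T:                            # patch removed: pure decay
            c *= math.exp(-ke * (t - tlag - T))
        return c
    ```

    During the patch-on period the concentration rises toward the steady-state plateau k0/(V·ke), the signature of zero-order absorption with first-order elimination.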

  16. Radiative breaking of the minimal supersymmetric left–right model

    DOE PAGES

    Okada, Nobuchika; Papapietro, Nathan

    2016-03-03

    We study a variation of the SUSY left-right symmetric model based on the gauge group SU(3)_c × SU(2)_L × SU(2)_R × U(1)_{B-L}. Beyond the quark and lepton superfields, we introduce only a second Higgs bidoublet to produce realistic fermion mass matrices. This model does not include any SU(2)_R triplets. We also calculate the renormalization group evolution of the soft SUSY parameters at the one-loop level down to low energy. We find that an SU(2)_R slepton doublet acquires a negative mass squared at low energies, so that the breaking of SU(2)_R × U(1)_{B-L} → U(1)_Y is realized by a non-zero vacuum expectation value of a right-handed sneutrino. Small neutrino masses are produced through neutrino mixing with gauginos. We obtain mass limits on the SU(2)_R × U(1)_{B-L} sector from direct search results at the LHC as well as lepton-gaugino mixing bounds from the LEP precision data.

  17. Rotational diffusion of a molecular cat

    NASA Astrophysics Data System (ADS)

    Katz-Saporta, Ori; Efrati, Efi

    We show that a simple isolated system can perform a rotational random walk on account of internal excitations alone. We consider the classical dynamics of a "molecular cat": a triatomic molecule connected by three harmonic springs with non-zero rest lengths, suspended in free space. In this system, much like for falling cats, the angular momentum constraint is non-holonomic, allowing for rotations with zero overall angular momentum. The geometric nonlinearities arising from the non-zero rest lengths of the springs suffice to break integrability and lead to chaotic dynamics. The coupling of the non-integrability of the system and its non-holonomic nature results in an angular random walk of the molecule. We study the properties and dynamics of this angular motion analytically and numerically. For low-energy excitations the system displays normal-mode-like motion, while for high enough excitation energy we observe a regular random walk. At intermediate energies, we observe an angular Lévy-walk-type motion associated with a fractional diffusion coefficient interpolating between the two regimes.

  18. Nonlinear threshold effect in the Z-scan method of characterizing limiters for high-intensity laser light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tereshchenko, S. A., E-mail: tsa@miee.ru; Savelyev, M. S.; Podgaetsky, V. M.

    A threshold model is described which permits one to determine the properties of limiters for high-powered laser light. It takes into account the threshold characteristics of the nonlinear optical interaction between the laser beam and the limiter working material. The traditional non-threshold model is a particular case of the threshold model when the limiting threshold is zero. The nonlinear characteristics of carbon nanotubes in liquid and solid media are obtained from experimental Z-scan data. Specifically, the nonlinear threshold effect was observed for aqueous dispersions of nanotubes, but not for nanotubes in solid polymethylmethacrylate. The threshold model fits the experimental Z-scan data better than the non-threshold model. Output characteristics were obtained that integrally describe the nonlinear properties of the optical limiters.

  19. Holography for Heavy Ions Collisions at LHC and NICA

    NASA Astrophysics Data System (ADS)

    Aref'eva, Irina

    2017-12-01

    This is a contribution to the Proceedings of the 5th International Conference on New Frontiers in Physics (ICNFP 2016), held in Crete, 6-14 July 2016. Our goal is to obtain phenomenologically reliable insights into the physics of the quark-gluon plasma (QGP) from holography. I briefly review how, in the holographic setup, one can describe QGP formation in heavy-ion collisions and how to obtain quantitatively its main characteristics: the total multiplicity and the thermalization time. To fit the experimental dependence of total multiplicity on energy obtained at the LHC, we have to deal with a special anisotropic holographic model related to a Lifshitz-type background. Our conjecture is that this Lifshitz-type background with non-zero chemical potential can be used to describe future data expected from NICA. In particular, we present the results of calculations of the holographic confinement/deconfinement phase transition in the (µ, T) (chemical potential, temperature) plane in this anisotropic background and show the dependence of the transition line on the orientation of the quark pair. This dependence leads to a non-sharp character of the physical confinement/deconfinement transition in the (µ, T) plane. We use the bottom-up soft-wall approach, incorporating a quark-confinement deforming factor and a vector field providing the non-zero chemical potential. In this model we also estimate the holographic photon production.

  20. AdS 2 holographic dictionary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cvetic, Mirjam; Papadimitriou, Ioannis

    Here, we construct the holographic dictionary for both running and constant dilaton solutions of the two-dimensional Einstein-Maxwell-Dilaton theory that is obtained by a circle reduction from Einstein-Hilbert gravity with negative cosmological constant in three dimensions. This specific model ensures that the dual theory has a well defined ultraviolet completion in terms of a two-dimensional conformal field theory, but our results apply qualitatively to a wider class of two-dimensional dilaton gravity theories. For each type of solution we perform holographic renormalization, compute the exact renormalized one-point functions in the presence of arbitrary sources, and derive the asymptotic symmetries and the corresponding conserved charges. In both cases we find that the scalar operator dual to the dilaton plays a crucial role in the description of the dynamics. Its source gives rise to a matter conformal anomaly for the running dilaton solutions, while its expectation value is the only non-trivial observable for constant dilaton solutions. The role of this operator has been largely overlooked in the literature. We further show that the only non-trivial conserved charges for running dilaton solutions are the mass and the electric charge, while for constant dilaton solutions only the electric charge is non-zero. However, by uplifting the solutions to three dimensions we show that constant dilaton solutions can support non-trivial extended symmetry algebras, including the one found by Compère, Song and Strominger, in agreement with the results of Castro and Song. Finally, we demonstrate that any solution of this specific dilaton gravity model can be uplifted to a family of asymptotically AdS_2 × S^2 or conformally AdS_2 × S^2 solutions of the STU model in four dimensions, including non-extremal black holes. As a result, the four-dimensional solutions obtained by uplifting the running dilaton solutions coincide with the so-called 'subtracted geometries', while those obtained from the uplift of the constant dilaton ones are new.

  1. AdS 2 holographic dictionary

    DOE PAGES

    Cvetic, Mirjam; Papadimitriou, Ioannis

    2016-12-02

    Here, we construct the holographic dictionary for both running and constant dilaton solutions of the two-dimensional Einstein-Maxwell-Dilaton theory that is obtained by a circle reduction from Einstein-Hilbert gravity with negative cosmological constant in three dimensions. This specific model ensures that the dual theory has a well defined ultraviolet completion in terms of a two-dimensional conformal field theory, but our results apply qualitatively to a wider class of two-dimensional dilaton gravity theories. For each type of solution we perform holographic renormalization, compute the exact renormalized one-point functions in the presence of arbitrary sources, and derive the asymptotic symmetries and the corresponding conserved charges. In both cases we find that the scalar operator dual to the dilaton plays a crucial role in the description of the dynamics. Its source gives rise to a matter conformal anomaly for the running dilaton solutions, while its expectation value is the only non-trivial observable for constant dilaton solutions. The role of this operator has been largely overlooked in the literature. We further show that the only non-trivial conserved charges for running dilaton solutions are the mass and the electric charge, while for constant dilaton solutions only the electric charge is non-zero. However, by uplifting the solutions to three dimensions we show that constant dilaton solutions can support non-trivial extended symmetry algebras, including the one found by Compère, Song and Strominger, in agreement with the results of Castro and Song. Finally, we demonstrate that any solution of this specific dilaton gravity model can be uplifted to a family of asymptotically AdS_2 × S^2 or conformally AdS_2 × S^2 solutions of the STU model in four dimensions, including non-extremal black holes. As a result, the four-dimensional solutions obtained by uplifting the running dilaton solutions coincide with the so-called 'subtracted geometries', while those obtained from the uplift of the constant dilaton ones are new.

  2. Majorana bound states from exceptional points in non-topological superconductors

    PubMed Central

    San-Jose, Pablo; Cayao, Jorge; Prada, Elsa; Aguado, Ramón

    2016-01-01

    Recent experimental efforts towards the detection of Majorana bound states have focused on creating the conditions for topological superconductivity. Here we demonstrate an alternative route, which achieves fully localised zero-energy Majorana bound states when a topologically trivial superconductor is strongly coupled to a helical normal region. Such a junction can be experimentally realised by e.g. proximitizing a finite section of a nanowire with spin-orbit coupling, and combining electrostatic depletion and a Zeeman field to drive the non-proximitized (normal) portion into a helical phase. Majorana zero modes emerge in such an open system without fine-tuning as a result of charge-conjugation symmetry, and can be ultimately linked to the existence of ‘exceptional points’ (EPs) in parameter space, where two quasibound Andreev levels bifurcate into two quasibound Majorana zero modes. After the EP, one of the latter becomes non-decaying as the junction approaches perfect Andreev reflection, thus resulting in a Majorana dark state (MDS) localised at the NS junction. We show that MDSs exhibit the full range of properties associated to conventional closed-system Majorana bound states (zero-energy, self-conjugation, 4π-Josephson effect and non-Abelian braiding statistics), while not requiring topological superconductivity. PMID:26865011

  3. Zero-inflated count models for longitudinal measurements with heterogeneous random effects.

    PubMed

    Zhu, Huirong; Luo, Sheng; DeSantis, Stacia M

    2017-08-01

    Longitudinal zero-inflated count data arise frequently in substance use research when assessing the effects of behavioral and pharmacological interventions. Zero-inflated count models (e.g. zero-inflated Poisson or zero-inflated negative binomial) with random effects have been developed to analyze this type of data. In random effects zero-inflated count models, the random effects covariance matrix is typically assumed to be homogeneous (constant across subjects). However, in many situations this matrix may be heterogeneous (differ by measured covariates). In this paper, we extend zero-inflated count models to account for random effects heterogeneity by modeling their variance as a function of covariates. We show via simulation that ignoring intervention- and covariate-specific heterogeneity can produce biased covariate and random effect estimates. Moreover, those biased estimates can be rectified by correctly modeling the random effects covariance structure. The methodological development is motivated by and applied to the Combined Pharmacotherapies and Behavioral Interventions for Alcohol Dependence (COMBINE) study, the largest clinical trial of alcohol dependence performed in the United States, with 1383 individuals.
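
    A minimal simulation of the kind of heterogeneity this paper addresses: a longitudinal zero-inflated Poisson outcome whose random-intercept standard deviation depends log-linearly on a treatment covariate. All coefficients are hypothetical, chosen only to illustrate the data-generating structure, not COMBINE estimates:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_zip(n_subj=200, n_obs=6, beta0=0.5, beta_trt=-0.4,
                     alpha0=-0.5, alpha_trt=0.8, pi=0.3):
        """Simulate longitudinal ZIP counts with a heterogeneous random
        intercept: log sd_i = alpha0 + alpha_trt * treatment_i."""
        trt = rng.integers(0, 2, n_subj)            # treatment indicator
        sd = np.exp(alpha0 + alpha_trt * trt)       # covariate-specific sd
        b = rng.normal(0.0, sd)                     # subject random intercepts
        lam = np.exp(beta0 + beta_trt * trt + b)    # subject-level Poisson mean
        y = rng.poisson(lam[:, None], size=(n_subj, n_obs))
        y[rng.random((n_subj, n_obs)) < pi] = 0     # structural zeros
        return trt, y

    trt, y = simulate_zip()
    ```

    Fitting a model that wrongly assumes a common random-effect variance to data like these is what produces the bias the authors describe.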

  4. Dunkl operator, integrability, and pairwise scattering in rational Calogero model

    NASA Astrophysics Data System (ADS)

    Karakhanyan, David

    2017-05-01

    The integrability of the Calogero model can be expressed as a zero-curvature condition using Dunkl operators. The corresponding flat connections are non-local gauge transformations, which map the Calogero wave functions to symmetrized wave functions of a set of N free particles, i.e. they relate the corresponding scattering matrices to each other. The integrability of the Calogero model implies that any k-particle scattering reduces to successive pairwise scatterings. The consistency condition of this requirement is expressed by the analog of the Yang-Baxter relation.

  5. Zero-field magnetic response functions in Landau levels

    PubMed Central

    Gao, Yang; Niu, Qian

    2017-01-01

    We present a fresh perspective on the Landau level quantization rule; that is, by successively including zero-field magnetic response functions at zero temperature, such as zero-field magnetization and susceptibility, Onsager's rule can be corrected order by order. Such a perspective is further reinterpreted as a quantization of the semiclassical electron density in solids. Our theory not only reproduces Onsager's rule at zeroth order and the Berry phase and magnetic moment correction at first order but also explains the nature of higher-order corrections in a universal way. In applications, those higher-order corrections are expected to curve the linear relation between the level index and the inverse of the magnetic field, as already observed in experiments. Our theory then provides a way to extract the correct value of the Berry phase as well as the magnetic susceptibility at zero temperature from Landau level fan diagrams in experiments. Moreover, it can be used theoretically to calculate Landau levels up to second-order accuracy for realistic models. PMID:28655849
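
    The first-order picture referred to here, a linear Landau fan whose intercept encodes the Berry phase, can be illustrated with synthetic data; the oscillation frequency and phase below are made up for the example, and higher-order corrections (which would curve this line) are omitted:

    ```python
    import numpy as np

    # Synthetic linear Landau fan: n-th level at 1/B_n = (n + 1/2 - berry/(2*pi)) / F
    F, berry = 350.0, np.pi          # illustrative frequency (tesla) and Berry phase
    n = np.arange(1, 12)
    invB = (n + 0.5 - berry / (2 * np.pi)) / F

    # Linear fit of level index against 1/B: slope recovers F,
    # intercept recovers the Berry phase
    slope, intercept = np.polyfit(invB, n, 1)
    phase = 2 * np.pi * (intercept + 0.5)
    ```

    With real data, deviations of the points from this straight line are exactly the higher-order (e.g. susceptibility) corrections the paper shows how to extract.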

  6. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    PubMed

    Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
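
    A toy version of the generative side of this model, under stated assumptions (Gaussian slab, additive observation noise; the paper's exact priors and noise model may differ): a Bernoulli spike yields exact zeros for absent components, a Gaussian slab yields the non-zero intensities, and components combine through a pointwise max rather than a linear sum:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def generate(W, pi=0.2, mu=1.0, sigma=0.3):
        """Spike-and-slab + max-combination generative sketch.
        W: (H, D) dictionary of H components over D pixels.
        pi, mu, sigma are hypothetical hyperparameters."""
        H, D = W.shape
        s = rng.random(H) < pi                    # spike: is component present?
        z = rng.normal(mu, sigma, H)              # slab: non-zero intensity
        coeff = np.where(s, z, 0.0)               # exact zeros for absent parts
        clean = (coeff[:, None] * W).max(axis=0)  # nonlinear max combination
        return clean + rng.normal(0.0, 0.05, D)   # additive observation noise

    W = np.abs(rng.normal(size=(8, 16)))          # toy non-negative dictionary
    x = generate(W)
    ```

    The max rule is what lets a foreground component occlude another instead of adding to it, which is the occlusion-modelling motivation given in the abstract.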

  7. Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding

    PubMed Central

    Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947

  8. Persistently Auxetic Materials: Engineering the Poisson Ratio of 2D Self-Avoiding Membranes under Conditions of Non-Zero Anisotropic Strain.

    PubMed

    Ulissi, Zachary W; Govind Rajan, Ananth; Strano, Michael S

    2016-08-23

    Entropic surfaces represented by fluctuating two-dimensional (2D) membranes are predicted to have desirable mechanical properties when unstressed, including a negative Poisson's ratio ("auxetic" behavior). Herein, we present calculations of the strain-dependent Poisson ratio of self-avoiding 2D membranes demonstrating desirable auxetic properties over a range of mechanical strain. Finite-size membranes with unclamped boundary conditions have positive Poisson's ratio due to spontaneous non-zero mean curvature, which can be suppressed with an explicit bending rigidity in agreement with prior findings. Applying longitudinal strain along a singular axis to this system suppresses this mean curvature and the entropic out-of-plane fluctuations, resulting in a molecular-scale mechanism for realizing a negative Poisson's ratio above a critical strain, with values significantly more negative than the previously observed zero-strain limit for infinite sheets. We find that auxetic behavior persists over surprisingly high strains of more than 20% for the smallest surfaces, with desirable finite-size scaling producing surfaces with negative Poisson's ratio over a wide range of strains. These results promise the design of surfaces and composite materials with tunable Poisson's ratio by prestressing platelet inclusions or controlling the surface rigidity of a matrix of 2D materials.

  9. Frostbite of the liver: an unrecognized cause of primary non-function?

    PubMed

    Potanos, Kristina; Kim, Heung Bae

    2014-02-01

    Appropriate hypothermic packaging techniques are an essential part of organ procurement. We present a case in which deviation from standard packaging practice may have caused sub-zero storage temperatures during transport, resulting in a clinical picture resembling PNF. An 18-month-old male with alpha-1-antitrypsin deficiency underwent liver transplant from a size-matched pediatric donor. Upon arrival at the recipient hospital, ice crystals were noted in the UW solution. The transplant proceeded uneventfully with short ischemia times. Surprisingly, transaminases, INR, and total bilirubin were markedly elevated in the postoperative period but returned to near normal by discharge. Follow-up of over five yr has demonstrated normal liver function. Upon review, it was discovered that organ packaging during recovery included storage in the first bag with only 400 mL of UW solution, and pure ice in the second bag instead of slush. This suggests that the postoperative delayed graft function was related to sub-zero storage of the graft during transport. This is the first report of sub-zero cold injury, or frostbite, following inappropriate packaging of an otherwise healthy donor liver. The clinical picture closely resembled PNF, perhaps implicating this mechanism in other unexpected cases of graft non-function. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  10. Active and passive controls of Jeffrey nanofluid flow over a nonlinear stretching surface

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Aziz, Arsalan; Muhammad, Taseer; Alsaedi, Ahmed

    This communication explores magnetohydrodynamic (MHD) boundary-layer flow of a Jeffrey nanofluid over a nonlinear stretching surface with active and passive controls of nanoparticles. A nonlinear stretching surface generates the flow. Effects of thermophoresis and Brownian diffusion are considered. The Jeffrey fluid is electrically conducting, subject to a non-uniform magnetic field. Low magnetic Reynolds number and boundary-layer approximations have been adopted in the mathematical modelling. The phenomenon of impelling the particles away from the surface, in combination with a zero mass flux condition at the boundary, is known as passive control of the nanoparticles. Convergent series solutions for the nonlinear governing system are established through the optimal homotopy analysis method (OHAM). Graphs are sketched in order to analyze how the temperature and concentration distributions are affected by distinct physical flow parameters. The skin friction coefficient and the local Nusselt and Sherwood numbers are also computed and analyzed. Our findings show that the temperature and concentration distributions are increasing functions of the Hartman number and the thermophoresis parameter.

  11. On the Fermi-GBM Event 0.4 s after GW150914

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greiner, J.; Yu, H.-F.; Burgess, J. M.

    In view of the recent report by Connaughton et al., we analyze continuous time-tagged event (TTE) data of the Fermi Gamma-ray Burst Monitor (GBM) around the time of the gravitational-wave event GW150914. We find that after proper accounting for low-count statistics, the GBM transient event at 0.4 s after GW150914 is likely not due to an astrophysical source, but consistent with a background fluctuation, removing the tension between the INTEGRAL/ACS non-detection and GBM. Additionally, reanalysis of other short GRBs shows that without proper statistical modeling the fluence of faint events is over-predicted, as verified for some joint GBM–ACS detections of short GRBs. We detail the statistical procedure to correct these biases. As a result, faint short GRBs, verified by ACS detections, with significances in the broadband light curve even smaller than that of the GBM–GW150914 event are recovered as proper non-zero sources, while the GBM–GW150914 event is consistent with zero fluence.
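
    The low-count bias discussed here can be demonstrated with a toy Monte Carlo (not the paper's actual procedure): for a source with zero true fluence observed on a known background, a naive estimator that floors background-subtracted counts at zero has a positive expectation, over-predicting faint sources:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Background-only simulation: true source rate is zero, expected
    # background b = 5 counts per window (illustrative value)
    b = 5.0
    n = rng.poisson(b, size=100_000)

    # Naive estimator floors the subtraction at zero, so zero-fluence
    # sources acquire a positive average "signal" -- the bias mechanism
    naive = np.maximum(n - b, 0.0)
    print(naive.mean())   # positive, although the true source contribution is 0
    ```

    The plain subtraction n - b is unbiased but can go negative; clipping it restores non-negativity at the price of the upward bias, which is why a proper low-count statistical treatment is needed.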

  12. Zero-Sum Bias: Perceived Competition Despite Unlimited Resources

    PubMed Central

    Meegan, Daniel V.

    2010-01-01

    Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students’ grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed. PMID:21833251

  13. Zero-sum bias: perceived competition despite unlimited resources.

    PubMed

    Meegan, Daniel V

    2010-01-01

    Zero-sum bias describes intuitively judging a situation to be zero-sum (i.e., resources gained by one party are matched by corresponding losses to another party) when it is actually non-zero-sum. The experimental participants were students at a university where students' grades are determined by how the quality of their work compares to a predetermined standard of quality rather than to the quality of the work produced by other students. This creates a non-zero-sum situation in which high grades are an unlimited resource. In three experiments, participants were shown the grade distribution after a majority of the students in a course had completed an assigned presentation, and asked to predict the grade of the next presenter. When many high grades had already been given, there was a corresponding increase in low grade predictions. This suggests a zero-sum bias, in which people perceive a competition for a limited resource despite unlimited resource availability. Interestingly, when many low grades had already been given, there was not a corresponding increase in high grade predictions. This suggests that a zero-sum heuristic is only applied in response to the allocation of desirable resources. A plausible explanation for the findings is that a zero-sum heuristic evolved as a cognitive adaptation to enable successful intra-group competition for limited resources. Implications for understanding inter-group interaction are also discussed.

  14. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach.

    PubMed

    Mohammadi, Tayeb; Kheiri, Soleiman; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables "number of blood donations" and "number of blood deferrals": as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models.
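
    One standard construction of a bivariate zero-inflated Poisson, sketched here with illustrative rates: a shared Poisson component induces the positive correlation between the two counts, and a point mass at (0, 0) represents donors who never return. This is the trivariate-reduction form; the paper's regression version additionally links the rates to covariates:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def bivariate_zip(n, lam0=1.0, lam1=2.0, lam2=1.5, pi=0.4):
        """Bivariate ZIP via a common shock: X1 = U + W1, X2 = U + W2 with
        U ~ Poisson(lam0) shared, plus a structural point mass at (0, 0).
        All rates are hypothetical."""
        u = rng.poisson(lam0, n)            # shared component -> correlation
        x1 = u + rng.poisson(lam1, n)       # e.g. number of blood donations
        x2 = u + rng.poisson(lam2, n)       # e.g. number of blood deferrals
        zero = rng.random(n) < pi           # donors who never return
        x1[zero] = 0
        x2[zero] = 0
        return x1, x2

    x1, x2 = bivariate_zip(50_000)
    ```

    Both features the abstract highlights are visible in the simulated data: the pairwise correlation is positive, and the share of (0, 0) pairs exceeds what either marginal Poisson would predict.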

  15. Analysis of Blood Transfusion Data Using Bivariate Zero-Inflated Poisson Model: A Bayesian Approach

    PubMed Central

    Mohammadi, Tayeb; Sedehi, Morteza

    2016-01-01

    Recognizing the factors affecting the number of blood donations and blood deferrals has a major impact on blood transfusion. There is a positive correlation between the variables “number of blood donations” and “number of blood deferrals”: as the number of returns for donation increases, so does the number of blood deferrals. On the other hand, because many donors never return to donate, there is an excess zero frequency for both of the above-mentioned variables. In this study, in order to account for the correlation and the excess zero frequency, the bivariate zero-inflated Poisson regression model was used for joint modeling of the number of blood donations and the number of blood deferrals. The data were analyzed using the Bayesian approach, applying noninformative priors in the presence and absence of covariates. Estimation of the model parameters, that is, the correlation, the zero-inflation parameter, and the regression coefficients, was done through MCMC simulation. Finally, the double-Poisson, bivariate Poisson, and bivariate zero-inflated Poisson models were fitted to the data and compared using the deviance information criterion (DIC). The results showed that the bivariate zero-inflated Poisson regression model fitted the data better than the other models. PMID:27703493

  16. Thermodynamic model of social influence on two-dimensional square lattice: Case for two features

    NASA Astrophysics Data System (ADS)

    Genzor, Jozef; Bužek, Vladimír; Gendiar, Andrej

    2015-02-01

    We propose a thermodynamic multi-state spin model in order to describe equilibrium behavior of a society. Our model is inspired by the Axelrod model used in social network studies. In the framework of the statistical mechanics language, we analyze phase transitions of our model, in which the spin interaction J is interpreted as a mutual communication among individuals forming a society. The thermal fluctuations introduce a noise T into the communication, which suppresses long-range correlations. Below a certain phase transition point Tt, large-scale clusters of the individuals, who share a specific dominant property, are formed. The measure of the cluster sizes is an order parameter after spontaneous symmetry breaking. By means of the Corner transfer matrix renormalization group algorithm, we treat our model in the thermodynamic limit and classify the phase transitions with respect to inherent degrees of freedom. Each individual is chosen to possess two independent features f = 2, and each feature can assume one of q traits (e.g. interests). Hence, each individual is described by q^2 degrees of freedom. A single first-order phase transition is detected in our model if q > 2, whereas two distinct continuous phase transitions are found only if q = 2. Evaluating the free energy, order parameters, specific heat, and the entanglement von Neumann entropy, we classify the phase transitions Tt(q) in detail. The permanent existence of the ordered phase (the large-scale cluster formation with a non-zero order parameter) is conjectured below a non-zero transition point Tt(q) ≈ 0.5 in the asymptotic regime q → ∞.

  17. Nonperturbative quantization of the electroweak model's electrodynamic sector

    NASA Astrophysics Data System (ADS)

    Fry, M. P.

    2015-04-01

    Consider the Euclidean functional integral representation of any physical process in the electroweak model. Integrating out the fermion degrees of freedom introduces 24 fermion determinants. These multiply the Gaussian functional measures of the Maxwell, Z , W , and Higgs fields to give an effective functional measure. Suppose the functional integral over the Maxwell field is attempted first. This paper is concerned with the large amplitude behavior of the Maxwell effective measure. It is assumed that the large amplitude variation of this measure is insensitive to the presence of the Z , W , and H fields; they are assumed to be a subdominant perturbation of the large amplitude Maxwell sector. Accordingly, we need only examine the large amplitude variation of a single QED fermion determinant. To facilitate this the Schwinger proper time representation of this determinant is decomposed into a sum of three terms. The advantage of this is that the separate terms can be nonperturbatively estimated for a measurable class of large amplitude random fields in four dimensions. It is found that the QED fermion determinant grows faster than exp [c e2∫d4x Fμν 2] , c >0 , in the absence of zero mode supporting random background potentials. This raises doubt on whether the QED fermion determinant is integrable with any Gaussian measure whose support does not include zero mode supporting potentials. Including zero mode supporting background potentials can result in a decaying exponential growth of the fermion determinant. This is prima facie evidence that Maxwellian zero modes are necessary for the nonperturbative quantization of QED and, by implication, for the nonperturbative quantization of the electroweak model.

  18. HCN and CN in Comet 2P/Encke: Models of the non-isotropic, rotation-modulated coma and CN parent life time

    NASA Astrophysics Data System (ADS)

    Jockers, K.; Szutowicz, S.; Villanueva, G.; Bonev, T.; Hartogh, P.

    2011-09-01

    Axisymmetric models of the outgassing of a cometary nucleus have been constructed. Such models can be used to describe a nucleus with a single active region. The models may include a solar zenith angle dependence of the outgassing. They retrieve the outgassing flux at distances from the nucleus where collisions between molecules are unimportant, as a function of the angle with respect to the outgassing axis. The observed emissions must be optically thin. Furthermore, the models assume that the outflow speed at large distance from the nucleus does not depend on direction. The value of the outflow speed is retrieved. The models are applied to CN images and HCN spectra of Comet 2P/Encke, obtained nearly simultaneously in November 2003 with the 2 m optical telescope on Mount Rozhen, Bulgaria, and with the 10 m Heinrich Hertz Submillimeter Telescope on Mount Graham, Arizona, USA. According to Sekanina (1988), Astron. J. 95, 911-924, at that time a single outgassing source was active. Input parameters to the models, such as the rotation period of the nucleus and a small correction to Sekanina's rotation axis, are determined from a simpler jet position angle model. The rotation is prograde with a sidereal period of 11.056 ± 0.024 h, in agreement with literature values. The best fit model has an outflow speed of 0.95 ± 0.04 km s^-1. The same value has been derived from the corkscrew appearing in the CN images. The location of the outgassing axis is at colatitude δa = 7.4° ± 2.9° and longitude λa = 235° ± 17° (a definition of zero longitude is provided). Comet Encke's outgassing corresponds approximately to the longitudinally averaged solar input on a spherical nucleus (i.e. very likely comes from deeper layers) but with some deficiency of outgassing at mid-latitudes and non-zero outgassing from the dark polar cap. The presence of gas flow from the dark polar cap is explained as evidence of gas flow across the terminator. The models rely mostly on the CN images. 
The HCN spectra are noisier. They provide information on how to determine the best fit outflow velocity and the sense of rotation. The model HCN spectra are distinctly non-Gaussian. Within error limits they are consistent with the observations. Models based solely on the HCN spectra are also presented but, because of the lower quality of the data and the unfavorable observing geometry, yield inferior results. As a by-product we determine the CN parent life time from our CN observations. The solar EUV and Ly α radiation field at the time of our observations is taken into account.

  19. Influence of the starting temperature of calorimetric measurements on the accuracy of determined magnetocaloric effect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreno-Ramirez, L. M.; Franco, V.; Conde, A.

    Availability of a restricted heat capacity data range has a clear influence on the accuracy of calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data starting from a selected temperature point down to zero kelvin deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and Gd5Si2Ge2 compound measured from near zero temperatures.

  20. Influence of the starting temperature of calorimetric measurements on the accuracy of determined magnetocaloric effect

    DOE PAGES

    Moreno-Ramirez, L. M.; Franco, V.; Conde, A.; ...

    2018-02-27

    Availability of a restricted heat capacity data range has a clear influence on the accuracy of calculated magnetocaloric effect, as confirmed by both numerical simulations and experimental measurements. Simulations using the Bean-Rodbell model show that, in general, the approximated magnetocaloric effect curves calculated using a linear extrapolation of the data starting from a selected temperature point down to zero kelvin deviate in a non-monotonic way from those correctly calculated by fully integrating the data from near zero temperatures. However, we discovered that a particular temperature range exists where the approximated magnetocaloric calculation provides the same result as the fully integrated one. These specific truncated intervals exist for both first and second order phase transitions and are the same for the adiabatic temperature change and magnetic entropy change curves. Here, the effect of this truncated integration in real samples was confirmed using heat capacity data of Gd metal and Gd5Si2Ge2 compound measured from near zero temperatures.
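The truncated-integration effect described above can be reproduced with synthetic heat-capacity curves. The toy background-plus-peak form below is an assumed stand-in, not the Bean-Rodbell model or the measured Gd data; the entropy is S(T) = ∫₀ᵀ C/T′ dT′, with the region below the starting temperature replaced by a linear extrapolation of C.

```python
import numpy as np

# Toy heat-capacity curves standing in for measured data (NOT the Bean-Rodbell
# model): a smooth background plus a field-shifted transition peak near 150 K.
T = np.linspace(0.1, 300.0, 3000)

def heat_capacity(T, H):
    background = 25.0 * T**2 / (T**2 + 40.0**2)
    peak = 8.0 * np.exp(-((T - (150.0 + 2.0 * H)) / 8.0) ** 2)
    return background + peak

def entropy(T, C, T_start):
    """S(T) = int_0^T C/T' dT', using data above T_start and a linear
    extrapolation C(T') = C(T_start) T'/T_start below it, so the
    extrapolated region contributes exactly C(T_start)."""
    m = T >= T_start
    Tm, Cm = T[m], C[m]
    steps = 0.5 * (Cm[1:] / Tm[1:] + Cm[:-1] / Tm[:-1]) * np.diff(Tm)
    return Tm, Cm[0] + np.concatenate(([0.0], np.cumsum(steps)))

def delta_S(T_start, T_eval=200.0, H=5.0):
    """Isothermal entropy change S(T_eval, H) - S(T_eval, 0)."""
    Ta, Sa = entropy(T, heat_capacity(T, 0.0), T_start)
    Tb, Sb = entropy(T, heat_capacity(T, H), T_start)
    return np.interp(T_eval, Tb, Sb) - np.interp(T_eval, Ta, Sa)

dS_full = delta_S(T[0])   # "fully integrated" from near zero kelvin
dS_ok = delta_S(100.0)    # truncation well below the transition: agrees
dS_bad = delta_S(140.0)   # truncation inside the transition: large deviation
```

Starting the integration well below the transition reproduces the full result because the field-dependent part of C cancels there, mirroring the "specific truncated intervals" noted in the abstract; starting inside the peak does not.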

  1. The effects of diffusion in hot subdwarf progenitors from the common envelope channel

    NASA Astrophysics Data System (ADS)

    Byrne, Conor M.; Jeffery, C. Simon; Tout, Christopher A.; Hu, Haili

    2018-04-01

    Diffusion of elements in the atmosphere and envelope of a star can drastically alter its surface composition, leading to extreme chemical peculiarities. We consider the case of hot subdwarfs, where surface helium abundances range from practically zero to almost 100 percent. Since hot subdwarfs can form via a number of different evolution channels, a key question concerns how the formation mechanism is connected to the present surface chemistry. A sequence of extreme horizontal branch star models was generated by producing post-common envelope stars from red giants. Evolution was computed with MESA from envelope ejection up to core-helium ignition. Surface abundances were calculated at the zero-age horizontal branch for models with and without diffusion. A number of simulations also included radiative levitation. The goal was to study surface chemistry during evolution from cool giant to hot subdwarf and determine when the characteristic subdwarf surface is established. Only stars leaving the giant branch close to core-helium ignition become hydrogen-rich subdwarfs at the zero-age horizontal branch. Diffusion, including radiative levitation, depletes the initial surface helium in all cases. All subdwarf models rapidly become more depleted than observations allow. Surface abundances of other elements follow observed trends in general, but not in detail. Additional physics is required.

  2. A Simple Treatment of the Liquidity Trap for Intermediate Macroeconomics Courses

    ERIC Educational Resources Information Center

    Buttet, Sebastien; Roy, Udayan

    2014-01-01

    Several leading undergraduate intermediate macroeconomics textbooks now include a simple reduced-form New Keynesian model of short-run dynamics (alongside the IS-LM model). Unfortunately, there is no accompanying description of how the zero lower bound on nominal interest rates affects the model. In this article, the authors show how the…
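The abstract is truncated, but the zero lower bound it refers to is conventionally imposed by truncating a Taylor-type interest-rate rule at zero. The rule and numbers below are an assumed textbook illustration, not necessarily the reduced-form model discussed in the article.

```python
def nominal_rate(pi, r_star=2.0, pi_target=2.0, phi=1.5):
    """Desired rate r* + pi + phi*(pi - pi_target), floored at zero (ZLB).
    All parameter values are illustrative."""
    return max(0.0, r_star + pi + phi * (pi - pi_target))

# Deflation or low inflation pins the rate at zero; the rule only binds above.
rates = [nominal_rate(pi) for pi in (-3.0, 0.0, 2.0, 4.0)]  # [0.0, 0.0, 4.0, 9.0]
```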

  3. On stable exponential cosmological solutions with non-static volume factor in the Einstein-Gauss-Bonnet model

    NASA Astrophysics Data System (ADS)

    Ivashchuk, V. D.; Ernazarov, K. K.

    2017-01-01

    A (n + 1)-dimensional gravitational model with cosmological constant and Gauss-Bonnet term is studied. The ansatz with diagonal cosmological metrics is adopted and solutions with exponential dependence of scale factors: a_i ~ exp(v_i t), i = 1, …, n, are considered. The stability analysis of the solutions with non-static volume factor is presented. We show that the solutions with v_1 = v_2 = v_3 = H > 0 and small enough variation of the effective gravitational constant G are stable if a certain restriction on (v_i) is obeyed. New examples of stable exponential solutions with zero variation of G in dimensions D = 1 + m + 2 with m > 2 are presented.

  4. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method.

    PubMed

    Norris, Peter M; da Silva, Arlindo M

    2016-07-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
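A random-walk Metropolis sketch of the kind of sampler described above, targeting a skewed-triangle density as a stand-in for the layer-moisture PDF. The density parameters, proposal scale, and chain length are illustrative choices, not the paper's; the point is that the sampler needs no gradient and can jump into any region of non-zero probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def triangle_pdf(x, c=0.2):
    """Skewed-triangle density on [0, 1] with mode at c (an illustrative
    stand-in for the paper's layer-moisture distribution)."""
    if x < 0.0 or x > 1.0:
        return 0.0
    return 2.0 * x / c if x < c else 2.0 * (1.0 - x) / (1.0 - c)

def metropolis(target, x0=0.5, n=50_000, step=0.2):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) and accept
    with probability min(1, target(x') / target(x))."""
    x, fx = x0, target(x0)
    chain = np.empty(n)
    for i in range(n):
        xp = x + step * rng.standard_normal()
        fp = target(xp)
        if fp > 0 and rng.random() < fp / fx:
            x, fx = xp, fp
        chain[i] = x
    return chain

chain = metropolis(triangle_pdf)
mean = chain[5000:].mean()  # true mean of triangular(0, mode=0.2, 1) is 0.4
```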

  5. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    NASA Technical Reports Server (NTRS)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.

  6. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    PubMed Central

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847

  7. Prediction of Transitional Flows in the Low Pressure Turbine

    NASA Technical Reports Server (NTRS)

    Huang, George; Xiong, Guohua

    1998-01-01

    Current turbulence models tend to predict transition to turbulence too early and over too short a length, and hence fail to predict flow separation induced by adverse pressure gradients and streamline curvature. Our discussion will focus on the development and validation of transition models. The baseline data for model comparisons are the T3 series, which include a range of free-stream turbulence intensities and cover zero-pressure-gradient to aft-loaded turbine pressure-gradient flows. The method will be based on the conditioned N-S equations and a transport equation for the intermittency factor. First, several of the most popular two-equation models for predicting flow transition are examined: the k-epsilon [Launder-Sharma], k-omega [Wilcox], Lien-Leschziner and SST [Menter] models. All models fail to predict the onset and the length of transition, even for the simplest flat plate with zero pressure gradient (T3A). Although the predicted onset position of transition can be varied by providing different inlet turbulent energy dissipation rates, the appropriate inlet conditions for turbulence quantities should be adjusted to match the decay of the free-stream turbulence. Arguably, one may adjust the low-Reynolds-number part of the model to predict transition. This approach has so far not been very successful. However, we have found that the low-Reynolds-number model of Launder and Sharma [1974], which is an improved version of Jones and Launder [1972], gave the best overall performance. The Launder and Sharma model was designed to capture flow re-laminarization (the reverse of flow transition), but tends to give a transition that is too early and too fast in comparison with the physical transition. The three test cases were for flows with zero pressure gradient but with different free-stream turbulence intensities. The same can be said about the model when considering flows subject to a pressure gradient (T3C1). 
To capture the effects of transition using existing turbulence models, one approach is to make use of the concept of intermittency to predict the flow transition. It was originally based on the intermittency distribution of Narasimha [1957], and then gradually evolved into a transport equation for the intermittency factor. Gostelow and associates [1994, 1995] have made some improvements to Narasimha's method in an attempt to account for both favorable and adverse pressure gradients. Their approach is based on a linear, explicit combination of laminar and turbulent solutions. This approach fails to predict the overshoot of the skin friction on a flat plate near the end of the transition zone, even though the length of transition is well predicted. The major flaw of Gostelow's approach is that it assumes that the non-turbulent part is the laminar solution and the turbulent part is the turbulent solution, and that the two do not interact across the transitional region. The technique of conditionally averaging the flow equations in intermittent flows was first introduced by Libby [1975] and Dopazo [1977] and further refined by Dick and associates [1988, 1996]. This approach employs two sets of transport equations, one for the non-turbulent part and the other for the turbulent part. The advantage of this approach is that it allows the interaction of non-turbulent and turbulent velocities through the introduction of additional source terms in the continuity and momentum equations for the non-turbulent and turbulent velocities. However, the strong coupling of the two sets of equations has caused some numerical difficulties, which require special attention. The prediction of the skin friction can be improved by this approach via the implicit coupling of the non-turbulent and turbulent velocity fields. A further improvement of the intermittency model can be made by allowing the intermittency to vary in the cross-stream direction. 
This is one step prior to testing any proposal for the transport equation for the intermittency factor. Instead of solving the transport equation for the intermittency factor, the distribution for the intermittency factor is prescribed by Klebanoff's empirical formula [1955]. The skin friction is very well predicted by this new modification, including the overshoot of the profile near the end of the transition zone. The outcome of this study is very encouraging since it indicates that the proper description of the intermittency distribution is the key to the success of the model prediction. This study will be used to guide us on the modelling of the intermittency transport equation.
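The linear laminar/turbulent blend discussed above can be sketched in a few lines. The Dhawan-Narasimha intermittency form and the transition onset/extent Reynolds numbers here are assumed for illustration (the study itself uses Klebanoff's cross-stream profile and transport-equation approaches), and the skin-friction correlations are standard flat-plate fits.

```python
import numpy as np

def skin_friction(re_x, re_t=3.0e5, re_lambda=1.5e5):
    """Gostelow-style linear blend cf = (1 - gamma) cf_lam + gamma cf_turb
    with a Dhawan-Narasimha intermittency gamma = 1 - exp(-0.412 xi^2).
    re_t (onset) and re_lambda (extent) are illustrative values."""
    xi = np.maximum(re_x - re_t, 0.0) / re_lambda
    gamma = 1.0 - np.exp(-0.412 * xi**2)   # streamwise intermittency factor
    cf_lam = 0.664 / np.sqrt(re_x)         # Blasius laminar correlation
    cf_turb = 0.0592 * re_x ** (-0.2)      # power-law turbulent correlation
    return (1.0 - gamma) * cf_lam + gamma * cf_turb

re_x = np.array([1e5, 3e5, 6e5, 2e6])      # pre-, onset, mid-, post-transition
cf = skin_friction(re_x)
```

Because the blend is explicit and the two states never interact, cf rises monotonically toward the turbulent correlation: exactly the construction that, as noted above, cannot reproduce the overshoot near the end of the transition zone.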

  8. A low dimensional dynamical system for the wall layer

    NASA Technical Reports Server (NTRS)

    Aubry, N.; Keefe, L. R.

    1987-01-01

    Low dimensional dynamical systems which model a fully developed turbulent wall layer were derived. The model is based on the optimally fast convergent proper orthogonal decomposition, or Karhunen-Loeve expansion. This decomposition provides a set of eigenfunctions which are derived from the autocorrelation tensor at zero time lag. Via Galerkin projection, low dimensional sets of ordinary differential equations in time, for the coefficients of the expansion, were derived from the Navier-Stokes equations. The energy loss to the unresolved modes was modeled by an eddy viscosity representation, analogous to Heisenberg's spectral model. A set of eigenfunctions and eigenvalues were obtained from direct numerical simulation of a plane channel at a Reynolds number of 6600, based on the mean centerline velocity and the channel width, and compared with previous work done by Herzog. Using the new eigenvalues and eigenfunctions, a new ten dimensional set of ordinary differential equations was derived using five non-zero cross-stream Fourier modes with a periodic length of 377 wall units. The dynamical system was integrated for a range of the eddy viscosity parameter alpha. This work is encouraging.
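The decomposition step can be sketched with the method of snapshots: proper orthogonal decomposition modes are the singular vectors of the mean-subtracted snapshot matrix, and the squared singular values rank the modes by captured fluctuation energy. The two-mode synthetic ensemble below is an assumed stand-in for channel-flow data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot ensemble: each snapshot is a random combination of two
# spatial structures, so the POD should recover (essentially) two modes.
x = np.linspace(0.0, 1.0, 64)
snapshots = np.array([np.sin(np.pi * x) * rng.normal(2.0, 0.3)
                      + np.sin(2.0 * np.pi * x) * rng.normal(0.0, 0.2)
                      for _ in range(200)])              # shape (n_snap, n_pts)

# POD via SVD of the mean-subtracted data (equivalent to diagonalizing the
# zero-time-lag autocorrelation).
U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=0),
                         full_matrices=False)
energy = s**2 / np.sum(s**2)   # fraction of fluctuation energy per mode
modes = Vt                     # rows are the spatial eigenfunctions
```

A Galerkin projection of the governing equations onto the leading rows of `modes` then yields the low-dimensional ODE system described above.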

  9. Deformations of vector-scalar models

    NASA Astrophysics Data System (ADS)

    Barnich, Glenn; Boulanger, Nicolas; Henneaux, Marc; Julia, Bernard; Lekeu, Victor; Ranjbar, Arash

    2018-02-01

    Abelian vector fields non-minimally coupled to uncharged scalar fields arise in many contexts. We investigate here through algebraic methods their consistent deformations ("gaugings"), i.e., the deformations that preserve the number (but not necessarily the form or the algebra) of the gauge symmetries. Infinitesimal consistent deformations are given by the BRST cohomology classes at ghost number zero. We parametrize explicitly these classes in terms of various types of global symmetries and corresponding Noether currents through the characteristic cohomology related to antifields and equations of motion. The analysis applies to all ghost numbers and not just ghost number zero. We also provide a systematic discussion of the linear and quadratic constraints on these parameters that follow from higher-order consistency. Our work is relevant to the gaugings of extended supergravities.

  10. Importance of non-flow in mixed-harmonic multi-particle correlations in small collision systems

    NASA Astrophysics Data System (ADS)

    Huo, Peng; Gajdošová, Katarína; Jia, Jiangyong; Zhou, You

    2018-02-01

    Recently the CMS Collaboration measured mixed-harmonic four-particle azimuthal correlations, known as symmetric cumulants SC (n , m), in pp and p+Pb collisions, and interpreted the non-zero SC (n , m) as evidence for long-range collectivity in these small collision systems. Using the PYTHIA and HIJING models, which do not have genuine long-range collectivity, we show that the CMS results, obtained with the standard cumulant method, could be dominated by non-flow effects associated with jets and dijets, especially in pp collisions. We show that the non-flow effects are largely suppressed using the recently proposed subevent cumulant methods, which require azimuthal correlation between two or more pseudorapidity ranges. We argue that a reanalysis of SC (n , m) using the subevent method in experiments is necessary before these measurements can be used to provide further evidence for long-range multi-particle collectivity and constraints on theoretical models in small collision systems.
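In the flow-only limit the symmetric cumulant reduces to the covariance of squared harmonic amplitudes over events, SC(n, m) = ⟨v_n² v_m²⟩ - ⟨v_n²⟩⟨v_m²⟩. The toy numbers below are invented to illustrate the sign convention; real analyses build SC(n, m) from multi-particle correlators, which is precisely where the jet-induced non-flow discussed above enters.

```python
import numpy as np

rng = np.random.default_rng(7)

# Event-by-event harmonic amplitudes (all numbers invented for illustration).
n_events = 200_000
v2 = rng.normal(0.06, 0.02, n_events)
v3 = 0.03 - 0.3 * (v2 - 0.06) + rng.normal(0.0, 0.005, n_events)  # anticorrelated with v2
v4 = rng.normal(0.02, 0.007, n_events)                            # independent of v2

def sc(vn, vm):
    """Flow-only symmetric cumulant: covariance of squared amplitudes."""
    return np.mean(vn**2 * vm**2) - np.mean(vn**2) * np.mean(vm**2)

sc32 = sc(v3, v2)   # negative: the amplitudes fluctuate in opposite directions
sc42 = sc(v4, v2)   # consistent with zero for independent amplitudes
```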

  11. Robustness against non-magnetic impurities in topological superconductors

    NASA Astrophysics Data System (ADS)

    Nagai, Y.; Ota, Y.; Machida, M.

    2014-12-01

    We study the robustness against non-magnetic impurities in a three-dimensional topological superconductor, focusing on an effective model (massive Dirac Bogoliubov-de Gennes (BdG) Hamiltonian with s-wave on-site pairing) of CuxBi2Se3 with the parameter set determined by first-principles calculation. With the use of the self-consistent T-matrix approximation for impurity scattering, we discuss the impurity-concentration dependence of the zero-energy density of states. We show that a single material variable, measuring relativistic effects in the Dirac-BdG Hamiltonian, characterizes the numerical results well. In the nonrelativistic limit, the odd-parity fully-gapped topological superconductivity is fragile against non-magnetic impurities, since this superconductivity can be mapped onto p-wave superconductivity. On the other hand, in the ultrarelativistic limit, the superconductivity is robust against non-magnetic impurities, since the effective model has s-wave superconductivity. We derive the effective Hamiltonian in both limits.

  12. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    … (1) Giardia lamblia: zero; (2) Viruses: zero; (3) Legionella: zero; (4) Total coliforms (including fecal coliforms and Escherichia coli): zero; (5) Cryptosporidium: zero; (6) Escherichia coli (E. coli): zero. (b) The MCLG…

  13. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    … (1) Giardia lamblia: zero; (2) Viruses: zero; (3) Legionella: zero; (4) Total coliforms (including fecal coliforms and Escherichia coli): zero; (5) Cryptosporidium: zero; (6) Escherichia coli (E. coli): zero. (b) The MCLG…

  14. How to Explain the Non-Zero Mass of Electromagnetic Radiation Consisting of Zero-Mass Photons

    ERIC Educational Resources Information Center

    Gabovich, Alexander M.; Gabovich, Nadezhda A.

    2007-01-01

    The mass of electromagnetic radiation in a cavity is considered using the correct relativistic approach based on the concept of a scalar mass not dependent on the particle (system) velocity. It is shown that due to the non-additivity of mass in the special theory of relativity the ensemble of chaotically propagating mass-less photons in the cavity…
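The non-additivity the abstract points to follows from the standard two-photon invariant-mass computation below; this is a textbook derivation consistent with the summary above, not text from the article.

```latex
% Invariant mass M of two photons with energies E_1, E_2 and opening angle
% \theta: each photon is massless (E_i = |\vec p_i| c), yet the pair has
% non-zero mass unless the photons are collinear.
\begin{aligned}
M^2 c^4 &= (E_1 + E_2)^2 - c^2\,\lvert \vec p_1 + \vec p_2 \rvert^2 \\
        &= 2 E_1 E_2 - 2\,c^2\,\vec p_1 \cdot \vec p_2
         = 2 E_1 E_2 \,(1 - \cos\theta) \;\ge\; 0 .
\end{aligned}
```

For chaotically propagating photons in a cavity the total momentum averages to zero, so the enclosed radiation of energy E contributes a rest mass M = E/c² even though every constituent photon is massless.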

  15. A Family of Ellipse Methods for Solving Non-Linear Equations

    ERIC Educational Resources Information Center

    Gupta, K. C.; Kanwar, V.; Kumar, Sanjeev

    2009-01-01

    This note presents a method for the numerical approximation of simple zeros of a non-linear equation in one variable. In order to do so, the method uses an ellipse rather than a tangent approach. The main advantage of our method is that it does not fail even if the derivative of the function is either zero or very small in the vicinity of the…

  16. Spacecraft Data Simulator for the test of level zero processing systems

    NASA Technical Reports Server (NTRS)

    Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem

    1994-01-01

    The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.

  17. Classifying next-generation sequencing data using a zero-inflated Poisson model.

    PubMed

    Zhou, Yan; Wan, Xiang; Zhang, Baoxue; Tong, Tiejun

    2018-04-15

    With the development of high-throughput techniques, RNA-sequencing (RNA-seq) is becoming increasingly popular as an alternative for gene expression analysis, such as RNA profiling and classification. Identifying which type of disease a new patient has from RNA-seq data has been recognized as a vital problem in medical research. As RNA-seq data are discrete, statistical methods developed for classifying microarray data cannot be readily applied to RNA-seq data classification. Witten proposed a Poisson linear discriminant analysis (PLDA) to classify RNA-seq data in 2011. Note, however, that count datasets are frequently characterized by excess zeros in real RNA-seq or microRNA sequence data (e.g. when the sequencing depth is insufficient, or for small RNAs 18-30 nucleotides in length). Therefore, it is desirable to develop a new model to analyze RNA-seq data with an excess of zeros. In this paper, we propose a Zero-Inflated Poisson Logistic Discriminant Analysis (ZIPLDA) for RNA-seq data with an excess of zeros. The new method assumes that the data are from a mixture of two distributions: one is a point mass at zero, and the other follows a Poisson distribution. We then consider a logistic relation between the probability of observing zeros and the mean of the genes and the sequencing depth in the model. Simulation studies show that the proposed method performs better than, or at least as well as, the existing methods in a wide range of settings. Two real datasets, a breast cancer RNA-seq dataset and a microRNA-seq dataset, are also analyzed, and the results agree with the simulations in showing that our proposed method outperforms the existing competitors. The software is available at http://www.math.hkbu.edu.hk/~tongt. xwan@comp.hkbu.edu.hk or tongt@hkbu.edu.hk. Supplementary data are available at Bioinformatics online.
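The two-part mixture described above is easy to write down. The classifier sketch below scores a sample by summed zero-inflated Poisson log-likelihoods with gene-wise parameters assumed given; ZIPLDA additionally models the zero probability through a logistic link to the gene mean and sequencing depth, which is omitted here, and all parameter values are invented.

```python
import math

def zip_log_pmf(x, lam, pi0):
    """Log-pmf of a zero-inflated Poisson: a point mass at zero with weight
    pi0 mixed with a Poisson(lam) component."""
    if x == 0:
        return math.log(pi0 + (1.0 - pi0) * math.exp(-lam))
    return (math.log(1.0 - pi0) - lam + x * math.log(lam)
            - math.lgamma(x + 1))

def classify(counts, class_params):
    """Assign a sample (per-gene counts) to the class with the highest total
    ZIP log-likelihood; the gene-wise (lam, pi0) pairs per class are assumed
    given here, whereas ZIPLDA estimates them from training data."""
    scores = {k: sum(zip_log_pmf(x, lam, pi0)
                     for x, (lam, pi0) in zip(counts, params))
              for k, params in class_params.items()}
    return max(scores, key=scores.get)

# Invented two-gene example: gene 1 high and gene 2 silent fits "tumor" best.
params = {"tumor":  [(8.0, 0.1), (1.0, 0.6)],
          "normal": [(2.0, 0.1), (5.0, 0.2)]}
label = classify([9, 0], params)   # -> "tumor"
```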

  18. The Impact of Competing Time Delays in Stochastic Coordination Problems

    NASA Astrophysics Data System (ADS)

    Korniss, G.; Hunt, D.; Szymanski, B. K.

    2011-03-01

    Coordinating, distributing, and balancing resources in coupled systems is a complex task as these operations are very sensitive to time delays. Delays are present in most real communication and information systems, including info-social and neuro-biological networks, and can be attributed to both non-zero transmission times between different units of the system and to non-zero times it takes to process the information and execute the desired action at the individual units. Here, we investigate the importance and impact of these two types of delays in a simple coordination (synchronization) problem in a noisy environment. We establish the scaling theory for the phase boundary of synchronization and for the steady-state fluctuations in the synchronizable regime. Further, we provide the asymptotic behavior near the boundary of the synchronizable regime. Our results also imply the potential for optimization and trade-offs in stochastic synchronization and coordination problems with time delays. Supported in part by DTRA, ARL, and ONR.
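A one-unit caricature of the delayed relaxation described above: for dx/dt = -x(t - tau) + noise, the noise-free system is stable iff tau < pi/2 (in units where the relaxation rate is 1), so lumping transmission and processing into a single delay already reproduces the delay-induced loss of synchronizability. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(tau, t_max=100.0, dt=0.01, noise=0.1):
    """Euler-Maruyama integration of dx/dt = -x(t - tau) + noise: a single
    unit relaxing toward the consensus value zero with total
    (transmission + processing) delay tau."""
    lag = int(round(tau / dt))
    n = int(round(t_max / dt))
    x = np.zeros(n + lag)
    for i in range(lag, n + lag - 1):
        x[i + 1] = (x[i] - dt * x[i - lag]
                    + noise * np.sqrt(dt) * rng.standard_normal())
    return x

quiet = simulate(tau=0.5)  # below the pi/2 threshold: bounded fluctuations
loud = simulate(tau=2.0)   # above it: oscillatory, growing fluctuations
spread_quiet = float(np.std(quiet[-2000:]))
spread_loud = float(np.std(loud[-2000:]))
```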

  19. Effective group index of refraction in non-thermal plasma photonic crystals

    NASA Astrophysics Data System (ADS)

    Mousavi, A.; Sadegzadeh, S.

    2015-11-01

    Plasma photonic crystals (PPCs) are periodic arrays that consist of alternate layers of micro-plasma and dielectric. These structures are used to control the propagation of electromagnetic waves. This paper presents a survey of research on the effect of non-thermal plasma with a bi-Maxwellian distribution function on a one-dimensional PPC. A plasma with temperature anisotropy is not in thermodynamic equilibrium and can be described by the bi-Maxwellian distribution function. By using the Kronig-Penney model, the dispersion relation of electromagnetic modes in a one-dimensional non-thermal PPC (NPPC) is derived. The band structure, group velocity vg, and effective group index of refraction neff(g) of such an NPPC structure with TeO2 as the material of the dielectric layers have been studied. The concepts of negative group velocity and negative neff(g), which indicate anomalous behaviour of the PPCs, are also observed in the NPPC structures. Our numerical results provide confirmatory evidence that, unlike in conventional PPCs, there is a finite group velocity and a non-zero effective group index of refraction in photonic band gaps (PBGs) that lie in certain ranges of normalized frequency. In other words, inside the PBGs of NPPCs, neff(g) becomes non-zero and photons travel with a finite group velocity. In this special case, this velocity varies alternately between 20c and negative values of the order of 10^3 c (c is the speed of light in vacuum).

  20. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants.

    PubMed

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-04-01

    Genetic modification of plants may result in unintended effects with potentially adverse consequences for the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these tests, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model supports completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
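    As a toy version of such a count generator, the sketch below simulates zero-inflated negative binomial data, one of the excess-zero distributions a framework of this kind would include. It is a hypothetical sketch of the general mechanism, not the Supplementary Material model; all parameter values are illustrative.

```python
import numpy as np

def sim_zinb(n, mu, size, pi0, rng):
    """Draw n counts from a zero-inflated negative binomial: with
    probability pi0 a structural zero, otherwise NB with mean mu and
    dispersion `size` (numpy parameterization: n=size, p=size/(size+mu))."""
    counts = rng.negative_binomial(size, size / (size + mu), n)
    counts[rng.random(n) < pi0] = 0   # overwrite with structural zeros
    return counts

rng = np.random.default_rng(42)
y = sim_zinb(20_000, mu=4.0, size=1.5, pi0=0.3, rng=rng)
```

    The same two-part construction (a sampling distribution plus a structural-zero layer) extends naturally to blocked designs and repeated measures by making mu depend on block, variety and time effects.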

  1. A statistical simulation model for field testing of non-target organisms in environmental risk assessment of genetically modified plants

    PubMed Central

    Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore

    2014-01-01

    Genetic modification of plants may result in unintended effects with potentially adverse consequences for the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these tests, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model supports completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows a set of reference varieties to be added to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided. PMID:24834325

  2. Designing and Constructing an Exemplar Zero Carbon Primary School in the City of Exeter, United Kingdom

    ERIC Educational Resources Information Center

    Tatchell, Arthur

    2012-01-01

    The United Kingdom's (UK) 2008 Budget announced the government's ambition that all new non-domestic buildings should be zero carbon from 2016. In order to take this goal forward, the Department for Children, Schools and Families (DCSF) established the Zero Carbon Task Force (ZCTF); its objective was to advise on how England can achieve this…

  3. Thermodynamic models for bounding pressurant mass requirements of cryogenic tanks

    NASA Technical Reports Server (NTRS)

    Vandresar, Neil T.; Haberbusch, Mark S.

    1994-01-01

    Thermodynamic models have been formulated to predict lower and upper bounds for the mass of pressurant gas required to pressurize a cryogenic tank and then expel liquid from the tank. Limiting conditions are based on either thermal equilibrium or zero energy exchange between the pressurant gas and the initial tank contents. The models are independent of gravity level and allow specification of autogenous or non-condensable pressurants. Partial liquid fill levels may be specified for initial and final conditions. Model predictions are shown to successfully bound results from limited normal-gravity tests with condensable and non-condensable pressurant gases. Representative maximum collapse factor maps are presented for liquid hydrogen to show the effects of initial and final fill level on the range of pressurant gas requirements. Maximum collapse factors occur for partial expulsions with large final liquid fill fractions.

  4. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link, identified from observations, between local weather and a set of large-scale predictors. As the physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for unusual and infrequent weather configurations.
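    The regression stage can be caricatured with assumed coefficients: a logistic model for occurrence and a log-link model for the non-zero amounts (a lognormal stands in here for whichever amount distribution the paper's GLM uses). All coefficients and predictor values below are hypothetical, purely to show the two-stage structure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_stage_predict(predictors, beta_occ, beta_amt, rng):
    """Stage 1: logistic probability of a wet day; stage 2: log-link mean
    for the non-zero amount, sampled only on wet days. Coefficients are
    illustrative placeholders, not fitted values."""
    p_wet = sigmoid(predictors @ beta_occ)
    wet = rng.random(len(p_wet)) < p_wet
    amounts = np.where(wet, rng.lognormal(predictors @ beta_amt, 0.5), 0.0)
    return p_wet, amounts

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])  # intercept + one predictor
p_wet, amounts = two_stage_predict(X, np.array([0.2, 1.0]), np.array([0.5, 0.3]), rng)
```

    In the analog/regression scheme, both coefficient vectors would be re-estimated each prediction day from that day's atmospheric analogs, which is what makes the downscaling adaptive.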

  5. 40 CFR 60.2780 - What must I include in the deviation report?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Compliance Times for Commercial and Industrial Solid Waste Incineration Units Model Rule-Recordkeeping and... downtime associated with zero, span, and other routine calibration checks). (f) Whether each deviation...

  6. Electroweak baryogenesis, electric dipole moments, and Higgs diphoton decays

    DOE PAGES

    Chao, Wei; Ramsey-Musolf, Michael J.

    2014-10-30

    Here, we study the viability of electroweak baryogenesis in a two Higgs doublet model scenario augmented by vector-like, electroweakly interacting fermions. Considering a limited, but illustrative region of the model parameter space, we obtain the observed cosmic baryon asymmetry while satisfying present constraints from the non-observation of the permanent electric dipole moment (EDM) of the electron and the combined ATLAS and CMS result for the Higgs boson diphoton decay rate. The observation of a non-zero electron EDM in a next generation experiment and/or the observation of an excess (over the Standard Model) of Higgs to diphoton events with the 14 TeV LHC run or a future e+e− collider would be consistent with generation of the observed baryon asymmetry in this scenario.

  7. The Green’s functions for peridynamic non-local diffusion

    PubMed Central

    Wang, L. J.; Xu, J. F.

    2016-01-01

    In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658

  8. Solid-propellant rocket motor internal ballistics performance variation analysis, phase 5

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Murph, J. E.

    1980-01-01

    The results of research aimed at improving the predictability of internal ballistics performance of solid-propellant rocket motors (SRM's) including thrust imbalance between two SRM's firing in parallel are presented. Static test data from the first six Space Shuttle SRM's is analyzed using a computer program previously developed for this purpose. The program permits intentional minor design biases affecting the imbalance between any two SRM's to be removed. Results for the last four of the six SRM's, with only the propellant bulk temperature as a non-random variable, are generally within limits predicted by theory. Extended studies of internal ballistic performance of single SRM's are presented based on an earlier developed mathematical model which includes an assessment of grain deformation. The erosive burning rate law used in the model is upgraded and made more general. Excellent results are obtained in predictions of the performances of five different SRM's of quite different sizes and configurations. These SRM's all employ PBAN type propellants with ammonium perchlorate oxidizer and 16 to 20% aluminum except one which uses carboxyl terminated butadiene binder. The only non-calculated parameters in the burning rate equations that are changed for the different SRM's are the zero crossflow velocity burning rate coefficients and exponents. The results, in general, confirm the importance of grain deformation. The improved internal ballistic model makes practical the development of an effective computer program for application of an optimization technique to SRM design, which is also demonstrated. The program uses a pattern search technique to minimize the difference between a desired thrust-time trace and one calculated based on the internal ballistic model.

  9. Better Than Nothing: A Rational Approach for Minimizing the Impact of Outflow Strategy on Cerebrovascular Simulations.

    PubMed

    Chnafa, C; Brina, O; Pereira, V M; Steinman, D A

    2018-02-01

    Computational fluid dynamics simulations of neurovascular diseases are impacted by various modeling assumptions and uncertainties, including outlet boundary conditions. Many studies of intracranial aneurysms, for example, assume zero pressure at all outlets, often the default ("do-nothing") strategy, with no physiological basis. Others divide outflow according to the outlet diameters cubed, nominally based on the more physiological Murray's law but still susceptible to subjective choices about the segmented model extent. Here we demonstrate the limitations and impact of these outflow strategies, against a novel "splitting" method introduced here. With our method, the segmented lumen is split into its constituent bifurcations, where flow divisions are estimated locally using a power law. Together these provide the global outflow rate boundary conditions. The impact of outflow strategy on flow rates was tested for 70 cases of MCA aneurysm with 0D simulations. The impact on hemodynamic indices used for rupture status assessment was tested for 10 cases with 3D simulations. Differences in flow rates among the various strategies were up to 70%, with a non-negligible impact on average and oscillatory wall shear stresses in some cases. Murray-law and splitting methods gave flow rates closest to physiological values reported in the literature; however, only the splitting method was insensitive to arbitrary truncation of the model extent. Cerebrovascular simulations can depend strongly on the outflow strategy. The default zero-pressure method should be avoided in favor of Murray-law or splitting methods, the latter being released as an open-source tool to encourage the standardization of outflow strategies. © 2018 by American Journal of Neuroradiology.
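    The local splitting idea (dividing a parent flow among daughter branches with a power law on diameter, where exponent 3 recovers Murray's law) reduces to a few lines. This is a sketch of the principle, not the released open-source tool; the flow value and diameters are hypothetical.

```python
import numpy as np

def split_flow(q_parent, diameters, n=3.0):
    """Divide the parent flow among daughter branches under a power law
    Q_i proportional to d_i**n (n = 3 corresponds to Murray's law)."""
    w = np.asarray(diameters, dtype=float) ** n
    return q_parent * w / w.sum()

# Hypothetical bifurcation: 5 mL/s parent flow into 3 mm and 2 mm daughters.
q = split_flow(5.0, [3.0, 2.0])
```

    Applying this locally at every bifurcation of the segmented lumen, rather than once across all outlets, is what makes the splitting strategy insensitive to where the model is truncated.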

  10. Effect of rotation zero-crossing on single-fluid plasma response to three-dimensional magnetic perturbations

    NASA Astrophysics Data System (ADS)

    Lyons, B. C.; Ferraro, N. M.; Paz-Soldan, C.; Nazikian, R.; Wingen, A.

    2017-04-01

    In order to understand the effect of rotation on the response of a plasma to three-dimensional magnetic perturbations, we perform a systematic scan of the zero-crossing of the rotation profile in a DIII-D ITER-similar shape equilibrium using linear, time-independent modeling with the M3D-C1 extended magnetohydrodynamics code. We confirm that the local resonant magnetic field generally increases as the rotation decreases at a rational surface. Multiple peaks in the resonant field are observed near rational surfaces, however, and the maximum resonant field does not always correspond to zero rotation at the surface. Furthermore, we show that non-resonant current can be driven at zero-crossings not aligned with rational surfaces if there is sufficient shear in the rotation profile there, leading to amplification of near-resonant Fourier harmonics of the perturbed magnetic field and a decrease in the far-off-resonant harmonics. The quasilinear electromagnetic torque induced by this non-resonant plasma response provides drive to flatten the rotation, possibly allowing for increased transport in the pedestal by the destabilization of turbulent modes. In addition, this torque acts to drive the rotation zero-crossing to dynamically stable points near rational surfaces, which would allow for increased resonant penetration. By one or both of these mechanisms, this torque may play an important role in bifurcations into suppression of edge-localized modes. Finally, we discuss how these changes to the plasma response could be detected by tokamak diagnostics. In particular, we show that the changes to the resonant field discussed here have a significant impact on the external perturbed magnetic field, which should be observable by magnetic sensors on the high-field side of tokamaks but not on the low-field side. In addition, TRIP3D-MAFOT simulations show that none of the changes to the plasma response described here substantially affects the divertor footprint structure.

  11. Effect of rotation zero-crossing on single-fluid plasma response to three-dimensional magnetic perturbations

    DOE PAGES

    Lyons, Brendan C.; Ferraro, Nathaniel M.; Paz-Soldan, Carlos A.; ...

    2017-02-14

    In order to understand the effect of rotation on the plasma's response to three-dimensional magnetic perturbations, we perform a systematic scan of the zero-crossing of the rotation profile in a DIII-D ITER-similar shape equilibrium using linear, time-independent modeling with the M3D-C1 extended magnetohydrodynamics code. We confirm that the local resonant magnetic field generally increases as the rotation decreases at a rational surface. Multiple peaks in the resonant field are observed near rational surfaces, however, and the maximum resonant field does not always correspond to zero rotation at the surface. Furthermore, we show that non-resonant current can be driven at zero-crossings not aligned with rational surfaces if there is sufficient shear in the rotation profile there, leading to an amplification of near-resonant Fourier harmonics of the perturbed magnetic field and a decrease in the far-off-resonant harmonics. The quasilinear electromagnetic torque induced by this non-resonant plasma response provides drive to flatten the rotation, possibly allowing for increased transport in the pedestal by the destabilization of turbulent modes. In addition, this torque acts to drive the rotation zero-crossing to dynamically stable points near rational surfaces, which would allow for increased resonant penetration. By one or both of these mechanisms, this torque may play an important role in bifurcations into ELM suppression. Finally, we discuss how these changes to the plasma response could be detected by tokamak diagnostics. In particular, we show that the changes to the resonant field discussed here have a significant impact on the external perturbed magnetic field, which should be observable by magnetic sensors on the high-field side of tokamaks, but not on the low-field side. In addition, TRIP3D-MAFOT simulations show that none of the changes to the plasma response described here substantially affects the divertor footprint structure.

  12. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and the zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than that of these models.
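    The distribution under study can be written down directly. Below is a minimal sketch of the ZICMP pmf using a truncated normalizing sum (the CMP normalizing constant has no closed form); the truncation point and parameter values are illustrative, and this is not the authors' regression implementation.

```python
import numpy as np

def cmp_pmf(lam, nu, x_max=100):
    """Conway-Maxwell-Poisson pmf, normalized over 0..x_max."""
    xs = np.arange(x_max + 1)
    log_fact = np.cumsum(np.log(np.maximum(xs, 1)))   # log(x!) for x = 0..x_max
    log_w = xs * np.log(lam) - nu * log_fact
    w = np.exp(log_w - log_w.max())                   # stabilize before normalizing
    return w / w.sum()

def zicmp_pmf(lam, nu, pi0, x_max=100):
    """Zero-inflated CMP: mix a point mass at zero (weight pi0) with a CMP."""
    p = (1.0 - pi0) * cmp_pmf(lam, nu, x_max)
    p[0] += pi0
    return p
```

    At nu = 1 the CMP reduces to a Poisson, so the ZICMP reduces to the ZIP it is compared against; nu > 1 gives underdispersion and nu < 1 overdispersion, which is the flexibility the ZICMP adds.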

  13. Non-linear transfer characteristics of stimulation and recording hardware account for spurious low-frequency artifacts during amplitude modulated transcranial alternating current stimulation (AM-tACS).

    PubMed

    Kasten, Florian H; Negahbani, Ehsan; Fröhlich, Flavio; Herrmann, Christoph S

    2018-05-31

    Amplitude modulated transcranial alternating current stimulation (AM-tACS) has been recently proposed as a possible solution to overcome the pronounced stimulation artifact encountered when recording brain activity during tACS. In theory, AM-tACS does not entail power at its modulating frequency, thus avoiding the problem of spectral overlap between the brain signal of interest and the stimulation artifact. However, the current study demonstrates how weak non-linear transfer characteristics inherent to stimulation and recording hardware can reintroduce spurious artifacts at the modulation frequency. The input-output transfer functions (TFs) of different stimulation setups were measured. Setups included recordings of signal-generator and stimulator outputs and M/EEG phantom measurements. 6th-degree polynomial regression models were fitted to model the input-output TFs of each setup. The resulting TF models were applied to digitally generated AM-tACS signals to predict the frequency of spurious artifacts in the spectrum. All four setups measured for the study exhibited low-frequency artifacts at the modulation frequency and its harmonics when recording AM-tACS. Fitted TF models showed non-linear contributions significantly different from zero (all p < .05) and successfully predicted the frequency of artifacts observed in AM-signal recordings. Results suggest that even weak non-linearities of stimulation and recording hardware can lead to spurious artifacts at the modulation frequency and its harmonics. These artifacts were substantially larger than the alpha-oscillations of a human subject in the MEG. Findings emphasize the need for more linear stimulation devices for AM-tACS and careful analysis procedures, taking into account low-frequency artifacts to avoid confusion with effects of AM-tACS on the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
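    The mechanism is easy to reproduce numerically: an ideal AM signal has no spectral component at its modulation frequency, but passing it through even a weak quadratic nonlinearity (a stand-in for the measured hardware transfer characteristics; the frequencies and distortion coefficient here are illustrative, not those of the study's setups) creates one.

```python
import numpy as np

fs, dur = 1000.0, 2.0
t = np.arange(int(fs * dur)) / fs
f_carrier, f_mod = 100.0, 5.0       # illustrative, not the study's parameters
am = (1 + 0.5 * np.cos(2 * np.pi * f_mod * t)) * np.sin(2 * np.pi * f_carrier * t)

def amp_at(sig, f_hz):
    """Single-sided FFT amplitude of sig at f_hz (bin width is 1/dur Hz)."""
    return np.abs(np.fft.rfft(sig))[int(round(f_hz * dur))] / len(sig)

distorted = am + 0.05 * am**2       # weak quadratic transfer nonlinearity

clean_leak = amp_at(am, f_mod)      # essentially zero for the ideal AM signal
artifact = amp_at(distorted, f_mod) # spurious component at the modulation frequency
```

    Squaring the envelope term demodulates it: the quadratic product contains baseband components at f_mod and 2 f_mod, which is exactly the kind of spurious low-frequency artifact the paper reports.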

  14. Evolving concepts on adjusting human resting energy expenditure measurements for body size.

    PubMed

    Heymsfield, S B; Thomas, D; Bosy-Westphal, A; Shen, W; Peterson, C M; Müller, M J

    2012-11-01

    Establishing if an adult's resting energy expenditure (REE) is high or low for their body size is a pervasive question in nutrition research. Early workers applied body mass and height as size measures and formulated the Surface Law and Kleiber's Law, although each has limitations when adjusting REE. Body composition methods introduced during the mid-20th century provided a new opportunity to identify metabolically homogeneous 'active' compartments. These compartments all show improved correlations with REE estimates over body mass-height approaches, but collectively share a common limitation: REE-body composition ratios are not 'constant' but vary across men and women and with race, age and body size. The now-accepted alternative to ratio-based norms is to adjust for predictors by applying regression models to calculate 'residuals' that establish if an REE is relatively high or low. The distinguishing feature of statistical REE-body composition models is a 'non-zero' intercept of unknown origin. The recent introduction of imaging methods has allowed development of physiological tissue-organ-based REE prediction models. Herein, we apply these imaging methods to provide a mechanistic explanation, supported by experimental data, for the non-zero intercept phenomenon and, in that context, propose future research directions for establishing between-subject differences in relative energy metabolism. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.

  15. A new method to include the gravitational forces in a finite element model of the scoliotic spine.

    PubMed

    Clin, Julien; Aubin, Carl-Éric; Lalonde, Nadine; Parent, Stefan; Labelle, Hubert

    2011-08-01

    The distribution of stresses in the scoliotic spine is still not well known despite its biomechanical importance in the pathomechanisms and treatment of scoliosis. Gravitational forces are one of the sources of these stresses. Existing finite element models (FEMs), when considering gravity, applied these forces on a geometry acquired from radiographs while the patient was already subjected to gravity, which resulted in a deformed spine different from the actual one. A new method to include gravitational forces on a scoliotic trunk FEM and compute the stresses in the spine was consequently developed. The 3D geometry of three scoliotic patients was acquired using a multi-view X-ray 3D reconstruction technique and surface topography. The FEM of the patients' trunk was created using this geometry. A simulation process was developed to apply the gravitational forces at the centers of gravity of each vertebra level. First the "zero-gravity" geometry was determined by applying adequate upwards forces on the initial geometry. The stresses were reset to zero and then the gravity forces were applied to compute the geometry of the spine subjected to gravity. An optimization process was necessary to find the appropriate zero-gravity and gravity geometries. The design variables were the forces applied on the model to find the zero-gravity geometry. After optimization the difference between the vertebral positions acquired from radiographs and the vertebral positions simulated with the model was inferior to 3 mm. The forces and compressive stresses in the scoliotic spine were then computed. There was an asymmetrical load in the coronal plane, particularly, at the apices of the scoliotic curves. Difference of mean compressive stresses between concavity and convexity of the scoliotic curves ranged between 0.1 and 0.2 MPa. In conclusion, a realistic way of integrating gravity in a scoliotic trunk FEM was developed and stresses due to gravity were explicitly computed. 
This is a valuable improvement for further biomechanical modeling studies of scoliosis.

  16. Occultation Lightcurves for Selected Pluto Volatile Transport Models

    NASA Astrophysics Data System (ADS)

    Young, L. A.

    2004-11-01

    The stellar occultations by Pluto in 1988 and 2002 are demonstrably sensitive to changes in Pluto's atmosphere near one microbar (Elliot and Young 1992, AJ 103, 991; Elliot et al. 2003, Nature 424, 165; Sicardy et al. 2003, Nature 424, 168). However, Pluto volatile-transport models focus on the changes in the atmospheric pressure at the surface (e.g., Hansen and Paige 1996, Icarus 120, 247; Stansberry and Yelle 1999, Icarus 141, 299). What's lacking is a connection between predictions about the surface properties and either temperature and pressure profiles measurable from stellar occultations, or the occultation light curve morphology itself. Radiative-conductive models can illuminate this connection. I will illustrate how Pluto's changing surface pressure, temperature, and heliocentric distance may affect occultation light curves for a selection of existing volatile transport models. Changes in the light curve include the presence or absence of an observable "kink" (or departure from an isothermal light curve), the appearance of non-zero minimum flux levels, and the detectability of the solid surface. These light curves can serve as examples of what we may anticipate during the upcoming Pluto occultation season, as Pluto crosses the galactic plane.

  17. What can we learn from the dynamics of entanglement and quantum discord in the Tavis-Cummings model?

    NASA Astrophysics Data System (ADS)

    Restrepo, Juliana; Rodriguez, Boris A.

    We revisit the problem of the dynamics of quantum correlations in the exact Tavis-Cummings model. We show that many of the dynamical features of quantum discord attributed to dissipation are already present in the exact framework and are due to the well known non-linearities in the model and to the choice of initial conditions. Through a comprehensive analysis, supported by explicit analytical calculations, we find that the dynamics of entanglement and quantum discord are far from being trivial or intuitive. In this context, we find states that are indistinguishable from the point of view of entanglement and distinguishable from the point of view of quantum discord, states where the two quantifiers give opposite information, and states where they give roughly the same information about correlations at a certain time. Depending on the initial conditions, this model exhibits a fascinating range of phenomena that can be used for experimental purposes, such as robust states against change of manifold or dissipation, tunable entanglement states, and states with a counterintuitive sudden birth as the number of photons increases. We furthermore propose an experiment called quantum discord gates where discord is zero or non-zero depending on the number of photons. This work was supported by the Vicerrectoria de Investigacion of the Universidad Antonio Narino, Colombia under Project Number 20141031 and by the Departamento Administrativo de Ciencia, Tecnologia e Innovacion (COLCIENCIAS) of Colombia under Grant Number.

  18. Hydrostatic Chandra X-ray analysis of SPT-selected galaxy clusters - I. Evolution of profiles and core properties

    NASA Astrophysics Data System (ADS)

    Sanders, J. S.; Fabian, A. C.; Russell, H. R.; Walker, S. A.

    2018-02-01

    We analyse Chandra X-ray Observatory observations of a set of galaxy clusters selected by the South Pole Telescope using a new publicly available forward-modelling projection code, MBPROJ2, assuming hydrostatic equilibrium. By fitting a power law plus constant entropy model we find no evidence for a central entropy floor in the lowest entropy systems. A model of the underlying central entropy distribution shows a narrow peak close to zero entropy which accounts for 60 per cent of the systems, and a second broader peak around 130 keV cm2. We look for evolution over the 0.28-1.2 redshift range of the sample in density, pressure, entropy and cooling time at 0.015R500 and at 10 kpc radius. By modelling the evolution of the central quantities with a simple model, we find no evidence for a non-zero slope with redshift. In addition, a non-parametric sliding median shows no significant change. The fraction of cool-core clusters with central cooling times below 2 Gyr is consistent above and below z = 0.6 (˜30-40 per cent). Both by comparing the median thermodynamic profiles, centrally biased towards cool cores, in two redshift bins, and by modelling the evolution of the unbiased average profile as a function of redshift, we find no significant evolution beyond self-similar scaling in any of our examined quantities. Our average modelled radial density, entropy and cooling-time profiles appear as power laws with breaks around 0.2R500. The dispersion in these quantities rises inwards of this radius to around 0.4 dex, although some of this scatter can be fitted by a bimodal model.

  19. A hidden analytic structure of the Rabi model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moroz, Alexander, E-mail: wavescattering@yahoo.com

    2014-01-15

    The Rabi model describes the simplest interaction between a cavity mode with a frequency ω_c and a two-level system with a resonance frequency ω_0. It is shown here that the spectrum of the Rabi model coincides with the support of the discrete Stieltjes integral measure in the orthogonality relations of recently introduced orthogonal polynomials. The exactly solvable limit of the Rabi model corresponding to Δ = ω_0/(2ω_c) = 0, which describes a displaced harmonic oscillator, is characterized by the discrete Charlier polynomials in normalized energy ϵ, which are orthogonal on an equidistant lattice. A non-zero value of Δ leads to non-classical discrete orthogonal polynomials ϕ_k(ϵ) and induces a deformation of the underlying equidistant lattice. The results provide a basis for a novel analytic method of solving the Rabi model. The number of ca. 1350 calculable energy levels per parity subspace obtained in double precision (ca. 16 digits) by an elementary stepping algorithm is up to two orders of magnitude higher than is possible to obtain by Braak's solution. Any first n eigenvalues of the Rabi model arranged in increasing order can be determined as zeros of ϕ_N(ϵ) of at least the degree N = n + n_t. The value of n_t > 0, which is slowly increasing with n, depends on the required precision. For instance, n_t ≃ 26 for n = 1000 and dimensionless interaction constant κ = 0.2, if double precision is required. Given that the sequence of the lth zeros x_nl of the ϕ_n(ϵ) defines a monotonically decreasing discrete flow with increasing n, the Rabi model is indistinguishable from an algebraically solvable model in any finite precision. Although we can rigorously prove our results only for dimensionless interaction constant κ < 1, numerics and an exactly solvable example suggest that the main conclusions remain valid also for κ ≥ 1. -- Highlights: •A significantly simplified analytic solution of the Rabi model. •The spectrum is the lattice of discrete orthogonal polynomials. •Up to 1350 levels in double precision can be obtained for a given parity. •Omission of any level can be easily detected.

  20. Inflationary predictions of double-well, Coleman-Weinberg, and hilltop potentials with non-minimal coupling

    NASA Astrophysics Data System (ADS)

    Bostan, Nilay; Güleryüz, Ömer; Nefer Şenoğuz, Vedat

    2018-05-01

    We discuss how the non-minimal coupling ξφ²R between the inflaton and the Ricci scalar affects the predictions of single-field inflation models where the inflaton has a non-zero vacuum expectation value (VEV) v after inflation. We show that, for inflaton values both above the VEV and below the VEV during inflation, under certain conditions the inflationary predictions become approximately the same as the predictions of the Starobinsky model. We then analyze inflation with double-well and Coleman-Weinberg potentials in detail, displaying the regions in the v-ξ plane for which the spectral index ns and the tensor-to-scalar ratio r are compatible with the current observations. r is always larger than 0.002 in these regions. Finally, we consider the effect of ξ on small-field inflation (hilltop) potentials.

  1. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

    To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h⁻¹ Mpc and coherent dense structures with a scale ~100 h⁻¹ Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Ω = 1, b = 1.5, σ₈ = 1) at the 99% confidence level because this model has insufficient power on scales λ > 30 h⁻¹ Mpc. An unbiased open-universe CDM model (Ωh = 0.2) and a biased CDM model with non-zero cosmological constant (Ωh = 0.24, λ₀ = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, ≤ 10 h⁻¹ Mpc, the high- and low-density regions are multiply connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h⁻¹ Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast δρ/ρ = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2σ) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.

  2. Stable Spheromaks with Profile Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fowler, T K; Jayakumar, R

    A spheromak equilibrium with zero edge current is shown to be stable to both ideal MHD and tearing modes that normally produce Taylor relaxation in gun-injected spheromaks. This stable equilibrium differs from the stable Taylor state in that the current density j falls to zero at the wall. Estimates indicate that this current profile could be sustained by non-inductive current drive at acceptable power levels. Stability is determined using the NIMROD code for linear stability analysis. Non-linear NIMROD calculations with non-inductive current drive could point the way to improved fusion reactors.

  3. DETECTING EXOMOONS AROUND SELF-LUMINOUS GIANT EXOPLANETS THROUGH POLARIZATION.

    PubMed

    Sengupta, Sujan; Marley, Mark S

    2016-01-01

    Many of the directly imaged self-luminous gas giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with a cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time resolved linear polarization signal. The peak amplitude of such time dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity which are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during the transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1% and 0.3% in the infrared.

  4. Measurement of the target-normal single-spin asymmetry in quasielastic scattering from the reaction He 3 ↑ ( e , e ' )

    DOE PAGES

    Zhang, Y. -W.; Long, E.; Mihovilovič, M.; ...

    2015-10-22

    We report the first measurement of the target single-spin asymmetry, Ay, in quasi-elastic scattering from the inclusive reaction 3He↑(e,e') on a 3He gas target polarized normal to the lepton scattering plane. Assuming time-reversal invariance, this asymmetry is strictly zero for one-photon exchange. A non-zero Ay can arise from the interference between the one- and two-photon exchange processes, which is sensitive to the details of the sub-structure of the nucleon. An experiment recently completed at Jefferson Lab yielded asymmetries with high statistical precision at Q² = 0.13, 0.46 and 0.97 GeV². These measurements demonstrate, for the first time, that the 3He asymmetry is clearly non-zero and negative with a statistical significance of (8-10)σ. Using measured proton-to-3He cross-section ratios and the effective polarization approximation, neutron asymmetries of -(1-3)% were obtained. The neutron asymmetry at high Q² is related to moments of the Generalized Parton Distributions (GPDs). Our measured neutron asymmetry at Q² = 0.97 GeV² agrees well with a prediction based on two-photon exchange using a GPD model and in addition provides a new independent constraint on these distributions.

  5. DETECTING EXOMOONS AROUND SELF-LUMINOUS GIANT EXOPLANETS THROUGH POLARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sengupta, Sujan; Marley, Mark S., E-mail: sujan@iiap.res.in, E-mail: Mark.S.Marley@NASA.gov

    Many of the directly imaged self-luminous gas-giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk-averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk-averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with a cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time-resolved linear polarization signal. The peak amplitude of such time-dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time-resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity that are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during the transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1% and 0.3% in the infrared.

  6. Detecting Exomoons Around Self-Luminous Giant Exoplanets Through Polarization

    NASA Technical Reports Server (NTRS)

    Sengupta, Sujan; Marley, Mark Scott

    2016-01-01

    Many of the directly imaged self-luminous gas giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with a cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time resolved linear polarization signal. The peak amplitude of such time dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity which are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during the transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1% and 0.3% in the infrared.

  7. DETECTING EXOMOONS AROUND SELF-LUMINOUS GIANT EXOPLANETS THROUGH POLARIZATION

    PubMed Central

    Sengupta, Sujan; Marley, Mark S.

    2017-01-01

    Many of the directly imaged self-luminous gas giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with a cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time resolved linear polarization signal. The peak amplitude of such time dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity which are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during the transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1% and 0.3% in the infrared. PMID:29430024

  8. Sub-micrometer epsilon-near-zero electroabsorption modulators enabled by high-mobility cadmium oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Wood, Michael; Serkland, Darwin K.

    Here, epsilon-near-zero materials provide a new path for tailoring light-matter interactions at the nanoscale. In this paper, we analyze a compact electroabsorption modulator based on epsilon-near-zero confinement in transparent conducting oxide films. The non-resonant modulator operates through field-effect carrier density tuning. We compare the performance of modulators composed of two different conducting oxides, namely indium oxide (In2O3) and cadmium oxide (CdO), and show that better modulation performance is achieved when using high-mobility (i.e. low-loss) epsilon-near-zero materials such as CdO. In particular, we show that non-resonant electroabsorption modulators with sub-micron lengths and greater than 5 dB extinction ratios may be achieved through the proper selection of high-mobility transparent conducting oxides, opening a path for device miniaturization and increased modulation depth.

  9. Sub-micrometer epsilon-near-zero electroabsorption modulators enabled by high-mobility cadmium oxide

    DOE PAGES

    Campione, Salvatore; Wood, Michael; Serkland, Darwin K.; ...

    2017-07-06

    Here, epsilon-near-zero materials provide a new path for tailoring light-matter interactions at the nanoscale. In this paper, we analyze a compact electroabsorption modulator based on epsilon-near-zero confinement in transparent conducting oxide films. The non-resonant modulator operates through field-effect carrier density tuning. We compare the performance of modulators composed of two different conducting oxides, namely indium oxide (In2O3) and cadmium oxide (CdO), and show that better modulation performance is achieved when using high-mobility (i.e. low-loss) epsilon-near-zero materials such as CdO. In particular, we show that non-resonant electroabsorption modulators with sub-micron lengths and greater than 5 dB extinction ratios may be achieved through the proper selection of high-mobility transparent conducting oxides, opening a path for device miniaturization and increased modulation depth.

  10. Growth Curve Models for Zero-Inflated Count Data: An Application to Smoking Behavior

    ERIC Educational Resources Information Center

    Liu, Hui; Powers, Daniel A.

    2007-01-01

    This article applies growth curve models to longitudinal count data characterized by an excess of zero counts. We discuss a zero-inflated Poisson regression model for longitudinal data in which the impact of covariates on the initial counts and the rate of change in counts over time is the focus of inference. Basic growth curve models using a…
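
    The zero-inflated Poisson likelihood underlying such models mixes a structural-zero probability π with a Poisson(λ) count component: P(0) = π + (1-π)e^(-λ) and P(k) = (1-π)e^(-λ)λ^k/k! for k > 0. A minimal maximum-likelihood sketch on simulated data; the parametrization and optimizer choice are illustrative assumptions, not the article's estimation procedure (which adds growth-curve covariates):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln, expit

def zip_negloglik(params, y):
    """Negative log-likelihood of a zero-inflated Poisson:
    P(0) = pi + (1 - pi) exp(-lam);  P(k) = (1 - pi) exp(-lam) lam^k / k!  for k > 0."""
    pi, lam = expit(params[0]), np.exp(params[1])   # map R^2 -> (0,1) x (0,inf)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))
    ll_pos = np.log1p(-pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.where(y == 0, ll_zero, ll_pos).sum()

# Simulate counts with 30% structural zeros and Poisson mean 2
rng = np.random.default_rng(1)
n, pi_true, lam_true = 5000, 0.3, 2.0
structural = rng.random(n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, n))

res = minimize(zip_negloglik, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
pi_hat, lam_hat = expit(res.x[0]), np.exp(res.x[1])
print(f"pi = {pi_hat:.3f}, lambda = {lam_hat:.3f}")
```

    A growth-curve version would replace the scalar π and λ with logit- and log-linear functions of covariates and time.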

  11. Kondo-like zero-bias conductance anomaly in a three-dimensional topological insulator nanowire

    DOE PAGES

    Cho, Sungjae; Zhong, Ruidan; Schneeloch, John A.; ...

    2016-02-25

    Zero-bias anomalies in topological nanowires have recently captured significant attention, as they are possible signatures of Majorana modes. Yet there are many other possible origins of zero-bias peaks in nanowires, for example weak localization, Andreev bound states, or the Kondo effect. Here, we discuss observations of differential-conductance peaks at zero-bias voltage in non-superconducting electronic transport through a 3D topological insulator (Bi1.33Sb0.67)Se3 nanowire. The zero-bias conductance peaks show logarithmic temperature dependence and often linear splitting with magnetic fields, both of which are signatures of the Kondo effect in quantum dots. As a result, we characterize the zero-bias peaks and discuss their origin.

  12. Scaled lattice fermion fields, stability bounds, and regularity

    NASA Astrophysics Data System (ADS)

    O'Carroll, Michael; Faria da Veiga, Paulo A.

    2018-02-01

    We consider locally gauge-invariant lattice quantum field theory models with locally scaled Wilson-Fermi fields in d = 1, 2, 3, 4 spacetime dimensions. The use of scaled fermions preserves Osterwalder-Seiler positivity and the spectral content of the models (the decay rates of correlations are unchanged in the infinite lattice). In addition, it also results in less singular, more regular behavior in the continuum limit. Precisely, we treat general fermionic gauge and purely fermionic lattice models in an imaginary-time functional integral formulation. Starting with a hypercubic finite lattice Λ ⊂ (aZ)^d, a ∈ (0, 1], and considering the partition function of non-Abelian and Abelian gauge models (the free fermion case is included) neglecting the pure gauge interactions, we obtain stability bounds uniformly in the lattice spacing a ∈ (0, 1]. These bounds imply, at least in the subsequential sense, the existence of the thermodynamic (Λ ↗ (aZ)^d) and the continuum (a ↘ 0) limits. Specializing to the U(1) gauge group, the known non-intersecting loop expansion for the d = 2 partition function is extended to d = 3, and the thermodynamic limit of the free energy is shown to exist with a bound independent of a ∈ (0, 1]. In the case of scaled free Fermi fields (corresponding to a trivial gauge group with only the identity element), spectral representations are obtained for the partition function, free energy, and correlations. The thermodynamic and continuum limits of the free fermion free energy are shown to exist. The thermodynamic limits of n-point correlations also exist, with bounds independent of the point locations and of a ∈ (0, 1], and with no n! dependence. Also, a time-zero Hilbert-Fock space is constructed, as well as time-zero, spatially pointwise scaled fermion creation operators, which are shown to be norm bounded uniformly in a ∈ (0, 1]. The use of scaled fields from the beginning allows us to extract and isolate the singularities of the free energy as a ↘ 0.

  13. Statistical methods for meta-analyses including information from studies without any events-add nothing to nothing and succeed nevertheless.

    PubMed

    Kuss, O

    2015-03-30

    Meta-analyses with rare events, especially those that include studies with no event in one ('single-zero') or even both ('double-zero') treatment arms, are still a statistical challenge. In the case of double-zero studies, researchers in general delete these studies or use continuity corrections to avoid them. A number of arguments against both options have been given, and statistical methods that use the information from double-zero studies without using continuity corrections have been proposed. In this paper, we collect them and compare them by simulation. This simulation study tries to mirror real-life situations as completely as possible by deriving true underlying parameters from empirical data on actually performed meta-analyses. It is shown that for each of the commonly encountered effect estimators, valid statistical methods are available that use the information from double-zero studies without using continuity corrections. Interestingly, all of them are truly random effects models, and so even the current standard method for very sparse data recommended by the Cochrane collaboration, the Yusuf-Peto odds ratio, can be improved on. For actual analysis, we recommend using beta-binomial regression methods to arrive at summary estimates for the odds ratio, the relative risk, or the risk difference. Methods that ignore information from double-zero studies or use continuity corrections should no longer be used. We illustrate the situation with an example where the original analysis ignores 35 double-zero studies, and a superior analysis discovers a clinically relevant advantage of off-pump surgery in coronary artery bypass grafting. Copyright © 2014 John Wiley & Sons, Ltd.
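
    A beta-binomial likelihood of the kind recommended above can be evaluated directly with scipy. This sketch fits one treatment arm by maximum likelihood; the event counts, the mean/overdispersion parametrization, and the starting values are all illustrative assumptions, not the paper's data or model:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import betabinom
from scipy.special import expit

# Events and sample sizes per study for one arm (illustrative data,
# deliberately including several zero-event studies)
events = np.array([0, 1, 0, 2, 0, 0, 3, 1, 0, 0])
sizes = np.array([40, 55, 30, 80, 25, 50, 120, 60, 35, 45])

def negloglik(params):
    """Beta-binomial with mean mu and overdispersion rho, mapped to the
    shape parameters a = mu (1 - rho) / rho, b = (1 - mu)(1 - rho) / rho."""
    mu, rho = expit(params)                 # keep both in (0, 1)
    a = mu * (1 - rho) / rho
    b = (1 - mu) * (1 - rho) / rho
    return -betabinom.logpmf(events, sizes, a, b).sum()

res = minimize(negloglik, x0=[-3.0, -3.0], method="Nelder-Mead")
mu_hat, rho_hat = expit(res.x)
print(f"event probability = {mu_hat:.4f}, overdispersion = {rho_hat:.4f}")
```

    Zero-event studies enter the likelihood on the same footing as the others, so no continuity correction is needed; a full regression version would model both arms jointly with a treatment covariate.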

  14. Exploiting Data Similarity to Reduce Memory Footprints

    DTIC Science & Technology

    2011-01-01

    The report describes SBLLmalloc, which exploits data similarity across MPI tasks to reduce memory footprints. Benchmark fragments from the report mention leslie3d, a Fortran computational fluid dynamics (CFD) application; tachyon, a C parallel ray-tracing application; and GAPgeofem, a C and Fortran simulation code. One application benefits most from SBLLmalloc; LAMMPS shows moderate similarity, arising primarily from zero pages; and tachyon, an image-rendering application, shows similarity across MPI tasks consisting primarily of zero pages, although a small fraction (≈10%) are non-zero pages.

  15. Method and apparatus for enhancing microchannel plate data

    DOEpatents

    Thoe, R.S.

    1983-10-24

    A method and apparatus for determining centroid channel locations are disclosed for use in a system activated by one or more multichannel plates and including a linear diode array providing channels of information 1, 2, ..., n, ..., N containing signal amplitudes A_n. A source of analog A_n signals and a source of digital clock signals n are provided. Non-zero A_n values are detected in a discriminator. A digital signal representing p, the value of n immediately preceding that at which A_n takes its first non-zero value, is generated in a scaler. The analog A_n signals are converted to digital in an analog-to-digital converter. The digital A_n signals are added to produce a digital ΣA_n signal in a full adder. Digital 1, 2, ..., m signals representing the number of non-zero A_n are produced by a discriminator pulse counter. Digital signals representing 1·A_{p+1}, 2·A_{p+2}, ..., m·A_{p+m} are produced by pairwise multiplication in a multiplier. These signals are added in a multiplier summer to produce a digital ΣnA_n − pΣA_n signal. This signal is divided by the digital ΣA_n signal in a divider to provide a digital (ΣnA_n/ΣA_n) − p signal. Finally, this last signal is added to the digital p signal in an offset summer to provide ΣnA_n/ΣA_n, the centroid channel location.
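
    The chain of hardware operations above computes a standard intensity-weighted centroid, Σ n·A_n / Σ A_n, over the non-zero channels. The same arithmetic in software (the array values are illustrative):

```python
import numpy as np

def centroid_channel(A):
    """Intensity-weighted centroid of a diode-array pulse: sum(n*A_n)/sum(A_n),
    with channels numbered 1..N as in the patent description."""
    n = np.arange(1, len(A) + 1)
    mask = A > 0                       # discriminator: keep only non-zero amplitudes
    return (n[mask] * A[mask]).sum() / A[mask].sum()

A = np.array([0, 0, 2, 4, 2, 0])       # pulse centered on channel 4
print(centroid_channel(A))             # -> 4.0
```

    The hardware's offset trick (subtracting p before the division and adding it back after) simply keeps the intermediate digital values small; the result is identical.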

  16. Quantum hall ferromagnets

    NASA Astrophysics Data System (ADS)

    Kumar, Akshay

    We study several quantum phases that are related to the quantum Hall effect. Our initial focus is on a pair of quantum Hall ferromagnets where the quantum Hall ordering occurs simultaneously with a spontaneous breaking of an internal symmetry associated with a semiconductor valley index. In our first example, AlAs heterostructures, we study domain-wall structure, the role of random-field disorder, and dipole-moment physics. In the second example, Si(111), we show that symmetry breaking near several integer filling fractions involves a combination of selection by thermal fluctuations, known as "order by disorder", and selection by the energetics of Skyrme lattices induced by moving away from the commensurate fillings, a mechanism we term "order by doping". We also study the ground state of such systems near filling factor one in the absence of valley Zeeman energy. We show that even though the lowest-energy charged excitations are charge-one skyrmions, the lowest-energy skyrmion lattice has charge > 1 per unit cell. We then broaden our discussion to include lattice systems having multiple Chern-number bands. We find analogs of quantum Hall ferromagnets in the menagerie of fractional Chern insulator phases. Unlike in the AlAs system, here the domain walls come naturally with gapped electronic excitations. We close with a result involving only topology: we show that ABC-stacked multilayer graphene placed on a boron nitride substrate has flat bands with non-zero local Berry curvature but zero Chern number. This allows access to an interaction-dominated system with a non-trivial quantum distance metric but without the extra complication of a non-zero Chern number.
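
    Claims about local Berry curvature versus total Chern number can be checked numerically with the standard Fukui-Hatsugai lattice algorithm, which sums gauge-invariant Berry fluxes over plaquettes of the Brillouin zone. This sketch applies it to a generic two-band model with |C| = 1 for the lower band (the Hamiltonian is an illustrative example, not the multilayer-graphene model of the abstract):

```python
import numpy as np

def lower_band_state(kx, ky, m=1.0):
    """Lower-band eigenvector of h(k) = sin(kx) sx + sin(ky) sy + (m + cos kx + cos ky) sz."""
    M = m + np.cos(kx) + np.cos(ky)
    h = np.array([[M, np.sin(kx) - 1j * np.sin(ky)],
                  [np.sin(kx) + 1j * np.sin(ky), -M]])
    vals, vecs = np.linalg.eigh(h)
    return vecs[:, 0]                   # eigh sorts ascending: column 0 is the lower band

def chern_number(nk=40, m=1.0):
    """Fukui-Hatsugai: total Berry flux from U(1) link variables on a k-grid."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    u = np.array([[lower_band_state(kx, ky, m) for ky in ks] for kx in ks])
    C = 0.0
    for i in range(nk):
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            U1 = np.vdot(u[i, j], u[ip, j])
            U2 = np.vdot(u[ip, j], u[ip, jp])
            U3 = np.vdot(u[ip, jp], u[i, jp])
            U4 = np.vdot(u[i, jp], u[i, j])
            C += np.angle(U1 * U2 * U3 * U4)   # plaquette Berry flux
    return C / (2 * np.pi)

C = chern_number()
print(round(C))
```

    For a band like the one described in the abstract, the plaquette fluxes would be locally non-zero but sum to zero.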

  17. Charging in the ac Conductance of a Double Barrier Resonant Tunneling Structure

    NASA Technical Reports Server (NTRS)

    Anantram, M. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    There have been many studies of the linear-response ac conductance of a double barrier resonant tunneling structure (DBRTS), both at zero and finite dc biases. While these studies are important, they fail to self-consistently include the effect of the time-dependent charge density in the well. In this paper, we calculate the ac conductance at both zero and finite dc biases by including the effect of the time-dependent charge density in the well in a self-consistent manner. The charge density in the well contributes to both the flow of displacement currents in the contacts and the time-dependent potential in the well. We find that including these effects can make a significant difference to the ac conductance, and the total ac current is not equal to the simple average of the non-self-consistently calculated conduction currents in the two contacts. This is illustrated by comparing the results obtained with and without the effect of the time-dependent charge density included correctly. Some possible experimental scenarios to observe these effects are suggested.

  18. Alchemical and structural distribution based representation for universal quantum machine learning

    NASA Astrophysics Data System (ADS)

    Faber, Felix A.; Christensen, Anders S.; Huang, Bing; von Lilienfeld, O. Anatole

    2018-06-01

    We introduce a representation of any atom in any chemical environment for the automatized generation of universal kernel ridge regression-based quantum machine learning (QML) models of electronic properties, trained throughout chemical compound space. The representation is based on Gaussian distribution functions, scaled by power laws and explicitly accounting for structural as well as elemental degrees of freedom. The elemental components help us to lower the QML model's learning curve, and, through interpolation across the periodic table, even enable "alchemical extrapolation" to covalent bonding between elements not part of training. This point is demonstrated for the prediction of covalent binding in single, double, and triple bonds among main-group elements as well as for atomization energies in organic molecules. We present numerical evidence that resulting QML energy models, after training on a few thousand random training instances, reach chemical accuracy for out-of-sample compounds. Compound datasets studied include thousands of structurally and compositionally diverse organic molecules, non-covalently bonded protein side-chains, (H2O)40-clusters, and crystalline solids. Learning curves for QML models also indicate competitive predictive power for various other electronic ground state properties of organic molecules, calculated with hybrid density functional theory, including polarizability, heat-capacity, HOMO-LUMO eigenvalues and gap, zero point vibrational energy, dipole moment, and highest vibrational fundamental frequency.
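
    Kernel ridge regression of the kind underlying these QML models has a closed-form fit: solve (K + λI)α = y and predict with kernel evaluations against the training set. A minimal sketch with a Gaussian kernel on toy 1-D data (the chemical representation itself, the Gaussian distribution functions over atomic environments, is not reproduced here; the target function and hyperparameters are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    """Pairwise Gaussian kernel exp(-||x - x'||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, sigma=0.2, lam=1e-6):
    """Solve (K + lam I) alpha = y; returns the regression weights alpha."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(X_train, alpha, X_new, sigma=0.2):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy target standing in for an electronic property: f(x) = sin(3x)
X = np.linspace(0, 2, 40)[:, None]
y = np.sin(3 * X[:, 0])
alpha = krr_fit(X, y)
X_test = np.array([[0.7], [1.3]])
preds = krr_predict(X, alpha, X_test)
print(preds)                           # approximately sin(2.1) and sin(3.9)
```

    Learning curves like those in the abstract are produced by repeating this fit on growing random subsets of the training data and plotting out-of-sample error against training-set size.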

  19. Quantum ratchet effect in a time non-uniform double-kicked model

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Wang, Zhen-Yu; Hui, Wu; Chu, Cheng-Yu; Chai, Ji-Min; Xiao, Jin; Zhao, Yu; Ma, Jin-Xiang

    2017-07-01

    The quantum ratchet effect means that directed transport emerges in a quantum system without a net force. The delta-kicked model is a quantum Hamiltonian model for the quantum ratchet effect. This paper investigates the quantum ratchet effect based on a time non-uniform double-kicked model, in which two flashing potentials alternately act on a particle with a homogeneous initial state of zero momentum, while the intervals between adjacent actions are not equal. The evolution equation of the state of the particle is derived from its Schrödinger equation, and the numerical method to solve the evolution equation is pointed out. The results show that quantum resonances can induce the ratchet effect in this time non-uniform double-kicked model under certain conditions; some quantum resonances, which cannot induce the ratchet effect in previous models, can induce the ratchet effect in this model, and the strengths of the ratchet effect in this model are stronger than those in previous models under certain conditions. These results enrich people's understanding of the delta-kicked model, and provide a new optional scheme to control the quantum transport of cold atoms in experiment.
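
    Evolution of this kind, free propagation alternating with instantaneous kicks, is commonly integrated with a split-step (FFT) scheme: kicks are diagonal in the angle basis and free evolution is diagonal in the momentum basis. A generic sketch of a double-kicked particle with unequal intervals; the kick strengths, intervals, and grid size are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

M = 256
theta = 2 * np.pi * np.arange(M) / M
m = np.fft.fftfreq(M, d=1.0 / M)              # integer momentum quantum numbers

K1, K2 = 1.5, 2.0                             # strengths of the two flashing potentials
T1, T2 = 1.0, 2.3                             # unequal intervals (time non-uniform)

psi = np.full(M, 1.0 / np.sqrt(M), complex)   # homogeneous zero-momentum initial state

def kick(psi, K):
    """Instantaneous kick: multiply by exp(-i K cos(theta)) in angle space."""
    return np.exp(-1j * K * np.cos(theta)) * psi

def free(psi, T):
    """Free evolution: phase exp(-i m^2 T / 2) in momentum space."""
    c = np.fft.fft(psi, norm="ortho")
    return np.fft.ifft(np.exp(-1j * m**2 * T / 2) * c, norm="ortho")

for _ in range(50):                           # 50 double-kick cycles
    psi = free(kick(psi, K1), T1)
    psi = free(kick(psi, K2), T2)

c = np.fft.fft(psi, norm="ortho")
norm = (np.abs(psi) ** 2).sum()
mean_p = (m * np.abs(c) ** 2).sum()           # directed current <p>
print(f"norm = {norm:.6f}, <p> = {mean_p:.4f}")
```

    A non-zero time-averaged ⟨p⟩ at a quantum resonance signals the ratchet; detecting it requires scanning the intervals and strengths, which this sketch leaves to the reader.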

  20. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    The loading factor at the design point is calculated by one or another empirical formula in classical design methods. Performance modelling as a whole is out of consideration. Test data of compressor stages demonstrate that the loading factor versus flow coefficient at the impeller exit has a linear character independent of compressibility. The known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The proper formulae include empirical coefficients. A good modelling result is possible if the choice of coefficients is based on experience and close analogs. Earlier, Y. Galerkin and K. Soldatova proposed defining the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed. The calculation error lies in the range of ±1.5%. The alternative model of loading factor performance modelling is included in new versions of the Universal Modelling Method.
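
    The two-point linear model described above is elementary: the loading factor at zero flow and at the design point fix a straight line in the loading factor versus flow coefficient plane. A sketch of the interpolation (the symbol names and sample values are illustrative assumptions, not the paper's coefficients):

```python
def loading_factor(phi, psi_0, psi_design, phi_design):
    """Linear loading-factor performance: psi(phi) on the line through
    psi_0 at zero flow rate and psi_design at the design flow coefficient."""
    slope = (psi_design - psi_0) / phi_design
    return psi_0 + slope * phi

# Illustrative values: psi_0 = 0.90 at zero flow, psi = 0.60 at phi = 0.28
for phi in (0.0, 0.14, 0.28, 0.35):
    print(f"phi = {phi:.2f}: psi = {loading_factor(phi, 0.90, 0.60, 0.28):.3f}")
```

    The empirical content of the method lies entirely in predicting the two anchor values from stage geometry, which the universal coefficients are fitted to do.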

  1. Zero dimensional model of atmospheric SMD discharge and afterglow in humid air

    NASA Astrophysics Data System (ADS)

    Smith, Ryan; Kemaneci, Efe; Offerhaus, Bjoern; Stapelmann, Katharina; Peter Brinkmann, Ralph

    2016-09-01

    A novel mesh-like Surface Micro Discharge (SMD) device designed for surface wound treatment is simulated by multiple time-scaled zero-dimensional models. The chemical dynamics of the discharge are resolved in time at atmospheric pressure in humid conditions. The simulation tracks the particle densities of electrons, 26 ionic species, and 26 reactive neutral species, including O3, NO, and HNO3. The 53 species are coupled through 624 reactions within the simulated plasma discharge volume. The neutral species are allowed to diffuse into a diffusive gas regime, which is of primary interest. Two interdependent zero-dimensional models separated by nine orders of magnitude in temporal resolution are used to accomplish this, thereby reducing the computational load. Through variation of control parameters such as ignition frequency, deposited power density, duty cycle, humidity level, and N2 content, the ideal operating conditions for the SMD device can be predicted. The model has been verified by matching simulation parameters and comparing results to those of previous works. Current operating conditions of the experimental mesh-like SMD were matched and the results are compared to the simulations. Work supported by SFB TR 87.

  2. Analytical YORP torques model with an improved temperature distribution function

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Vokrouhlický, D.; Nesvorný, D.

    2010-01-01

    Previous models of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect relied either on the zero thermal conductivity assumption or on solutions of the heat conduction equations assuming an infinite body size. We present the first YORP solution accounting for a finite size and the non-radial direction of the surface normal vectors in the temperature distribution. The new thermal model implies that the YORP effect on the rotation rate depends on the asteroid's conductivity. It is shown that the effect on small objects does not scale as the inverse square of the diameter, but rather as the first power of the inverse.

  3. Adaptation of a zero-dimensional cylinder pressure model for diesel engines using the crankshaft rotational speed

    NASA Astrophysics Data System (ADS)

    Weißenborn, E.; Bossmeyer, T.; Bertram, T.

    2011-08-01

    Tighter emission regulations are driving the development of advanced engine control strategies relying on feedback information from the combustion chamber. In this context, alternatives to expensive in-cylinder pressure sensors are especially sought. The present study addresses these issues by pursuing a simulation-based approach. It focuses on the extension of an empirical, zero-dimensional cylinder pressure model using the engine speed signal in order to detect cylinder-wise variations in combustion. As a special feature, only information available from the standard sensor configuration is utilized. Within the study, different methods for the model-based reconstruction of the combustion pressure, including nonlinear Kalman filtering, are compared. As a result, the accuracy of the cylinder pressure model can be enhanced. At the same time, the inevitable limitations of the proposed methods are outlined.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Ootegem, Luc; SHERPPA — Ghent University; Verhofstadt, Elsy

    Depth-damage functions, relating the monetary flood damage to the depth of the inundation, are commonly used in the case of fluvial floods (floods caused by a river overflowing). We construct four multivariate damage models for pluvial floods (caused by extreme rainfall) by differentiating on the one hand between ground floor floods and basement floods and on the other hand between damage to residential buildings and damage to housing contents. We do not only take into account the effect of flood depth on damage, but also incorporate the effects of non-hazard indicators (building characteristics, behavioural indicators and socio-economic variables). By using a Tobit-estimation technique on identified victims of pluvial floods in Flanders (Belgium), we take into account the effect of cases of reported zero damage. Our results show that flood depth is an important predictor of damage, but with a diverging impact between ground floor floods and basement floods. Non-hazard indicators are also important. For example, being aware of the risk just before the water enters the building reduces content damage considerably, underlining the importance of warning systems and policy in this case of pluvial floods. - Highlights: • Prediction of damage of pluvial floods using also non-hazard information • We include 'no damage cases' using a Tobit model. • The effect of flood depth on damage is stronger for ground floor than for basement floods. • Non-hazard indicators are especially important for content damage. • Potential gain of policies that increase awareness of flood risks.
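    The Tobit idea used above (treating reported zero damage as left-censored observations of a latent damage variable) can be sketched on synthetic flood-style data; the variables, coefficients, and noise level below are invented for illustration:

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)

    # Synthetic data (illustrative only): latent damage depends linearly on
    # flood depth; observed damage is left-censored at zero ("no damage").
    n = 500
    depth = rng.uniform(0.0, 1.0, n)
    latent = -0.5 + 2.0 * depth + rng.normal(0.0, 0.5, n)
    damage = np.maximum(latent, 0.0)          # zeros are censored observations

    X = np.column_stack([np.ones(n), depth])

    def tobit_nll(params):
        """Negative log-likelihood of a Tobit model, left-censored at 0."""
        beta, log_sigma = params[:-1], params[-1]
        sigma = np.exp(log_sigma)
        mu = X @ beta
        ll = np.where(
            damage > 0,
            stats.norm.logpdf(damage, loc=mu, scale=sigma),  # uncensored part
            stats.norm.logcdf(-mu / sigma),                  # P(latent <= 0)
        )
        return -ll.sum()

    res = optimize.minimize(tobit_nll, x0=np.zeros(3), method="BFGS")
    b0, b1 = res.x[0], res.x[1]
    print(b0, b1)   # should recover roughly (-0.5, 2.0)
    ```

    Unlike ordinary least squares on the observed damages, the censored likelihood keeps the zero-damage cases from biasing the depth coefficient downward.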

  5. Stepwise formation of H3O(+)(H2O)n in an ion drift tube: Empirical effective temperature of association/dissociation reaction equilibrium in an electric field.

    PubMed

    Nakai, Yoichi; Hidaka, Hiroshi; Watanabe, Naoki; Kojima, Takao M

    2016-06-14

    We measured equilibrium constants for H3O(+)(H2O)n-1 + H2O↔H3O(+)(H2O)n (n = 4-9) reactions taking place in an ion drift tube with various applied electric fields at gas temperatures of 238-330 K. The zero-field reaction equilibrium constants were determined by extrapolation of those obtained at non-zero electric fields. From the zero-field reaction equilibrium constants, the standard enthalpy and entropy changes, ΔHn,n-1 (0) and ΔSn,n-1 (0), of stepwise association for n = 4-8 were derived and were in reasonable agreement with those measured in previous studies. We also examined the electric field dependence of the reaction equilibrium constants at non-zero electric fields for n = 4-8. An effective temperature for the reaction equilibrium constants at non-zero electric field was empirically obtained using a parameter describing the electric field dependence of the reaction equilibrium constants. Furthermore, the size dependence of the parameter was thought to reflect the evolution of the hydrogen-bond structure of H3O(+)(H2O)n with the cluster size. The reflection of structural information in the electric field dependence of the reaction equilibria is particularly noteworthy.
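    The extraction of standard enthalpy and entropy changes from the zero-field equilibrium constants described above follows a van't Hoff analysis; a minimal sketch on invented numbers:

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    # Illustrative van't Hoff analysis: recover dH and dS from equilibrium
    # constants at several temperatures. The "true" values below are invented,
    # not the measured values for H3O+(H2O)n clusters.
    dH = -80e3          # J/mol
    dS = -120.0         # J/(mol K)
    T = np.array([238.0, 260.0, 290.0, 330.0])   # gas temperatures, K
    lnK = -dH / (R * T) + dS / R                 # van't Hoff: ln K = -dH/RT + dS/R

    # Linear fit of ln K against 1/T recovers both quantities
    slope, intercept = np.polyfit(1.0 / T, lnK, 1)
    dH_fit = -slope * R       # slope = -dH/R
    dS_fit = intercept * R    # intercept = dS/R
    print(dH_fit, dS_fit)
    ```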

  6. Extending the Applicability of the Generalized Likelihood Function for Zero-Inflated Data Series

    NASA Astrophysics Data System (ADS)

    Oliveira, Debora Y.; Chaffe, Pedro L. B.; Sá, João. H. M.

    2018-03-01

    Proper uncertainty estimation for data series with a high proportion of zero and near zero observations has been a challenge in hydrologic studies. This technical note proposes a modification to the Generalized Likelihood function that accounts for zero inflation of the error distribution (ZI-GL). We compare the performance of the proposed ZI-GL with the original Generalized Likelihood function using the entire data series (GL) and by simply suppressing zero observations (GLy>0). These approaches were applied to two interception modeling examples characterized by data series with a significant number of zeros. The ZI-GL produced better uncertainty ranges than the GL as measured by the precision, reliability and volumetric bias metrics. The comparison between ZI-GL and GLy>0 highlights the need for further improvement in the treatment of residuals from near zero simulations when a linear heteroscedastic error model is considered. Aside from the interception modeling examples illustrated herein, the proposed ZI-GL may be useful for other hydrologic studies, such as for the modeling of the runoff generation in hillslopes and ephemeral catchments.
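    A toy sketch of the zero-inflation idea behind the ZI-GL: the likelihood mixes a point mass for exact-zero observations with a Gaussian error density for the rest. This is a simplified stand-in, not the actual ZI-GL of the technical note (which also treats heteroscedasticity and other error properties):

    ```python
    import numpy as np
    from scipy import stats

    def zi_gaussian_loglik(obs, sim, pi0, sigma):
        """Toy zero-inflated Gaussian log-likelihood of simulation `sim`
        against observations `obs`. With probability pi0 an observation is an
        exact zero (point mass); otherwise obs = sim + Normal(0, sigma).
        Illustrative only; mixing a mass and a density is the usual ZI idiom.
        """
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        dens = stats.norm.pdf(obs - sim, scale=sigma)    # continuous part
        zero = obs == 0.0
        like = np.where(zero, pi0 + (1.0 - pi0) * dens, (1.0 - pi0) * dens)
        return np.log(like).sum()

    # Zero-heavy series, e.g. modeled throughfall in an interception study
    obs = [0.0, 0.0, 0.0, 1.2, 0.4, 0.0, 2.1]
    sim = [0.1, 0.0, 0.2, 1.0, 0.5, 0.0, 1.8]
    ll = zi_gaussian_loglik(obs, sim, pi0=0.4, sigma=0.3)
    print(ll)
    ```

    A plain Gaussian likelihood (pi0 = 0) would have to explain every exact zero through the continuous error model, which is what degrades uncertainty ranges on zero-inflated series.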

  7. Hierarchies from D-brane instantons in globally defined calabi-yau orientifolds

    DOE PAGES

    Cvetič, Mirjam; Weigand, Timo

    2008-06-01

    We construct the first globally consistent semi-realistic Type I string vacua on an elliptically fibered manifold where the zero modes of the Euclidean D1-instanton sector allow for the generation of non-perturbative Majorana masses of an intermediate scale. In another class of global models, a D1-brane instanton can generate a Polonyi-type superpotential breaking supersymmetry at an exponentially suppressed scale.

  8. Time-domain simulation of constitutive relations for nonlinear acoustics including relaxation for frequency power law attenuation media modeling

    NASA Astrophysics Data System (ADS)

    Jiménez, Noé; Camarena, Francisco; Redondo, Javier; Sánchez-Morcillo, Víctor; Konofagou, Elisa E.

    2015-10-01

    We report a numerical method for solving the constitutive relations of nonlinear acoustics, where multiple relaxation processes are included in a generalized formulation that allows time-domain numerical solution by an explicit finite-difference scheme. The proposed physical model thus overcomes the limitations of one-way Khokhlov-Zabolotskaya-Kuznetsov (KZK) type models and, because the Lagrangian density is implicitly included in the calculation, it also overcomes the limitations of the Westervelt equation in complex configurations for medical ultrasound. In order to model frequency power law attenuation and dispersion, such as observed in biological media, the relaxation parameters are fitted both to exact frequency power law attenuation/dispersion media and to empirically measured attenuation of a variety of tissues that does not fit an exact power law. Finally, a computational technique based on artificial relaxation is included to correct the non-negligible numerical dispersion of the finite-difference scheme and to improve stability through artificial attenuation when shock waves are present. This technique avoids the use of high-order finite-difference schemes, leading to fast calculations. The present algorithm is especially suited for practical configurations where spatial discontinuities are present in the domain (e.g. axisymmetric domains or zero normal velocity boundary conditions in general). The accuracy of the method is discussed by comparing the proposed simulation solutions to one-dimensional analytical and k-space numerical solutions.

  9. Heat Models of Asteroids and the YORP Effect

    NASA Astrophysics Data System (ADS)

    Golubov, O.

    The Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect is a torque due to light pressure recoil forces acting on an asteroid. We show how this torque can be expressed as an integral of a universal function over the surface of an asteroid, and discuss generalizations of this expression to non-Lambertian scattering laws, non-convex asteroid shapes, and non-zero heat conductivity. We then discuss tangential YORP (TYORP), which appears due to uneven heat conduction in stones lying on the surface of an asteroid. TYORP manifests itself as a drag that pulls the surface in the tangential direction. Finally, we discuss the relation and interplay between the normal YORP and the tangential YORP.

  10. Kinetic modelling of non-enzymatic browning and changes of physio-chemical parameters of peach juice during storage.

    PubMed

    Lyu, Jian; Liu, Xuan; Bi, Jinfeng; Wu, Xinye; Zhou, Linyan; Ruan, Weihong; Zhou, Mo; Jiao, Yi

    2018-03-01

    The kinetics of non-enzymatic browning and loss of free amino acids during storage at different temperatures (4, 25, and 37 °C) were investigated. Changes in browning degree (A420), color parameters, vitamin C (Vc), free amino acids and 5-hydroxymethylfurfural (5-HMF) were analyzed to evaluate the non-enzymatic browning reactions, which were significantly affected by storage temperature. The lower temperature (4 °C) decreased the loss of Vc and the generation of 5-HMF, but induced the highest loss of serine. By the end of storage, serine, alanine and aspartic acid were the amino acids mainly lost. Results showed that the zero-order kinetic model (R² > 0.859), the first-order model (R² > 0.926) and the combined kinetic model (R² > 0.916) were the most appropriate to describe, respectively, the changes of a* and b* values, the degradation of Vc, and the changes of A420, L* and 5-HMF at the different storage temperatures. These kinetic models can be applied for predicting and minimizing the non-enzymatic browning of fresh peach juice during storage.
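    Fitting zero-order (C = C0 − kt) and first-order (C = C0·e^(−kt)) kinetic models to storage data of the kind described above can be sketched with a least-squares fit; the time series below is synthetic, not the paper's measurements:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic decay of a quality attribute (e.g. vitamin C) over storage time
    t = np.array([0.0, 5.0, 10.0, 20.0, 30.0])   # storage time, days
    C = 50.0 * np.exp(-0.08 * t)                 # generated as first-order decay

    def zero_order(t, C0, k):      # C = C0 - k t
        return C0 - k * t

    def first_order(t, C0, k):     # C = C0 * exp(-k t)
        return C0 * np.exp(-k * t)

    for name, model in [("zero-order", zero_order), ("first-order", first_order)]:
        p, _ = curve_fit(model, t, C, p0=[50.0, 0.05])
        resid = C - model(t, *p)
        r2 = 1.0 - np.sum(resid**2) / np.sum((C - C.mean())**2)
        print(f"{name}: C0={p[0]:.2f}, k={p[1]:.3f}, R^2={r2:.3f}")
    ```

    Comparing R² across the candidate models, as in the abstract, is what identifies which order best describes each attribute.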

  11. Mathematical Analysis of a Coarsening Model with Local Interactions

    NASA Astrophysics Data System (ADS)

    Helmers, Michael; Niethammer, Barbara; Velázquez, Juan J. L.

    2016-10-01

    We consider particles on a one-dimensional lattice whose evolution is governed by nearest-neighbor interactions where particles that have reached size zero are removed from the system. Concentrating on configurations with infinitely many particles, we prove existence of solutions under a reasonable density assumption on the initial data and show that the vanishing of particles and the localized interactions can lead to non-uniqueness. Moreover, we provide a rigorous upper coarsening estimate and discuss generic statistical properties as well as some non-generic behavior of the evolution by means of heuristic arguments and numerical observations.

  12. Avoiding revenue loss due to 'lesser of' contract clauses.

    PubMed

    Stodolak, Frederick; Gutierrez, Henry

    2014-08-01

    Finance managers seeking to avoid lost revenue attributable to lesser-of-charge-or-fixed-fee (lesser-of) clauses in their contracts should: Identify payer contracts that contain lesser-of clauses. Prepare lesser-of lost-revenue reports for non-bundled and bundled rates. For claims with covered charges below the bundled rate, identify service codes associated with the greatest proportion of total gross revenue and determine new, higher charge levels for those codes. Establish an approach for setting charges for non-bundled fee schedules to address lost-revenue-related issues. Incorporate changes into overall strategic or hospital zero-based pricing modeling and parameters.

  13. Elastic Gauge Fields in Weyl Semimetals

    NASA Astrophysics Data System (ADS)

    Cortijo, Alberto; Ferreiros, Yago; Landsteiner, Karl; Hernandez Vozmediano, Maria Angeles

    We show that, as happens in graphene, elastic deformations couple to the electronic degrees of freedom as pseudo gauge fields in Weyl semimetals. We derive the form of the elastic gauge fields in a tight-binding model hosting Weyl nodes and see that this vector electron-phonon coupling is chiral, providing an example of axial gauge fields in three dimensions. As an example of the new response functions that arise from these elastic gauge fields, we derive a non-zero phonon Hall viscosity for the neutral system at zero temperature. The axial nature of the fields provides a test of the chiral anomaly in high energy physics with three axial vector couplings. Supported by European Union structural funds and the Comunidad de Madrid MAD2D-CM Program (S2013/MIT-3007).

  14. Zero temperature coefficient of resistance of the electrical-breakdown path in ultrathin hafnia

    NASA Astrophysics Data System (ADS)

    Zhang, H. Z.; Ang, D. S.

    2017-09-01

    The recent widespread attention on the use of the non-volatile resistance switching property of a microscopic oxide region after electrical breakdown for memory applications has prompted basic interest in the conduction properties of the breakdown region. Here, we report an interesting crossover from a negative to a positive temperature dependence of the resistance of a breakdown region in ultrathin hafnia as the applied voltage is increased. As a consequence, a near-zero temperature coefficient of resistance is obtained at the crossover voltage. The behavior may be modeled by (1) a tunneling-limited transport involving two farthest-spaced defects along the conduction path at low voltage and (2) a subsequent transition to a scattering-limited transport after the barrier is overcome by a larger applied voltage.

  15. Rigorous description of holograms of particles illuminated by an astigmatic elliptical Gaussian beam

    NASA Astrophysics Data System (ADS)

    Yuan, Y. J.; Ren, K. F.; Coëtmellec, S.; Lebrun, D.

    2009-02-01

    Digital holography is a non-intrusive optical metrology technique well adapted to measuring the size and velocity field of particles in the spray of a fluid. The simplified model of an opaque disk is often used in the treatment of the holograms, so that the refraction and the third-dimension diffraction of the particle are not taken into account. We present in this paper a rigorous description of the holographic diagrams and evaluate the effects of the refraction and the third-dimension diffraction by comparison with the opaque disk model. It is found that these effects are important when the real part of the refractive index is near unity or the imaginary part is non-zero but small.

  16. SEMIPARAMETRIC ZERO-INFLATED MODELING IN MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA)

    PubMed Central

    Liu, Hai; Ma, Shuangge; Kronmal, Richard; Chan, Kung-Sik

    2013-01-01

    We analyze the Agatston score of coronary artery calcium (CAC) from the Multi-Ethnic Study of Atherosclerosis (MESA) using a semiparametric zero-inflated modeling approach, where the observed CAC scores from this cohort consist of a high frequency of zeroes and continuously distributed positive values. Both partially constrained and unconstrained models are considered to investigate the underlying biological processes of CAC development from zero to positive, and from small amounts to large amounts. Unlike existing studies, a model selection procedure based on likelihood cross-validation is adopted to identify the optimal model, which is justified by comparative Monte Carlo studies. A shrinkage version of the cubic regression spline is used for model estimation and variable selection simultaneously. When applying the proposed methods to the MESA data analysis, we show that the two biological mechanisms influencing the initiation of CAC and the magnitude of CAC when it is positive are better characterized by an unconstrained zero-inflated normal model. Our results are significantly different from those in published studies, and may provide further insights into the biological mechanisms underlying CAC development in humans. This highly flexible statistical framework can be applied to zero-inflated data analyses in other areas. PMID:23805172

  17. Labour market income inequality and mortality in North American metropolitan areas.

    PubMed

    Sanmartin, C; Ross, N A; Tremblay, S; Wolfson, M; Dunn, J R; Lynch, J

    2003-10-01

    To investigate relations between labour market income inequality and mortality in North American metropolitan areas. An ecological cross sectional study of relations between income inequality and working age (25-64 years) mortality in 53 Canadian (1991) and 282 US (1990) metropolitan areas using four measures of income inequality. Two labour market income concepts were used: labour market income for households with non-trivial attachment to the labour market and labour market income for all households, including those with zero and negative incomes. Relations were assessed with weighted and unweighted bivariate and multiple regression analyses. US metropolitan areas were more unequal than their Canadian counterparts, across inequality measures and income concepts. The association between labour market income inequality and working age mortality was robust in the US to both the inequality measure and income concept, but the association was inconsistent in Canada. Three of four inequality measures were significantly related to mortality in Canada when households with zero and negative incomes were included. In North American models, increases in earnings inequality were associated with hypothetical increases in working age mortality rates of between 23 and 33 deaths per 100 000, even after adjustment for median metropolitan incomes. This analysis of labour market inequality provides more evidence regarding the robust nature of the relation between income inequality and mortality in the US. It also provides a more refined understanding of the nature of the relation in Canada, pointing to the role of unemployment in generating Canadian metropolitan level health inequalities.

  18. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... lamblia zero (2) Viruses zero (3) Legionella zero (4) Total coliforms (including fecal coliforms and Escherichia coli) zero. (5) Cryptosporidium zero. [54 FR 27527, 27566, June 29, 1989; 55 FR 25064, June 19...

  19. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... lamblia zero (2) Viruses zero (3) Legionella zero (4) Total coliforms (including fecal coliforms and Escherichia coli) zero. (5) Cryptosporidium zero. [54 FR 27527, 27566, June 29, 1989; 55 FR 25064, June 19...

  20. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... lamblia zero (2) Viruses zero (3) Legionella zero (4) Total coliforms (including fecal coliforms and Escherichia coli) zero. (5) Cryptosporidium zero. [54 FR 27527, 27566, June 29, 1989; 55 FR 25064, June 19...

  1. Finite temperature m=0 upper-hybrid modes in a non-neutral plasma, theory and simulation.

    NASA Astrophysics Data System (ADS)

    Hart, Grant W.; Takeshi Nakata, M.; Spencer, Ross L.

    2007-11-01

    Axisymmetric upper-hybrid oscillations have been known to exist in non-neutral plasmas and FTICR/MS devices for a number of years^1,2. However, because they are electrostatic in nature and axisymmetric, they are self-shielding and therefore difficult to detect in long systems. Previous theoretical studies have assumed a zero temperature plasma. In the zero temperature limit these oscillations are not properly represented as a mode, because the frequency at a given radius depends only on the local density and is not coupled to neighboring radii, much like the zero temperature plasma oscillation. Finite temperature provides the coupling which links the oscillation into a coherent mode. We have analyzed the finite-temperature theory of these modes and find that they form an infinite set of modes with frequencies above √(ω_c² − ω_p²). For a constant density plasma the eigenmodes are Bessel functions. For a more general plasma the eigenmodes must be numerically calculated. We have simulated these modes in our r-θ particle-in-cell code that includes a full Lorentz-force mover^3 and find that the eigenmodes correspond well with the theory. ^1 J.J. Bollinger, et al., Phys. Rev. A 48, 525 (1993). ^2 S.E. Barlow, et al., Int. J. Mass Spectrom. Ion Processes 74, 97 (1986). ^3 M. Takeshi Nakata, et al., Bull. Am. Phys. Soc. 51, 245 (2006).

  2. Finite temperature m=0 Bernstein modes in a non-neutral plasma, theory and simulation

    NASA Astrophysics Data System (ADS)

    Hart, Grant W.; Spencer, Ross L.; Takeshi Nakata, M.

    2008-11-01

    Axisymmetric upper-hybrid oscillations have been known to exist in non-neutral plasmas and FTICR/MS devices for a number of years. However, because they are electrostatic in nature and axisymmetric, they are self-shielding and therefore difficult to detect in long systems. Previous theoretical studies have assumed a zero temperature plasma. In the zero temperature limit these oscillations are not properly represented as a mode, because the frequency at a given radius depends only on the local density and is not coupled to neighboring radii, much like the zero temperature plasma oscillation. Finite temperature provides the coupling which links the oscillation into a coherent mode. We have analyzed the finite-temperature theory of these modes and find that they form an infinite set of modes with frequencies above √(ω_c² − ω_p²). We have simulated these modes in our r-θ particle-in-cell code that includes a full Lorentz-force mover and find that in a mostly flat-top plasma there are two eigenmodes that have essentially the same shape in the bulk of the plasma, but different frequencies. It appears likely that they have different boundary conditions in the boundary region. J.J. Bollinger, et al., Phys. Rev. A 48, 525 (1993). S.E. Barlow, et al., Int. J. Mass Spectrom. Ion Processes 74, 97 (1986). M. Takeshi Nakata, et al., Bull. Am. Phys. Soc. 51, 245 (2006).

  3. Design of fluidic self-assembly bonds for precise component positioning

    NASA Astrophysics Data System (ADS)

    Ramadoss, Vivek; Crane, Nathan B.

    2008-02-01

    Self-assembly is a promising alternative to conventional pick-and-place robotic assembly of micro components. Its benefits include parallel integration of parts with low equipment costs. Various approaches to self-assembly have been demonstrated, yet demanding applications like the assembly of micro-optical devices require increased positioning accuracy. This paper proposes a new method for the design of self-assembly bonds that addresses this need. Current methods have zero force at the desired assembly position and low stiffness, which allows small disturbance forces to create significant positioning errors. The proposed method uses a substrate assembly feature to provide a high-accuracy alignment guide for the part. The capillary bond region of the part and substrate are then modified to create a non-zero positioning force that maintains the part in the desired assembly position. Capillary force models show that this force aligns the part to the substrate assembly feature and reduces the sensitivity of part position to process variation. Thus, the new configuration can substantially improve the positioning accuracy of capillary self-assembly, resulting in a dramatic decrease in positioning errors in the micro parts. Various binding site designs are analyzed and guidelines are proposed for the design of an effective assembly bond using this new approach.

  4. Trajectories of Delinquency from Age 14 to 23 in the National Longitudinal Survey of Youth Sample

    PubMed Central

    Murphy, Debra A.; Brecht, Mary-Lynn; Huang, David; Herbeck, Diane M.

    2012-01-01

    This study utilized data from the National Longitudinal Survey of Youth to investigate risk trajectories for delinquency and factors associated with different trajectories, particularly substance use. The sample (N = 8,984) was 49% female. A group-based trajectory model was applied, which identified four distinct trajectories for both males and females: (1) a High group with delinquency rates consistently higher than other groups, with some decrease across the age range; (2) a Decreased group, beginning at high levels with substantial decrease to near zero; (3) a Moderate group experiencing some decline but remaining at moderate rates of delinquency through most of the age range; and (4) a consistently Low group, having low rates of delinquency declining to near zero by mid- to late-teens. The Low group was distinguished by several protective factors, including higher rates of maternal authoritative parenting style, possible lower acculturation (higher rates of non-English spoken at home), higher rates of religious activity, later substance use initiation, lower rates of early delinquent activity, less early experience with neighborhood or personal violence, and higher rates of perceiving penalty for wrongdoing. Conversely, the High group was characterized by several vulnerability factors—essentially the converse of the protective factors above. PMID:23105164

  5. Recasting a model atomistic glassformer as a system of icosahedra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinney, Rhiannon; Bristol Centre for Complexity Science, University of Bristol, Bristol BS8 1TS; Liverpool, Tanniemola B.

    2015-12-28

    We consider a binary Lennard-Jones glassformer whose super-Arrhenius dynamics are correlated with the formation of icosahedral structures. Upon cooling, these icosahedra organize into mesoclusters. We recast this glassformer as an effective system of icosahedra, which we describe with a population dynamics model. We parameterize this model with data from the temperature regime accessible to molecular dynamics simulations. We then use the model to determine the population of icosahedra in mesoclusters at arbitrary temperature. Using simulation data to incorporate dynamics into the model, we predict relaxation behavior at temperatures inaccessible to conventional approaches. Our model predicts super-Arrhenius dynamics whose relaxation time remains finite for non-zero temperature.

  6. An Active Z Gravity Compensation System

    DTIC Science & Technology

    1992-07-01

    is necessary to convert the modified digital controller back into continuous time, assuming a zero-order hold for output, and using the Padé ...most likely higher frequency pole-zero pairs introduced by the motor and torque servo, these are generally non-oscillatory, and small in amplitude...on the output of the PI control. The detection scheme is the following: if the output of the fuzzy controller has remained zero (static system) for

  7. Large magnetoresistance dips and perfect spin-valley filter induced by topological phase transitions in silicene

    NASA Astrophysics Data System (ADS)

    Prarokijjak, Worasak; Soodchomshom, Bumned

    2018-04-01

    Spin-valley transport and magnetoresistance are investigated in a silicene-based N/TB/N/TB/N junction, where N and TB are normal silicene and topological barrier regions. The topological phase transitions in the TBs are controlled by electric and exchange fields and circularly polarized light. We find that, by applying electric and exchange fields, four groups of spin-valley currents are perfectly filtered, directly induced by topological phase transitions. Control of currents carried by single, double and triple channels of spin-valley electrons in the silicene junction may be achievable by adjusting the magnitudes of the electric field, exchange field and circularly polarized light. The key factor behind the spin-valley current filtering at the transition points appears to be the zero and non-zero Chern numbers: electrons allowed to transport at the transition points must obey a zero Chern number, which is equivalent to zero mass and zero Berry curvature, while electrons with non-zero Chern number are perfectly suppressed. Very large magnetoresistance dips are found, directly induced by the topological phase transition points. Our study also discusses the effect of spin-valley dependent Hall conductivity at the transition points on ballistic transport and reveals the potential of silicene as a topological material for spin-valleytronics.

  8. Dynamics of quantum correlation and coherence for two atoms coupled with a bath of fluctuating massless scalar field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhiming, E-mail: 465609785@qq.com; Situ, Haozhen, E-mail: situhaozhen@gmail.com

    In this article, the dynamics of quantum correlation and coherence for two atoms interacting with a bath of fluctuating massless scalar field in the Minkowski vacuum is investigated. We first derive the master equation that describes the system evolution with an initial Bell-diagonal state. We then discuss the system evolution for three different initial states: a non-zero correlation separable state, a maximally entangled state and a zero correlation state. For the non-zero correlation initial separable state, quantum correlation and coherence can be protected from vacuum fluctuations during long time evolution when the separation between the two atoms is relatively small. For the maximally entangled initial state, quantum correlation and coherence decrease overall with evolution time. For the zero correlation initial state, however, quantum correlation and coherence are first generated and then drop with evolution time; when the separation is sufficiently small, they can survive the vacuum fluctuations. In all three cases, quantum correlation and coherence first decline and then fluctuate to relatively stable values as the distance between the two atoms increases. In particular, for the zero correlation initial state, quantum correlation and coherence revive periodically at fixed zero points, and the revival amplitude declines gradually with increasing separation of the two atoms.

  9. Unidimensional factor models imply weaker partial correlations than zero-order correlations.

    PubMed

    van Bork, Riet; Grasman, Raoul P P P; Waldorp, Lourens J

    2018-06-01

    In this paper we present a new implication of the unidimensional factor model. We prove that the partial correlation between two observed variables that load on one factor given any subset of other observed variables that load on this factor lies between zero and the zero-order correlation between these two observed variables. We implement this result in an empirical bootstrap test that rejects the unidimensional factor model when partial correlations are identified that are either stronger than the zero-order correlation or have a different sign than the zero-order correlation. We demonstrate the use of the test in an empirical data example with data consisting of fourteen items that measure extraversion.
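
    The bound in this record can be checked directly from the population correlations implied by a one-factor model. A minimal sketch (the loadings are hypothetical):

```python
import numpy as np

# Population correlations implied by a unidimensional factor model with
# standardized variables: corr(X_i, X_j) = lam_i * lam_j.
lam = np.array([0.8, 0.7, 0.6])        # hypothetical factor loadings
r12, r13, r23 = lam[0]*lam[1], lam[0]*lam[2], lam[1]*lam[2]

# Partial correlation of X1 and X2 controlling for X3.
r12_3 = (r12 - r13 * r23) / np.sqrt((1 - r13**2) * (1 - r23**2))

# r12_3 lies strictly between 0 and the zero-order correlation r12,
# as the paper's result requires.
```

    A partial correlation outside this interval, or with the opposite sign, would reject the unidimensional model under the proposed bootstrap test.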

  10. Statistical Models for the Analysis of Zero-Inflated Pain Intensity Numeric Rating Scale Data.

    PubMed

    Goulet, Joseph L; Buta, Eugenia; Bathulapalli, Harini; Gueorguieva, Ralitza; Brandt, Cynthia A

    2017-03-01

    Pain intensity is often measured in clinical and research settings using the 0 to 10 numeric rating scale (NRS). NRS scores are recorded as discrete values, and in some samples they may display a high proportion of zeroes and a right-skewed distribution. Despite this, statistical methods for normally distributed data are frequently used in the analysis of NRS data. We present results from an observational cross-sectional study examining the association of NRS scores with patient characteristics using data collected from a large cohort of 18,935 veterans in Department of Veterans Affairs care diagnosed with a potentially painful musculoskeletal disorder. The mean (variance) NRS pain was 3.0 (7.5), and 34% of patients reported no pain (NRS = 0). We compared the following statistical models for analyzing NRS scores: linear regression, generalized linear models (Poisson and negative binomial), zero-inflated and hurdle models for data with an excess of zeroes, and a cumulative logit model for ordinal data. We examined model fit, interpretability of results, and whether conclusions about the predictor effects changed across models. In this study, models that accommodate zero inflation provided a better fit than the other models. These models should be considered for the analysis of NRS data with a large proportion of zeroes. We examined and analyzed pain data from a large cohort of veterans with musculoskeletal disorders. We found that many reported no current pain on the NRS on the diagnosis date. We present several alternative statistical methods for the analysis of pain intensity data with a large proportion of zeroes. Published by Elsevier Inc.
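
    The excess-zero pattern that motivates these models is easy to reproduce. A minimal sketch with hypothetical mixture parameters, contrasting the observed zero fraction with the zero fraction a plain Poisson model of the same mean would predict:

```python
import numpy as np

# Hypothetical zero-inflated 0-10 NRS-like scores: a mixture of
# structural zeros and a capped Poisson count component.
rng = np.random.default_rng(1)
n = 10_000
structural_zero = rng.random(n) < 0.25          # 25% structural zeros
counts = rng.poisson(3.5, size=n)
y = np.where(structural_zero, 0, np.minimum(counts, 10))

frac0 = np.mean(y == 0)                 # observed zero fraction
poisson_frac0 = np.exp(-y.mean())       # zero fraction a Poisson model
                                        # with the same mean would predict
```

    The observed zero fraction far exceeds the Poisson prediction, which is exactly the misfit that zero-inflated and hurdle models are designed to absorb.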

  11. The Influence of the Enhanced Vector Meson Sector on the Properties of the Matter of Neutron Stars

    PubMed Central

    Bednarek, Ilona; Manka, Ryszard; Pienkos, Monika

    2014-01-01

    This paper gives an overview of the model of a neutron star with non-zero strangeness constructed within the framework of the nonlinear realization of the chiral symmetry. The emphasis is put on the physical properties of the matter of a neutron star as well as on its internal structure. The obtained solution is particularly aimed at the problem of constructing a theoretical model of neutron star matter with hyperons that yields a high value of the maximum mass. PMID:25188304

  12. A data-driven analysis for the heavy quark diffusion coefficient

    NASA Astrophysics Data System (ADS)

    Xu, Yingru; Nahrgang, Marlene; Cao, Shanshan; Bernhard, Jonah E.; Bass, Steffen A.

    2018-02-01

    We apply a Bayesian model-to-data analysis on an improved Langevin framework to estimate the temperature and momentum dependence of the heavy quark diffusion coefficient in the quark-gluon plasma (QGP). The spatial diffusion coefficient is found to have a minimum around 1-3 near Tc in the zero momentum limit, and has a non-trivial momentum dependence. With the estimated diffusion coefficient, our improved Langevin model is able to simultaneously describe the D-meson RAA and v2 in three different systems at RHIC and the LHC.

  13. A modified exponential behavioral economic demand model to better describe consumption data.

    PubMed

    Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K

    2015-12-01

    Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
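
    The exponentiated equation discussed above is Q(C) = Q0 * 10^(k * (exp(-alpha * Q0 * C) - 1)). A minimal sketch with hypothetical parameter values; because the model predicts consumption Q directly rather than log Q, observed consumption values of zero can enter a fit unchanged:

```python
import numpy as np

# Exponentiated demand equation (Koffarnus et al., 2015), hypothetical
# parameters: Q0 = demand intensity, alpha = elasticity, k = range of
# consumption in log10 units.
def demand(C, Q0=20.0, alpha=0.01, k=2.0):
    return Q0 * 10 ** (k * (np.exp(-alpha * Q0 * C) - 1.0))

prices = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
q = demand(prices)
# At zero price, Q(0) = Q0; consumption then falls monotonically as
# price rises, approaching Q0 * 10**(-k).
```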

  14. Impact of aviation non-CO₂ combustion effects on the environmental feasibility of alternative jet fuels.

    PubMed

    Stratton, Russell W; Wolfe, Philip J; Hileman, James I

    2011-12-15

    Alternative fuels represent a potential option for reducing the climate impacts of the aviation sector. The climate impacts of alternative fuels are traditionally considered as a ratio of life cycle greenhouse gas (GHG) emissions to those of the displaced petroleum product; however, this ignores the climate impacts of the non-CO₂ combustion effects from aircraft in the upper atmosphere. The results of this study show that including non-CO₂ combustion emissions and effects in the life cycle of a Synthetic Paraffinic Kerosene (SPK) fuel can lead to a decrease in the merit of the SPK fuel relative to conventional jet fuel. For example, an SPK fuel option with zero life cycle GHG emissions would offer a 100% reduction in GHG emissions but only a 48% reduction in actual climate impact, using a 100-year time window and the nominal climate modeling assumption set outlined herein. Therefore, climate change mitigation policies for aviation that rely exclusively on relative well-to-wake life cycle GHG emissions as a proxy for aviation climate impact may overestimate the benefit of alternative fuel use on the global climate system.

  15. Non-Normality and Testing that a Correlation Equals Zero

    ERIC Educational Resources Information Center

    Levy, Kenneth J.

    1977-01-01

    The importance of the assumption of normality for testing that a bivariate normal correlation equals zero is examined. Both empirical and theoretical evidence suggest that such tests are robust with respect to violation of the normality assumption. (Author/JKS)
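
    For reference, the test statistic whose robustness is examined here is the usual t transform of the sample correlation; a worked example with hypothetical numbers:

```python
from math import sqrt

# Testing H0: rho = 0 given a sample correlation r from n bivariate
# observations: t = r * sqrt(n - 2) / sqrt(1 - r**2), with n - 2
# degrees of freedom. The numbers below are illustrative.
r, n = 0.5, 27
t = r * sqrt(n - 2) / sqrt(1 - r**2)   # compare against t(25) critical values
```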

  16. Optimal Stand Management: Traditional and Neotraditional Solutions

    Treesearch

    Karen Lee Abt; Jeffrey P. Prestemon

    2003-01-01

    The traditional Faustmann (1849) model has served as the foundation of economic theory of the firm for the forestry production process. Since its introduction over 150 years ago, many variations of the Faustmann have been developed which relax certain assumptions of the traditional model, including constant prices, risk neutrality, zero production and management costs...

  17. The effects of the one-step replica symmetry breaking on the Sherrington-Kirkpatrick spin glass model in the presence of random field with a joint Gaussian probability density function for the exchange interactions and random fields

    NASA Astrophysics Data System (ADS)

    Hadjiagapiou, Ioannis A.; Velonakis, Ioannis N.

    2018-07-01

    The Sherrington-Kirkpatrick Ising spin glass model, in the presence of a random magnetic field, is investigated within the framework of the one-step replica symmetry breaking. The two random variables (exchange integral interaction J_ij and random magnetic field h_i) are drawn from a joint Gaussian probability density function characterized by a correlation coefficient ρ, assuming positive and negative values. The thermodynamic properties, the three different phase diagrams and the system's parameters are computed with respect to the natural parameters of the joint Gaussian probability density function at non-zero and zero temperatures. The low-temperature negative entropy controversy, a result of the replica symmetry approach, has been partly remedied in the current study, leading to a less negative result. In addition, the present system possesses two successive spin glass phase transitions with characteristic temperatures.

  18. Description, Analysis and Simulation of a New Realization of Digital Filters.

    DTIC Science & Technology

    1987-09-01

    [List-of-figures fragment from the report: an input together with its staircase representation h_c(t) (Fig. 6.2); the RDC LPF transfer function when T_d includes 2, 6, and 8 zeroes of h_c(t) (Figs. 6.3, 6.4, 6.5); the RDC LPF transfer function when T_d includes 6 zeroes of h_c(t) with rectangular and Hamming windows (Fig. 6.6); the input z(t) (Fig. 6.7).]

  19. From Zero Energy Buildings to Zero Energy Districts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polly, Ben; Kutscher, Chuck; Macumber, Dan

    Some U.S. cities are planning advanced districts that have goals for zero energy, water, waste, and/or greenhouse gas emissions. From an energy perspective, zero energy districts present unique opportunities to cost-effectively achieve high levels of energy efficiency and renewable energy penetration across a collection of buildings that may be infeasible at the individual building scale. These high levels of performance are accomplished through district energy systems that harness renewable and wasted energy at large scales and flexible building loads that coordinate with variable renewable energy supply. Unfortunately, stakeholders face a lack of documented processes, tools, and best practices to assist them in achieving zero energy districts. The National Renewable Energy Laboratory (NREL) is partnering on two new district projects in Denver: the National Western Center and the Sun Valley Neighborhood. We are working closely with project stakeholders in their zero energy master planning efforts to develop the resources needed to resolve barriers and create replicable processes to support future zero energy district efforts across the United States. Initial results of these efforts include the identification and description of key zero energy district design principles (maximizing building efficiency, solar potential, renewable thermal energy, and load control), economic drivers, and master planning principles. The work has also resulted in NREL making initial enhancements to the U.S. Department of Energy's open source building energy modeling platform (OpenStudio and EnergyPlus) with the long-term goal of supporting the design and optimization of energy districts.

  20. Photon-assisted tunneling through a topological superconductor with Majorana bound states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Han-Zhao; Zhang, Ying-Tao, E-mail: zhangyt@mail.hebtu.edu.cn; Liu, Jian-Jun, E-mail: liujj@mail.hebtu.edu.cn

    Employing the Keldysh nonequilibrium Green's function method, we investigate time-dependent transport through a topological superconductor with Majorana bound states in the presence of a high frequency microwave field. It is found that Majorana bound states driven by photon-assisted tunneling can absorb (emit) photons, and the resulting photon-assisted tunneling side band peaks can split the Majorana bound state, which then appears at non-zero bias. This splitting breaks with the prevailing view that Majorana bound states appear only at zero bias and thus provides a new experimental method for detecting Majorana bound states in the non-zero-energy mode. We demonstrate not only that the photon-assisted tunneling side band peaks are due to non-zero-energy Majorana bound states, but also that the height of the photon-assisted tunneling side band peaks is related to the intensity of the microwave field. It is further shown that the time-varying conductance induced by the Majorana bound states shows negative values for a certain period of time, which corresponds to a manifestation of the phase coherent time-varying behavior in mesoscopic systems.

  1. Modeling willingness to pay for land conservation easements: treatment of zero and protest bids and application and policy implications

    Treesearch

    Seong-Hoon Cho; Steven T. Yen; J. Michael Bowker; David H. Newman

    2008-01-01

    This study compares an ordered probit model and a Tobit model with selection to take into account both true zero and protest zero bids while estimating the willingness to pay (WTP) for conservation easements in Macon County, NC. By comparing the two models, the ordered/unordered selection issue of the protest responses is analyzed to demonstrate how the treatment of...

  2. Power selective optical filter devices and optical systems using same

    DOEpatents

    Koplow, Jeffrey P

    2014-10-07

    In an embodiment, a power selective optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure positioned to receive the input signal, which includes at least one substantially zero-order, zero-wave plate. The zero-order, zero-wave plate is configured to alter a polarization state of the input signal passing therethrough in a manner that depends on the power of the input signal. The zero-order, zero-wave plate includes an entry and exit wave plate each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive a signal output from the wave-plate structure and selectively transmits the signal based on the polarization state.

  3. Strongly Correlated Metal Built from Sachdev-Ye-Kitaev Models

    NASA Astrophysics Data System (ADS)

    Song, Xue-Yang; Jian, Chao-Ming; Balents, Leon

    2017-11-01

    Prominent systems like the high-Tc cuprates and heavy fermions display intriguing features going beyond the quasiparticle description. The Sachdev-Ye-Kitaev (SYK) model describes a (0+1)D quantum cluster with random all-to-all four-fermion interactions among N fermion modes which becomes exactly solvable as N → ∞, exhibiting a zero-dimensional non-Fermi-liquid with emergent conformal symmetry and complete absence of quasiparticles. Here we study a lattice of complex-fermion SYK dots with random intersite quadratic hopping. Combining the imaginary time path integral with the real time path integral formulation, we obtain a heavy Fermi liquid to incoherent metal crossover in full detail, including thermodynamics, low temperature Landau quasiparticle interactions, and both electrical and thermal conductivity at all scales. We find linear in temperature resistivity in the incoherent regime, and the Lorenz ratio L ≡ κρ/T varies between two universal values as a function of temperature. Our work exemplifies an analytically controlled study of a strongly correlated metal.

  4. Spherical Deconvolution of Multichannel Diffusion MRI Data with Non-Gaussian Noise Models and Spatial Regularization

    PubMed Central

    Canales-Rodríguez, Erick J.; Caruyer, Emmanuel; Aja-Fernández, Santiago; Radua, Joaquim; Yurramendi Mendizabal, Jesús M.; Iturria-Medina, Yasser; Melie-García, Lester; Alemán-Gómez, Yasser; Thiran, Jean-Philippe; Sarró, Salvador; Pomarol-Clotet, Edith; Salvador, Raymond

    2015-01-01

    Spherical deconvolution (SD) methods are widely used to estimate the intra-voxel white-matter fiber orientations from diffusion MRI data. However, while some of these methods assume a zero-mean Gaussian distribution for the underlying noise, its real distribution is known to be non-Gaussian and to depend on many factors such as the number of coils and the methodology used to combine multichannel MRI signals. Indeed, the two prevailing methods for multichannel signal combination lead to noise patterns better described by Rician and noncentral Chi distributions. Here we develop a Robust and Unbiased Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to Rician and noncentral Chi likelihood models. To quantify the benefits of using proper noise models, RUMBA-SD was compared with dRL-SD, a well-established method based on the RL algorithm for Gaussian noise. Another aim of the study was to quantify the impact of including a total variation (TV) spatial regularization term in the estimation framework. To do this, we developed TV spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The evaluation was performed by comparing various quality metrics on 132 three-dimensional synthetic phantoms involving different inter-fiber angles and volume fractions, which were contaminated with noise mimicking patterns generated by data processing in multichannel scanners. The results demonstrate that the inclusion of proper likelihood models leads to an increased ability to resolve fiber crossings with smaller inter-fiber angles and to better detect non-dominant fibers. The inclusion of TV regularization dramatically improved the resolution power of both techniques. The above findings were also verified in human brain data. PMID:26470024
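
    The core Richardson-Lucy (RL) update that such SD methods adapt can be sketched in a few lines; the linear model and dimensions below are illustrative, not the RUMBA-SD implementation:

```python
import numpy as np

# Richardson-Lucy iteration for a non-negative linear model b = A @ u,
# the multiplicative update underlying RL-based spherical deconvolution.
# A, u_true and the problem size are hypothetical.
rng = np.random.default_rng(0)
A = rng.random((50, 20))              # non-negative forward operator
u_true = rng.random(20)
b = A @ u_true                        # noiseless "measurements"

u = np.ones(20)                       # non-negative initial guess
for _ in range(2000):
    # u <- u * (A^T (b / (A u))) / (A^T 1): preserves non-negativity
    u *= (A.T @ (b / (A @ u))) / A.sum(axis=0)

rel_err = np.linalg.norm(A @ u - b) / np.linalg.norm(b)
```

    The multiplicative form keeps the estimate non-negative at every step, which is why the RL scheme adapts naturally to Rician and noncentral Chi likelihoods in the paper's framework.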

  5. Implications of Modern Non-Equilibrium Thermodynamics for Georgescu-Roegen's Macro-Economics: lessons from a comprehensive historical review

    NASA Astrophysics Data System (ADS)

    Poisson, Alexandre

    2011-12-01

    In the early 1970s, mathematician and economist Nicholas Georgescu-Roegen developed an alternative framework to macro-economics (his hourglass model) based on two principles of classical thermodynamics applied to the earth-system as a whole. The new model led him to the radical conclusion that "not only growth, but also a zero-growth state, nay, even a declining state which does not converge toward annihilation, cannot exist forever in a finite environment" (Georgescu-Roegen 1976, p.23). Georgescu-Roegen's novel approach long served as a devastating critique of standard neoclassical growth theories. It also helped establish the foundations for the new trans-disciplinary field of ecological economics. In recent decades, however, it has remained unclear whether revolutionary developments in "modern non-equilibrium thermodynamics" (Kondepudi and Prigogine 1998) refute some of Georgescu-Roegen's initial conclusions and provide fundamentally new lessons for very long-term macro-economic analysis. Based on a broad historical review of literature from many fields (thermodynamics, cosmology, ecosystems ecology and economics), I argue that Georgescu-Roegen's hourglass model is largely based on old misconceptions and assumptions from 19th century thermodynamics (including an out-dated cosmology) which make it very misleading. Ironically, these assumptions (path independence and linearity of the entropy function in particular) replicate the non-evolutionary thinking he seemed to despise in his colleagues. In light of modern NET, I propose a different model. Contrary to Georgescu-Roegen's hourglass, I do not assume the path independence of the entropy function. In the new model, achieving critical free energy rate density thresholds can abruptly increase the level of complexity and maximum remaining lifespan of stock-based civilizations.

  6. Neural networks: further insights into error function, generalized weights and others

    PubMed Central

    2016-01-01

    The article is a continuation of a previous one, providing further insights into the structure of neural networks (NN). Key concepts of NN including the activation function, error function, learning rate and generalized weights are introduced. NN topology can be visualized with the generic plot() function by passing a "nn" class object. Generalized weights assist interpretation of the NN model with respect to the independent effect of individual input variables. A large variance of generalized weights for a covariate indicates non-linearity of its independent effect. If the generalized weights of a covariate are approximately zero, the covariate is considered to have no effect on the outcome. Finally, prediction of new observations can be performed using the compute() function. Make sure that the feature variables passed to the compute() function are in the same order as in the training NN. PMID:27668220

  7. An Improved Instrument for Investigating Planetary Regolith Microstructure

    NASA Technical Reports Server (NTRS)

    Nelson, R. M.; Hapke, B. W.; Smythe, W. D.; Manatt, K. S.; Eddy, J.

    2005-01-01

    The Opposition Effect (OE) is the non-linear increase in the intensity of light scattered from a surface as phase angle approaches 0 deg. It is seen in laboratory experiments and in remote sensing observations of planetary surfaces. Understanding the OE is a requirement for fitting photometric models which produce meaningful results about regolith texture. Previously we have reported measurements from the JPL long arm goniometer and we have shown that this instrument enables us to distinguish between two distinct processes which create the opposition surges, Shadow Hiding (SHOE) and Coherent Backscattering (CBOE). SHOE arises because, as phase angle approaches zero, shadows cast by regolith grains on other grains become invisible to the observer. CBOE results from constructive interference between rays traveling the same path but in opposite directions. Additional information is included in the original extended abstract.

  8. High-performance computing on GPUs for resistivity logging of oil and gas wells

    NASA Astrophysics Data System (ADS)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented into software an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm, using NVIDIA CUDA technology and computing libraries, allow us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
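
    The Cholesky-based SLAE solve at the heart of the algorithm can be illustrated on a toy tridiagonal system; a dense factorization stands in here for the sparse CPU/GPU implementation:

```python
import numpy as np

# Toy symmetric positive-definite system resembling a 1D finite-element
# stiffness matrix; the real solver factorizes much larger sparse
# matrices on CPU and GPU.
n = 200
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
nnz = np.count_nonzero(A)               # 3n - 2 non-zero elements
b = np.ones(n)

L = np.linalg.cholesky(A)               # A = L @ L.T
y = np.linalg.solve(L, b)               # forward substitution
x = np.linalg.solve(L.T, y)             # back substitution

residual = np.linalg.norm(A @ x - b)    # should be near machine precision
```

    The number of non-zero elements, not the full matrix size, governs the cost of a sparse factorization, which is why the abstract analyzes run time against both quantities.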

  9. Characteristics of melting heat transfer during flow of Carreau fluid induced by a stretching cylinder.

    PubMed

    Hashim; Khan, Masood; Saleh Alshomrani, Ali

    2017-01-01

    This article provides a comprehensive analysis of the energy transportation by virtue of the melting process of high-temperature phase change materials. We have developed a two-dimensional model for the boundary layer flow of non-Newtonian Carreau fluid. It is assumed that the flow is caused by stretching of a cylinder in the axial direction with a linear velocity. Adequate local similarity transformations are employed to obtain a set of non-linear ordinary differential equations which govern the flow problem. Numerical solutions to the resultant non-dimensional boundary value problem are computed via the fifth-order Runge-Kutta Fehlberg integration scheme. The solutions are captured for both zero and non-zero curvature parameters, i.e., for flow over a flat plate or flow over a cylinder. The flow and heat transfer attributes are found to be influenced in an intricate manner by the melting parameter, the curvature parameter, the Weissenberg number, the power law index and the Prandtl number. We determined that one of the possible ways to boost the fluid velocity is to increase the melting parameter. Additionally, both the velocity of the fluid and the momentum boundary layer thickness are higher in the case of flow over a stretching cylinder. As expected, the magnitude of the skin friction and the rate of heat transfer decrease by raising the values of the melting parameter and the Weissenberg number.

  10. Tunneling conductance in semiconductor-superconductor hybrid structures

    NASA Astrophysics Data System (ADS)

    Stenger, John; Stanescu, Tudor D.

    2017-12-01

    We study the differential conductance for charge tunneling into a semiconductor wire-superconductor hybrid structure, which is actively investigated as a possible scheme for realizing topological superconductivity and Majorana zero modes. The calculations are done based on a tight-binding model of the heterostructure using both a Blonder-Tinkham-Klapwijk approach and a Keldysh nonequilibrium Green's function method. The dependence of various tunneling conductance features on the coupling strength between the semiconductor and the superconductor, the tunnel barrier height, and temperature is systematically investigated. We find that treating the parent superconductor as an active component of the system, rather than a passive source of Cooper pairs, has qualitative consequences regarding the low-energy behavior of the differential conductance. In particular, the presence of subgap states in the parent superconductor, due to disorder and finite magnetic fields, leads to characteristic particle-hole asymmetric features and to the breakdown of the quantization of the zero-bias peak associated with the presence of Majorana zero modes localized at the ends of the wire. The implications of these findings for the effort toward the realization of Majorana bound states with true non-Abelian properties are discussed.

  11. Macroscopic Fluctuation Theory for Stationary Non-Equilibrium States

    NASA Astrophysics Data System (ADS)

    Bertini, L.; de Sole, A.; Gabrielli, D.; Jona-Lasinio, G.; Landim, C.

    2002-05-01

    We formulate a dynamical fluctuation theory for stationary non-equilibrium states (SNS) which is tested explicitly in stochastic models of interacting particles. In our theory a crucial role is played by the time reversed dynamics. Within this theory we derive the following results: the modification of the Onsager-Machlup theory in the SNS; a general Hamilton-Jacobi equation for the macroscopic entropy; a non-equilibrium, nonlinear fluctuation dissipation relation valid for a wide class of systems; an H theorem for the entropy. We discuss in detail two models of stochastic boundary driven lattice gases: the zero range and the simple exclusion processes. In the first model the invariant measure is explicitly known and we verify the predictions of the general theory. For the one dimensional simple exclusion process, as recently shown by Derrida, Lebowitz, and Speer, it is possible to express the macroscopic entropy in terms of the solution of a nonlinear ordinary differential equation; by using the Hamilton-Jacobi equation, we obtain a logically independent derivation of this result.

  12. Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies

    NASA Astrophysics Data System (ADS)

    Williams, Paul; Howe, Nicola; Gregory, Jonathan; Smith, Robin; Joshi, Manoj

    2016-04-01

    In climate simulations, the impacts of the sub-grid scales on the resolved scales are conventionally represented using deterministic closure schemes, which assume that the impacts are uniquely determined by the resolved scales. Stochastic parameterization relaxes this assumption, by sampling the sub-grid variability in a computationally inexpensive manner. This presentation shows that the simulated climatological state of the ocean is improved in many respects by implementing a simple stochastic parameterization of ocean eddies into a coupled atmosphere-ocean general circulation model. Simulations from a high-resolution, eddy-permitting ocean model are used to calculate the eddy statistics needed to inject realistic stochastic noise into a low-resolution, non-eddy-permitting version of the same model. A suite of four stochastic experiments is then run to test the sensitivity of the simulated climate to the noise definition, by varying the noise amplitude and decorrelation time within reasonable limits. The addition of zero-mean noise to the ocean temperature tendency is found to have a non-zero effect on the mean climate. Specifically, in terms of the ocean temperature and salinity fields both at the surface and at depth, the noise reduces many of the biases in the low-resolution model and causes it to more closely resemble the high-resolution model. The variability of the strength of the global ocean thermohaline circulation is also improved. It is concluded that stochastic ocean perturbations can yield reductions in climate model error that are comparable to those obtained by refining the resolution, but without the increased computational cost. Therefore, stochastic parameterizations of ocean eddies have the potential to significantly improve climate simulations. Reference PD Williams, NJ Howe, JM Gregory, RS Smith, and MM Joshi (2016) Improved Climate Simulations through a Stochastic Parameterization of Ocean Eddies. Journal of Climate, under revision.
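
    The kind of zero-mean stochastic perturbation described above can be sketched as red (AR(1)) noise added to a temperature tendency; the amplitude and decorrelation time below are hypothetical, not the values used in the study:

```python
import numpy as np

# Zero-mean AR(1) ("red") noise of the kind added to an ocean temperature
# tendency in simple stochastic eddy parameterizations. Parameters are
# illustrative: time step dt, decorrelation time tau, amplitude sigma.
rng = np.random.default_rng(42)
dt, tau, sigma = 1.0, 30.0, 0.1
phi = np.exp(-dt / tau)                  # AR(1) autoregression coefficient
n = 100_000

eta = np.empty(n)
eta[0] = 0.0
for k in range(1, n):
    # scaling by sqrt(1 - phi^2) keeps the stationary std equal to sigma
    eta[k] = phi * eta[k-1] + sigma * np.sqrt(1 - phi**2) * rng.standard_normal()
```

    The noise is zero-mean by construction, yet in a nonlinear model its rectified effect on the mean state need not vanish, which is the mechanism behind the bias reductions reported above.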

  13. A new perspective on Quantum Finance using the Black-Scholes pricing model

    NASA Astrophysics Data System (ADS)

    Dieng, Lamine

    2007-03-01

    Options are divided into two types: the first is called a call option and the second a put option, and these options are offered to stock holders in order to hedge their positions against risky fluctuations of the stock price. Due to fluctuations of the stock price, options can sometimes be found deep in the money, at the money, or out of the money. An option is deep in the money when the option's holder has a positive expected payoff, at the money when the holder has a zero expected payoff, and out of the money when the payoff is negative. In this work, we assume the stock price to be described by the well-known Black-Scholes model, sometimes called the multiplicative model. Using Ito calculus and martingale and supermartingale theories, we investigated the Black-Scholes pricing equation at the money (X (stock price) = K (strike price)), when the expected payoff of the option's holder is zero. We also hedged the Black-Scholes pricing equation in the limit when delta is zero to obtain the non-relativistic time-independent Schroedinger equation of quantum mechanics. We compared the two equations and found the diffusion constant to be a function of the stock price, in contrast to the Bachelier model we have worked on earlier. We solved the Schroedinger equation and found a dependence between interest rate, volatility and strike price at the money.
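
    For concreteness, the standard Black-Scholes call price at the money is positive even though the option's intrinsic value there is zero; a minimal sketch with hypothetical market parameters:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    # Black-Scholes price of a European call under the multiplicative model.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At the money (S == K) the intrinsic value is zero, but the time value
# keeps the price positive. Inputs are illustrative.
price = bs_call(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
```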

  14. Effective group index of refraction in non-thermal plasma photonic crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mousavi, A.; Sadegzadeh, S., E-mail: sadegzadeh@azaruniv.edu

    Plasma photonic crystals (PPCs) are periodic arrays that consist of alternating layers of micro-plasma and dielectric. These structures are used to control the propagation of electromagnetic waves. This paper presents a survey of research on the effect of non-thermal plasma with a bi-Maxwellian distribution function on a one-dimensional PPC. A plasma with temperature anisotropy is not in thermodynamic equilibrium and can be described by the bi-Maxwellian distribution function. Using the Kronig-Penney model, the dispersion relation of electromagnetic modes in a one-dimensional non-thermal PPC (NPPC) is derived. The band structure, group velocity v_g, and effective group index of refraction n_eff(g) of such an NPPC structure with TeO_2 as the material of the dielectric layers have been studied. Negative group velocity and negative n_eff(g), which indicate an anomalous behaviour of the PPCs, are also observed in the NPPC structures. Our numerical results provide confirmatory evidence that, unlike PPCs, NPPCs exhibit finite group velocities and non-zero effective group indices of refraction in photonic band gaps (PBGs) that lie in certain ranges of normalized frequency. In other words, inside the PBGs of NPPCs, n_eff(g) becomes non-zero and photons travel with a finite group velocity. In this special case, this velocity varies alternately between 20c and negative values of the order of 10^3 c (c is the speed of light in vacuum).
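    The paper's NPPC dispersion relation includes the plasma response, which is not reproduced here. As a simpler illustration of a Kronig-Penney-type dispersion relation, the sketch below treats a lossless two-layer dielectric crystal: frequencies where the right-hand side exceeds 1 in magnitude admit no real Bloch wavenumber and lie in a band gap. All material and geometry values are illustrative assumptions:

```python
import numpy as np

c = 3.0e8                 # speed of light (m/s)
n1, n2 = 1.0, 2.2         # layer refractive indices (n2 ~ TeO2, illustrative)
a = b = 0.5e-6            # layer thicknesses (m)

def rhs(omega):
    """Right-hand side of the Kronig-Penney-type dispersion relation
    cos(q*(a+b)) = cos(k1*a)*cos(k2*b)
                   - 0.5*(k1/k2 + k2/k1)*sin(k1*a)*sin(k2*b),
    with k_i = n_i * omega / c. |rhs| > 1 marks a photonic band gap."""
    k1, k2 = n1 * omega / c, n2 * omega / c
    return (np.cos(k1 * a) * np.cos(k2 * b)
            - 0.5 * (k1 / k2 + k2 / k1) * np.sin(k1 * a) * np.sin(k2 * b))

omegas = np.linspace(1e12, 4e15, 20_000)   # angular frequencies to scan
in_gap = np.abs(rhs(omegas)) > 1.0         # True inside photonic band gaps
```

    Scanning `rhs` over frequency reproduces the familiar alternation of pass bands and gaps; the non-thermal plasma layers in the paper modify where those gaps fall and how the group index behaves inside them.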

  15. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    NASA Astrophysics Data System (ADS)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
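    A minimal numerical sketch of the longitudinal measurement is the discretized line integral of the parallel projection of the field along a straight line (the paper's actual discretization is more elaborate; the test field and endpoints below are illustrative):

```python
import numpy as np

def longitudinal_integral(field, p, q, n=2001):
    """Longitudinal measurement: line integral of the component of
    `field` parallel to the straight line from point p to point q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    length = np.linalg.norm(q - p)
    tangent = (q - p) / length                 # unit tangent of the line
    t = np.linspace(0.0, 1.0, n)
    pts = p + t[:, None] * (q - p)             # sample points on the line
    f_par = field(pts) @ tangent               # f . tau at each sample
    ds = length / (n - 1)
    return ds * (0.5 * f_par[0] + f_par[1:-1].sum() + 0.5 * f_par[-1])

# Sanity check: for a gradient field f = grad(phi) with phi = x^2 + y^2,
# the longitudinal integral equals phi(q) - phi(p) along any straight line.
grad_phi = lambda pts: 2.0 * pts
I = longitudinal_integral(grad_phi, (0.0, 0.0), (1.0, 2.0))
# I = phi(1, 2) - phi(0, 0) = 5
```

    Stacking many such integrals over a fan of lines yields the linear system whose inversion, with the sparsity constraints described above, recovers the field.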

  16. Zero-Inertial Recession for a Kinematic Wave Model

    USDA-ARS?s Scientific Manuscript database

    Kinematic-wave models of surface irrigation assume a fixed relationship between depth and discharge (typically, normal depth). When surface irrigation inflow is cut off, the calculated upstream flow depth goes to zero, since the discharge is zero. For short time steps, use of the Kinematic Wave mode...

  17. [Child abuse in Tlaxcala: a case-control study].

    PubMed

    Herrada-Huidobro, A; Nazar-Beutelspacher, A; Cassaball-Núñez, M; Vega-Ramos, R; Nava-Cruz, C B

    1992-01-01

    A longitudinal, retrospective, and descriptive study of child abuse was carried out in the hospitals of the Tlaxcala Secretariat of Health, Mexico. The information was obtained from the charts of children hospitalized between January 1 and November 30, 1991. The charts included were those of children aged zero to 14 years with injuries, poisoning, or grade II-III malnutrition. Four child-abuse criteria were established: physical, sexual, non-organic malnutrition, and mixed (physical plus non-organic malnutrition). Two control groups were defined. Different patterns of accidental and non-accidental injuries, malnutrition, and poisoning were observed between the cases and the control groups. The study provides useful information for the integral diagnosis of child abuse in hospitalized children.

  18. GSFC specification electronic data processing magnetic recording tape

    NASA Technical Reports Server (NTRS)

    Tinari, D. F.; Perry, J. L.

    1980-01-01

    The design requirements are given for magnetic oxide coated, electronic data processing tape, wound on reels. Magnetic recording tape types covered by this specification are intended for use on digital tape transports using the Non-Return-to-Zero-change-on-ones (NRZI) recording method for recording densities up to and including 800 characters per inch (cpi) and the Phase-Encoding (PE) recording method for a recording density of 1600 cpi.
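    The NRZI (change-on-ones) method can be sketched in a few lines: each 1 bit is recorded as a transition of the signal level, and each 0 bit leaves the level unchanged. A minimal illustrative encoder (starting level and representation are assumptions, not part of the specification):

```python
def nrzi_encode(bits, start_level=0):
    """NRZI change-on-ones: toggle the level on each 1 bit,
    hold it on each 0 bit. Returns the level per bit cell."""
    level, out = start_level, []
    for b in bits:
        if b == 1:
            level ^= 1   # a 1 is recorded as a transition
        out.append(level)
    return out

encoded = nrzi_encode([1, 0, 1, 1, 0, 0, 1])
# encoded -> [1, 1, 0, 1, 1, 1, 0]
```

    A consequence of this scheme is that long runs of zeros produce no transitions at all, which complicates clock recovery at higher densities; phase encoding, being self-clocking, avoids this and is used at 1600 cpi.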

  19. Sum rules for zeros and intersections of Bessel functions from quantum mechanical perturbation theory

    NASA Astrophysics Data System (ADS)

    Pedersen, Thomas Garm

    2018-07-01

    Bessel functions play an important role for quantum states in spherical and cylindrical geometries. In cases of perfect confinement, the energy of Schrödinger and massless Dirac fermions is determined by the zeros and intersections of Bessel functions, respectively. In an external electric field, standard perturbation theory therefore expresses the polarizability as a sum over these zeros or intersections. Both non-relativistic and relativistic polarizabilities can be calculated analytically, however. Hence, by equating analytical expressions to perturbation expansions, several sum rules for the zeros and intersections of Bessel functions emerge.
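    As an illustration of the kind of identity involved, Rayleigh's classical sum rule for Bessel-function zeros, sum over k of 1/j_{nu,k}^2 = 1/(4(nu+1)), can be checked numerically. This standard identity is shown only as an example of a sum over zeros, not as one of the paper's new results; SciPy's `jn_zeros` is assumed available:

```python
from scipy.special import jn_zeros

# Rayleigh's sum rule: sum_k 1/j_{nu,k}^2 = 1/(4*(nu+1)).
# For nu = 0, the sum over the zeros j_{0,k} of J_0 converges to 1/4.
zeros = jn_zeros(0, 5000)            # first 5000 positive zeros of J_0
partial = (1.0 / zeros**2).sum()
# partial -> 0.25; the omitted tail is of order 1/(pi^2 * 5000)
```

    The sum rules derived in the paper have the same structure: an infinite sum over zeros (or intersections) equated to a closed-form expression obtained from the analytical polarizability.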

  20. The Standard Model: how far can it go and how can we tell?

    PubMed

    Butterworth, J M

    2016-08-28

    The Standard Model of particle physics encapsulates our current best understanding of physics at the smallest distances and highest energies. It incorporates quantum electrodynamics (the quantized version of Maxwell's electromagnetism) and the weak and strong interactions, and has survived unmodified for decades, save for the inclusion of non-zero neutrino masses after the observation of neutrino oscillations in the late 1990s. It describes a vast array of data over a wide range of energy scales. I review a selection of these successes, including the remarkably successful prediction of a new scalar boson, a qualitatively new kind of object observed in 2012 at the Large Hadron Collider. New calculational techniques and experimental advances challenge the Standard Model across an ever-wider range of phenomena, now extending significantly above the electroweak symmetry breaking scale. I will outline some of the consequences of these new challenges, and briefly discuss what is still to be found. This article is part of the themed issue 'Unifying physics and technology in light of Maxwell's equations'. © 2016 The Author(s).
