Sample records for fixed scale approach

  1. Floating-to-Fixed-Point Conversion for Digital Signal Processors

    NASA Astrophysics Data System (ADS)

    Menard, Daniel; Chillet, Daniel; Sentieys, Olivier

    2006-12-01

    Digital signal processing applications are specified with floating-point data types but they are usually implemented in embedded systems with fixed-point arithmetic to minimise cost and power consumption. Thus, methodologies that automatically establish the fixed-point specification are required to reduce the application time-to-market. In this paper, a new methodology for floating-to-fixed-point conversion is proposed for software implementations. The aim of our approach is to determine the fixed-point specification which minimises the code execution time for a given accuracy constraint. Compared to previous methodologies, our approach takes into account the DSP architecture to optimise the fixed-point formats, and the floating-to-fixed-point conversion process is coupled with the code generation process. The fixed-point data types and the position of the scaling operations are optimised to reduce the code execution time. To evaluate the fixed-point computation accuracy, an analytical approach is used to reduce the optimisation time compared to the existing methods based on simulation. The methodology stages are described and several experimental results are presented to underline the efficiency of this approach.
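
    To make the range-driven word-length choice concrete, here is a minimal sketch of one floating-to-fixed-point quantization step of the kind such methodologies automate. It is not the authors' tool: the Q-format policy, the function name, and the 16-bit word length are illustrative assumptions.

      import numpy as np

      def to_fixed_point(x, total_bits=16):
          """Quantize a float array to a Qm.n format chosen from its range.

          Illustrative only: real conversion tools also weigh accuracy
          constraints and the target DSP's native word lengths."""
          # Integer bits needed to cover the largest magnitude without overflow.
          m = max(0, int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))))
          n = total_bits - 1 - m                     # fractional bits (1 sign bit)
          scale = 2.0 ** n
          codes = np.clip(np.round(x * scale),       # round to nearest code
                          -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1)
          return codes / scale, (m, n)

      signal = 3.7 * np.sin(np.linspace(0, 2 * np.pi, 8))
      fixed, fmt = to_fixed_point(signal)
      print(fmt, np.max(np.abs(signal - fixed)))     # error bounded by 2**-(n+1)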

  2. Degeneracy relations in QCD and the equivalence of two systematic all-orders methods for setting the renormalization scale

    DOE PAGES

    Bi, Huan-Yu; Wu, Xing-Gang; Ma, Yang; ...

    2015-06-26

    The Principle of Maximum Conformality (PMC) eliminates QCD renormalization scale-setting uncertainties using fundamental renormalization group methods. The resulting scale-fixed pQCD predictions are independent of the choice of renormalization scheme and show rapid convergence. The coefficients of the scale-fixed couplings are identical to the corresponding conformal series with zero β-function. Two all-orders methods for systematically implementing the PMC scale-setting procedure for existing high-order calculations are discussed in this article. One implementation is based on the PMC-BLM correspondence (PMC-I); the other, more recent, method (PMC-II) uses the Rδ-scheme, a systematic generalization of the minimal subtraction renormalization scheme. Both approaches satisfy all of the principles of the renormalization group and lead to scale-fixed and scheme-independent predictions at each finite order. In this work, we show that the PMC-I and PMC-II scale-setting methods are in practice equivalent to each other. We illustrate this equivalence for the four-loop calculations of the annihilation ratio Re+e− and the Higgs partial width Γ(H→bb¯). Both methods lead to the same resummed (‘conformal’) series up to all orders. The small scale differences between the two approaches are reduced as additional renormalization group {βi}-terms in the pQCD expansion are taken into account. In addition, we show that the special degeneracy relations, which underlie the equivalence of the two PMC approaches and the resulting conformal features of the pQCD series, are in fact general properties of non-Abelian gauge theory.

  3. How nonperturbative is the infrared regime of Landau gauge Yang-Mills correlators?

    NASA Astrophysics Data System (ADS)

    Reinosa, U.; Serreau, J.; Tissier, M.; Wschebor, N.

    2017-07-01

    We study the Landau gauge correlators of Yang-Mills fields for infrared Euclidean momenta in the context of a massive extension of the Faddeev-Popov Lagrangian which, we argue, underlies a variety of continuum approaches. Standard (perturbative) renormalization group techniques with a specific, infrared-safe renormalization scheme produce so-called decoupling and scaling solutions for the ghost and gluon propagators, which correspond to nontrivial infrared fixed points. The decoupling fixed point is infrared stable and weakly coupled, while the scaling fixed point is unstable and generically strongly coupled except for low dimensions d → 2. Under the assumption that such a scaling fixed point exists beyond one-loop order, we find that the corresponding ghost and gluon scaling exponents are, respectively, 2αF = 2 − d and 2αG = d at all orders of perturbation theory in the present renormalization scheme. We discuss the relation between the ghost wave function renormalization, the gluon screening mass, the scale of spectral positivity violation, and the gluon mass parameter. We also show that this scaling solution does not realize the standard Becchi-Rouet-Stora-Tyutin symmetry of the Faddeev-Popov Lagrangian. Finally, we discuss our findings in relation to the results of nonperturbative continuum methods.

  4. Resummation of jet veto logarithms at N3LLa + NNLO for W+W− production at the LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, S.; Jaiswal, P.; Li, Ye

    We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.

  5. Resummation of jet veto logarithms at N3LLa + NNLO for W+W− production at the LHC

    DOE PAGES

    Dawson, S.; Jaiswal, P.; Li, Ye; ...

    2016-12-01

    We compute the resummed on-shell W+W- production cross section under a jet veto at the LHC to partial N3LL order matched to the fixed-order NNLO result. Differential NNLO cross sections are obtained from an implementation of qT subtraction in Sherpa. The two-loop virtual corrections to the qq¯→W+W- amplitude, used in both fixed-order and resummation predictions, are extracted from the public code qqvvamp. We perform resummation using soft collinear effective theory, with approximate beam functions where only the logarithmic terms are included at two-loop. In addition to scale uncertainties from the hard matching scale and the factorization scale, rapidity scale variations are obtained within the analytic regulator approach. Our resummation results show a decrease in the jet veto cross section compared to NNLO fixed-order predictions, with reduced scale uncertainties compared to NNLL+NLO resummed predictions. We include the loop-induced gg contribution with jet veto resummation to NLL+LO. The prediction shows good agreement with recent LHC measurements.

  6. Co-C and Pd-C Fixed Points for the Evaluation of Facilities and Scales Realization at INRIM and NMC

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Wang, L.; Girard, F.; Ang, S. H.

    2014-04-01

    Two hybrid cells for realizing the Co-C and Pd-C fixed points, constructed at Istituto Nazionale di Ricerca Metrologica (INRIM), were used for an evaluation of the facilities and procedures adopted by INRIM and the National Metrology Institute of Singapore (NMC) for the realization of the solid-liquid phase transitions of high-temperature fixed points and for determining their transition temperatures. Four different furnaces were used for the investigations, i.e., two single-zone furnaces, one of them of the direct-heating type, and two identical three-zone furnaces. The transition temperatures were measured at both institutes by adopting different procedures for realizing the radiation scales, i.e., a scheme based on the extrapolation of fixed-point interpolated scales at INRIM and an International Temperature Scale of 1990 (ITS-90) approach at NMC. The point of inflection (POI) of the melting curves was determined and assumed as a practical representation of the melting temperature. Different methods for deriving the POI were used, and differences as large as some hundredths of a kelvin were found between the different approaches. The POIs of the different melting curves were analyzed with respect to the different possible operative conditions with the aim of deriving reproducibility figures to improve the estimated uncertainty. As regards the inter-institute comparison, differences of 0.13 K and 0.29 K were found between the INRIM and NMC determinations at the Co-C and Pd-C points, respectively. Such differences are compatible with the combined standard uncertainties of the comparison, which are estimated to be 0.33 K and 0.36 K at the Co-C and Pd-C points, respectively.
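
    As a sketch of one common recipe for extracting the point of inflection discussed above (the smoothing window and derivative rule are assumptions, not the INRIM/NMC procedure):

      import numpy as np

      def point_of_inflection(t, temp):
          """Locate the POI of a melting curve temp(t) as the extremum of
          its first derivative (where the second derivative crosses zero).
          A light moving average suppresses noise before differentiating."""
          smooth = np.convolve(temp, np.ones(7) / 7, mode="same")
          dtemp = np.gradient(smooth, t)
          i = np.argmax(dtemp[3:-3]) + 3   # skip edge artifacts of "same" mode
          return t[i], temp[i]

    Different smoothing and derivative choices shift the POI by small amounts, consistent with the hundredths-of-a-kelvin method sensitivity the abstract reports.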

  7. Ecogenomic sensor reveals controls on N2-fixing microorganisms in the North Pacific Ocean.

    PubMed

    Robidart, Julie C; Church, Matthew J; Ryan, John P; Ascani, François; Wilson, Samuel T; Bombar, Deniz; Marin, Roman; Richards, Kelvin J; Karl, David M; Scholin, Christopher A; Zehr, Jonathan P

    2014-06-01

    Nitrogen-fixing microorganisms (diazotrophs) are keystone species that reduce atmospheric dinitrogen (N2) gas to fixed nitrogen (N), thereby accounting for much of N-based new production annually in the oligotrophic North Pacific. However, current approaches to study N2 fixation provide relatively limited spatiotemporal sampling resolution; hence, little is known about the ecological controls on these microorganisms or the scales over which they change. In the present study, we used a drifting robotic gene sensor to obtain high-resolution data on the distributions and abundances of N2-fixing populations over small spatiotemporal scales. The resulting measurements demonstrate that concentrations of N2 fixers can be highly variable, changing in abundance by nearly three orders of magnitude in less than 2 days and 30 km. Concurrent shipboard measurements and long-term time-series sampling uncovered a striking and previously unrecognized correlation between phosphate, which is undergoing long-term change in the region, and N2-fixing cyanobacterial abundances. These results underscore the value of high-resolution sampling and its applications for modeling the effects of global change.

  8. Network rewiring dynamics with convergence towards a star network

    PubMed Central

    Dick, G.; Parry, M.

    2016-01-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440–442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed-size networks that transitions from regular, through small-world and scale-free, to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach. PMID:27843396

  9. Network rewiring dynamics with convergence towards a star network.

    PubMed

    Whigham, P A; Dick, G; Parry, M

    2016-10-01

    Network rewiring as a method for producing a range of structures was first introduced in 1998 by Watts & Strogatz (Nature 393, 440-442. (doi:10.1038/30918)). This approach allowed a transition from regular through small-world to a random network. The subsequent interest in scale-free networks motivated a number of methods for developing rewiring approaches that converged to scale-free networks. This paper presents a rewiring algorithm (RtoS) for undirected, non-degenerate, fixed-size networks that transitions from regular, through small-world and scale-free, to star-like networks. Applications of the approach to models for the spread of infectious disease and fixation time for a simple genetics model are used to demonstrate the efficacy and application of the approach.
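
    The RtoS algorithm itself is specified in the paper; the toy sketch below (networkx-based, with assumed parameters) only illustrates the general mechanism of size-preserving rewiring that biases edges toward a hub until a star-like structure emerges.

      import random
      import networkx as nx

      def rewire_towards_star(n=50, k=4, steps=3000, hub=0, seed=1):
          """Start from a regular ring lattice and repeatedly reattach one
          end of a random edge to a designated hub, keeping the graph
          simple and connected (node and edge counts are preserved)."""
          rng = random.Random(seed)
          g = nx.watts_strogatz_graph(n, k, 0, seed=seed)  # p=0: regular lattice
          for _ in range(steps):
              u, v = rng.choice(list(g.edges()))
              if hub in (u, v) or g.has_edge(u, hub):
                  continue
              g.remove_edge(u, v)
              g.add_edge(u, hub)
              if not nx.is_connected(g):                   # undo degenerate moves
                  g.remove_edge(u, hub)
                  g.add_edge(u, v)
          return g

      print(max(dict(rewire_towards_star().degree()).values()))  # hub dominates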

  10. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    PubMed

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for the film coating process have been established for each type of film coating equipment, based on thermodynamic and mechanical analyses, over several decades. The objective of the present study was to establish a versatile scale-up approach for the film coating process, applicable to commercial production, that is based on a critical quality attribute (CQA) identified using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiments (DoE) approach were performed to find a suitable CQA from among surface roughness, contact angle, color difference, and coating film properties measured by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for the film coating process.

  11. An improved maximum power point tracking method for a photovoltaic system

    NASA Astrophysics Data System (ADS)

    Ouoba, David; Fakkar, Abderrahim; El Kouari, Youssef; Dkhichi, Fayrouz; Oukarfi, Benyounes

    2016-06-01

    In this paper, an improved auto-scaling variable step-size Maximum Power Point Tracking (MPPT) method for a photovoltaic (PV) system is proposed. To achieve simultaneously a fast dynamic response and stable steady-state power, a first improvement was made to the step-size scaling function of the duty cycle that controls the converter. A second algorithm was then proposed to address the wrong decisions that may be made at an abrupt change of irradiation. The proposed auto-scaling variable step-size approach was compared to various other approaches from the literature, such as the classical fixed step-size, variable step-size, and a recent auto-scaling variable step-size maximum power point tracking approach. Simulation results obtained with MATLAB/SIMULINK are given and discussed for validation.
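
    A generic variable step-size perturb-and-observe update of the kind being improved upon is sketched below; the gain k, the limits, and the sign convention (which depends on the converter topology) are assumptions, not the authors' algorithm.

      def mppt_update(v, p, v_prev, p_prev, duty, k=0.05, d_min=0.05, d_max=0.95):
          """One variable step-size P&O iteration: the duty-cycle step is
          scaled by |dP/dV|, large far from the maximum power point and
          small near it, trading tracking speed against steady-state ripple."""
          dp, dv = p - p_prev, v - v_prev
          if dv == 0.0:
              return duty
          step = k * abs(dp / dv)               # auto-scaled step size
          # Assumes a boost converter: raising duty lowers the PV voltage,
          # so dP/dV > 0 (left of the MPP) calls for a smaller duty cycle.
          duty += step if dp / dv < 0 else -step
          return min(d_max, max(d_min, duty))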

  12. A fuzzy-logic-based controller for methane production in anaerobic fixed-film reactors.

    PubMed

    Robles, A; Latrille, E; Ruano, M V; Steyer, J P

    2017-01-01

    The main objective of this work was to develop a controller for biogas production in continuous anaerobic fixed-bed reactors, which used effluent total volatile fatty acids (VFA) concentration as a control input in order to prevent process acidification in closed loop. To this aim, a fuzzy-logic-based control system was developed, tuned and validated in a pilot-scale anaerobic fixed-bed reactor treating industrial winery wastewater. The proposed controller varied the flow rate of wastewater entering the system as a function of the gaseous outflow rate of methane and the VFA concentration. Simulation results show that the proposed controller is capable of achieving great process stability even when operating at high VFA concentrations. Pilot results showed the potential of this control approach to keep the process working properly under conditions similar to those expected at full-scale plants.
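
    A zero-order Sugeno-style sketch of the control idea follows; the membership breakpoints, units, and rule consequents are invented placeholders, not the tuned rule base of the paper.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function rising on [a, b], falling on [b, c]."""
          return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

      def feed_flow_correction(vfa, q_ch4):
          """Weighted average of rule outputs: raise the influent flow when
          VFA is low and methane outflow healthy, cut it when VFA climbs."""
          low_vfa, high_vfa = tri(vfa, 0.0, 0.5, 1.5), tri(vfa, 1.0, 3.0, 6.0)
          low_gas, high_gas = tri(q_ch4, 0.0, 10.0, 30.0), tri(q_ch4, 20.0, 60.0, 100.0)
          rules = [(min(low_vfa, high_gas), +0.10),   # stable, productive
                   (min(low_vfa, low_gas), +0.02),    # stable, slow
                   (min(high_vfa, high_gas), -0.05),  # early acidification sign
                   (min(high_vfa, low_gas), -0.15)]   # acidification risk
          w = sum(r[0] for r in rules)
          return sum(wi * out for wi, out in rules) / w if w else 0.0

      print(feed_flow_correction(vfa=0.4, q_ch4=70.0))   # > 0: feed more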

  13. Semihierarchical quantum repeaters based on moderate lifetime quantum memories

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Zhou, Zong-Quan; Hua, Yi-Lin; Li, Chuan-Feng; Guo, Guang-Can

    2017-01-01

    The construction of large-scale quantum networks relies on the development of practical quantum repeaters. Many approaches have been proposed with the goal of outperforming the direct transmission of photons, but most of them are inefficient or difficult to implement with current technology. Here, we present a protocol that uses a semihierarchical structure to improve the entanglement distribution rate while reducing the requirement of memory time to a range of tens of milliseconds. This protocol can be implemented with a fixed distance of elementary links and fixed requirements on quantum memories, which are independent of the total distance. This configuration is especially suitable for scalable applications in large-scale quantum networks.

  14. Turbulent compressible fluid: Renormalization group analysis, scaling regimes, and anomalous scaling of advected scalar fields

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.; Kostenko, M. M.; Lučivjanský, T.

    2017-03-01

    We study a model of fully developed turbulence of a compressible fluid, based on the stochastic Navier-Stokes equation, by means of the field-theoretic renormalization group. In this approach, scaling properties are related to the fixed points of the renormalization group equations. Previous analysis of this model near the real-world space dimension 3 identified a scaling regime [N. V. Antonov et al., Theor. Math. Phys. 110, 305 (1997), 10.1007/BF02630456]. The aim of the present paper is to explore the existence of additional regimes, which could not be found using the direct perturbative approach of the previous work, and to analyze the crossover between different regimes. It seems possible to determine them near the special value of space dimension 4 in the framework of a double y and ɛ expansion, where y is the exponent associated with the random force and ɛ = 4 − d is the deviation from the space dimension 4. Our calculations show that there exists an additional fixed point that governs scaling behavior. Turbulent advection of a passive scalar (density) field by this velocity ensemble is considered as well. We demonstrate that various correlation functions of the scalar field exhibit anomalous scaling behavior in the inertial-convective range. The corresponding anomalous exponents, identified as scaling dimensions of certain composite fields, can be systematically calculated as a series in y and ɛ. All calculations are performed in the leading one-loop approximation.

  15. Recombinant Human Factor IX Produced from Transgenic Porcine Milk

    PubMed Central

    Lee, Meng-Hwan; Lin, Yin-Shen; Tu, Ching-Fu; Yen, Chon-Ho

    2014-01-01

    Production of biopharmaceuticals from transgenic animal milk is a cost-effective method for highly complex proteins that cannot be efficiently produced using conventional systems such as microorganisms or animal cells. Yields of recombinant human factor IX (rhFIX) produced from transgenic porcine milk under the control of the bovine α-lactalbumin promoter reached 0.25 mg/mL. The rhFIX protein was purified from transgenic porcine milk using a three-column purification scheme after a precipitation step to remove casein. The purified protein had high specific activity and a low ratio of the active form (FIXa). The purified rhFIX had 11.9 γ-carboxyglutamic acid (Gla) residues/mol protein, which approached full occupancy of the 12 potential sites in the Gla domain. The rhFIX was shown to have a higher isoelectric point and lower sialic acid content than plasma-derived FIX (pdFIX). The rhFIX had the same N-glycosylation sites and phosphorylation sites as pdFIX, but had a higher specific activity. These results suggest that rhFIX produced from porcine milk is physiologically active and they support the use of transgenic animals as bioreactors for industrial scale production in milk. PMID:24955355

  16. Percolation in random-Sierpiński carpets: A real space renormalization group approach

    NASA Astrophysics Data System (ADS)

    Perreau, Michel; Peiro, Joaquina; Berthier, Serge

    1996-11-01

    The site percolation transition in random Sierpiński carpets is investigated by real-space renormalization. The fixed point is not unique as in regular, translationally invariant lattices, but depends on the number k of segmentation steps of the generation process of the fractal. It is shown that, for each scale-invariance ratio n, the sequence of fixed points pn,k increases with k and converges as k → ∞ toward a limit pn strictly less than 1. Moreover, in such scale-invariant structures, the percolation threshold does not depend only on the scale-invariance ratio n, but also on the scale. The sequences pn,k and pn are calculated for n=4, 8, 16, 32, and 64, and for k=1 to k=11, and k=∞. The corresponding thermal exponent sequence νn,k is calculated for n=8 and 16, and for k=1 to k=5, and k=∞. Suggestions are made for an experimental test in physical self-similar structures.
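
    The fixed-point machinery is easiest to see on a textbook example. The sketch below uses the diamond hierarchical lattice (not the random Sierpiński carpet of the paper) to find the nontrivial fixed point p* and the thermal exponent ν from a one-parameter RG map.

      import numpy as np
      from scipy.optimize import brentq

      def rg_map(p):
          """One real-space RG step for bond percolation on the diamond
          hierarchical lattice (length-rescaling factor b = 2): each bond
          is replaced by two parallel chains of two bonds."""
          return 1.0 - (1.0 - p ** 2) ** 2

      # Nontrivial fixed point p* solving R(p) = p, then nu = ln b / ln R'(p*).
      p_star = brentq(lambda p: rg_map(p) - p, 0.1, 0.9)
      h = 1e-6
      r_prime = (rg_map(p_star + h) - rg_map(p_star - h)) / (2 * h)
      nu = np.log(2.0) / np.log(r_prime)
      print(p_star, nu)   # ~0.6180 (golden ratio conjugate) and ~1.63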

  17. Thermodynamic modeling of small scale biomass gasifiers: Development and assessment of the ''Multi-Box'' approach.

    PubMed

    Vakalis, Stergios; Patuzzi, Francesco; Baratieri, Marco

    2016-04-01

    Modeling can be a powerful tool for designing and optimizing gasification systems. Modeling of small-scale/fixed-bed biomass gasifiers has attracted particular interest due to their growing commercial use. Fixed-bed gasifiers are characterized by a wide range of operational conditions and are multi-zoned processes. The reactants are distributed in different phases, and the products from each zone influence the following process steps and thus the composition of the final products. The present study aims to improve conventional 'Black-Box' thermodynamic modeling by developing multiple intermediate 'boxes' that calculate two-phase (solid-vapor) equilibria in small-scale gasifiers. The model is therefore named ''Multi-Box''. Experimental data from a small-scale gasifier have been used for the validation of the model. The returned results are significantly closer to the actual case-study measurements than those of single-stage thermodynamic modeling.

  18. The effects of biome and spatial scale on the Co-occurrence patterns of a group of Namibian beetles

    NASA Astrophysics Data System (ADS)

    Pitzalis, Monica; Montalto, Francesca; Amore, Valentina; Luiselli, Luca; Bologna, Marco A.

    2017-08-01

    Co-occurrence patterns (studied by the C-score, the number of checkerboard units, the number of species combinations, and the V-ratio, and by an empirical Bayes approach developed by Gotelli and Ulrich (2010)) are crucial elements for understanding assembly rules in ecological communities at both local and spatial scales. In order to explore general assembly rules and the effects of biome and spatial scale on such rules, here we studied a group of beetles (Coleoptera, Meloidae), using Namibia as a case study. Data were gathered from 186 sampling sites, which allowed collection of 74 different species. We analyzed data at the level of (i) all sampled sites, (ii) all sites stratified by biome (Savannah, Succulent Karoo, Nama Karoo, Desert), and (iii) three randomly selected nested areas with three spatial scales each. Three competing algorithms were used for all analyses: (i) Fixed-Equiprobable, (ii) Fixed-Fixed, and (iii) Fixed-Proportional. In most of the null models we created, co-occurrence indicators revealed a non-random structure in meloid beetle assemblages at the global scale and at the scale of biomes, with species aggregation being much more important than species segregation in determining this non-randomness. At the level of biome, the same non-random organization was uncovered in assemblages from Savannah (where the aggregation pattern was particularly strong) and Succulent Karoo, but not in Desert and Nama Karoo. We conclude that species facilitation and similar niches in endemic species pairs may be particularly important as community drivers in our case study. This pattern is also consistent with the evidence of higher species diversity (normalized to biome surface area) in the two former biomes. Historical patterns were perhaps also important for Succulent Karoo assemblages. Spatial scale had a reduced effect on patterning our data. This is consistent with the general homogeneity of environmental conditions over wide areas in Namibia.
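
    For readers unfamiliar with these indices, the sketch below computes the C-score and a Fixed-Equiprobable null model on a toy presence-absence matrix; the matrix size and number of permutations are arbitrary choices, not the study's settings.

      import numpy as np

      rng = np.random.default_rng(0)

      def c_score(m):
          """Stone & Roberts C-score: mean number of checkerboard units
          over all species pairs of a presence-absence matrix (species x sites)."""
          r = m.sum(axis=1)                    # row totals (species occurrences)
          s = m @ m.T                          # pairwise co-occurrence counts
          iu = np.triu_indices(m.shape[0], 1)
          return np.mean((r[iu[0]] - s[iu]) * (r[iu[1]] - s[iu]))

      def fixed_equiprobable_null(m):
          """Randomize site assignments per species: row sums fixed,
          columns equiprobable (one of the three algorithms listed above)."""
          null = np.zeros_like(m)
          for i, row in enumerate(m):
              cols = rng.choice(m.shape[1], size=int(row.sum()), replace=False)
              null[i, cols] = 1
          return null

      obs = rng.integers(0, 2, size=(10, 20))  # toy presence-absence data
      null_scores = [c_score(fixed_equiprobable_null(obs)) for _ in range(999)]
      p = np.mean([ns >= c_score(obs) for ns in null_scores])  # one-tailed p-value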

  19. Quark–gluon plasma phenomenology from anisotropic lattice QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skullerud, Jon-Ivar; Kelly, Aoife; Aarts, Gert

    The FASTSUM collaboration has been carrying out simulations of Nf = 2 + 1 QCD at nonzero temperature in the fixed-scale approach using anisotropic lattices. Here we present the status of these studies, including recent results for electrical conductivity and charge diffusion, and heavy quarkonium (charm and beauty) physics.

  20. Holographic entanglement chemistry

    NASA Astrophysics Data System (ADS)

    Caceres, Elena; Nguyen, Phuc H.; Pedraza, Juan F.

    2017-05-01

    We use the Iyer-Wald formalism to derive an extended first law of entanglement that includes variations in the cosmological constant, Newton's constant and—in the case of higher-derivative theories—all the additional couplings of the theory. In Einstein gravity, where the number of degrees of freedom N2 of the dual field theory is a function of Λ and G, our approach allows us to vary N keeping the field theory scale fixed or to vary the field theory scale keeping N fixed. We also derive an extended first law of entanglement for Gauss-Bonnet and Lovelock gravity and show that in these cases all the extra variations reorganize nicely in terms of the central charges of the theory. Finally, we comment on the implications for renormalization group flows and c-theorems in higher dimensions.

  1. Functional Genomics Approaches to Studying Symbioses between Legumes and Nitrogen-Fixing Rhizobia.

    PubMed

    Lardi, Martina; Pessi, Gabriella

    2018-05-18

    Biological nitrogen fixation gives legumes a pronounced growth advantage in nitrogen-deprived soils and is of considerable ecological and economic interest. In exchange for reduced atmospheric nitrogen, typically given to the plant in the form of amides or ureides, the legume provides nitrogen-fixing rhizobia with nutrients and highly specialised root structures called nodules. To elucidate the molecular basis underlying physiological adaptations on a genome-wide scale, functional genomics approaches, such as transcriptomics, proteomics, and metabolomics, have been used. This review presents an overview of the different functional genomics approaches that have been performed on rhizobial symbiosis, with a focus on studies investigating the molecular mechanisms used by the bacterial partner to interact with the legume. While rhizobia belonging to the alpha-proteobacterial group (alpha-rhizobia) have been well studied, few studies to date have investigated this process in beta-proteobacteria (beta-rhizobia).

  2. Constant-q data representation in Neutron Compton scattering on the VESUVIO spectrometer

    NASA Astrophysics Data System (ADS)

    Senesi, R.; Pietropaolo, A.; Andreani, C.

    2008-09-01

    Standard data analysis on the VESUVIO spectrometer at ISIS is carried out within the Impulse Approximation framework, making use of the West scaling variable y. The experiments are performed using the time-of-flight technique with the detectors positioned at constant scattering angles. Line shape analysis is routinely performed in the y-scaling framework, using two different (and equivalent) approaches: (1) fitting the parameters of the recoil peaks directly to fixed-angle time-of-flight spectra; (2) transforming the time-of-flight spectra into fixed-angle y spectra, referred to as the Neutron Compton Profiles, and then fitting the line shape parameters. The present work shows that scattering signals from different fixed-angle detectors can be collected and rebinned to obtain Neutron Compton Profiles at constant wave-vector transfer, q, allowing for a suitable interpretation of data in terms of the dynamical structure factor, S(q,ω). The current limits of applicability of such a procedure are discussed in terms of the available q-range and relative uncertainties for the VESUVIO experimental setup and of the main approximations involved.
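
    Schematically, the regrouping works as follows (array names and the nearest-band rule are assumptions; the actual VESUVIO reduction also propagates instrument resolution and uncertainties):

      import numpy as np

      def constant_q_slices(q_det, w_det, y_det, q_grid, dq):
          """Pool (q, omega, signal) samples from all fixed-angle detectors,
          then select the samples falling in a narrow band |q - q0| < dq/2
          for each q0, yielding S(q0, omega) at constant wave-vector transfer."""
          slices = {}
          for q0 in q_grid:
              sel = np.abs(q_det - q0) < dq / 2.0
              order = np.argsort(w_det[sel])
              slices[q0] = (w_det[sel][order], y_det[sel][order])
          return slices

    The usable q-range is set by where enough detectors contribute samples to each band, which is the limitation discussed in the abstract.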

  3. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    PubMed

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). When the fixed effects analysis was combined with the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence to confirm job stressors as risk factors for mental ill health using methods that improve causal inference.
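
    The fixed effects step can be summarized in a few lines: demeaning each person's outcome and exposure removes all time-invariant confounders before the regression. The sketch below is a bare-bones illustration with fabricated data, not the study's estimation code (which also handles time-varying covariates and the instrumental variable stage).

      import numpy as np

      def fixed_effects_beta(y, x, ids):
          """Within (fixed effects) estimator: demean y and x inside each
          individual, then regress. Sweeps out time-invariant confounders."""
          yd, xd = y.astype(float), x.astype(float)
          for i in np.unique(ids):
              m = ids == i
              yd[m] -= yd[m].mean()
              xd[m] -= xd[m].mean()
          return (xd @ yd) / (xd @ xd)   # slope of the demeaned regression

      # Toy panel: 3 persons x 4 waves, true within-person slope = 1.3
      ids = np.repeat([0, 1, 2], 4)
      x = np.array([1, 2, 3, 4, 2, 3, 4, 5, 0, 1, 2, 3], dtype=float)
      y = 1.3 * x + np.repeat([5.0, -2.0, 9.0], 4)  # person-specific intercepts
      print(fixed_effects_beta(y, x, ids))          # -> 1.3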

  4. Exact results for the O(N) model with quenched disorder

    NASA Astrophysics Data System (ADS)

    Delfino, Gesualdo; Lamsen, Noel

    2018-04-01

    We use scale invariant scattering theory to exactly determine the lines of renormalization group fixed points for O(N)-symmetric models with quenched disorder in two dimensions. Random fixed points are characterized by two disorder parameters: a modulus that vanishes when approaching the pure case, and a phase angle. The critical lines fall into three classes depending on the value of the disorder modulus. Besides the class corresponding to the pure case, a second class has maximal value of the disorder modulus and includes Nishimori-like multicritical points as well as zero-temperature fixed points. The third class contains critical lines that interpolate, as N varies, between the first two classes. For positive N, it contains a single line of infrared fixed points spanning the values of N from √2 − 1 to 1. The symmetry sector of the energy density operator is superuniversal (i.e., N-independent) along this line. For N = 2 a line of fixed points exists only in the pure case, but accounts also for the Berezinskii-Kosterlitz-Thouless phase observed in the presence of disorder.

  5. Extrapolation of radiation thermometry scales for determining the transition temperature of metal-carbon points. Experiments with the Co-C

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Girard, F.; Florio, M.

    2009-02-01

    Four independent radiation temperature scales approximating the ITS-90 at 900 nm, 950 nm and 1.6 µm have been realized from the indium point (429.7485 K) to the copper point (1357.77 K) which were used to derive by extrapolation the transition temperature T90(Co-C) of the cobalt-carbon eutectic fixed point. An INRIM cell was investigated and an average value T90(Co-C) = 1597.20 K was found with the four values lying within 0.25 K. Alternatively, thermodynamic approximated scales were realized by assigning to the fixed points the best presently available thermodynamic values and deriving T(Co-C). An average value of 1597.27 K was found (four values lying within 0.25 K). The standard uncertainties associated with T90(Co-C) and T(Co-C) were 0.16 K and 0.17 K, respectively. INRIM determinations are compatible with recent thermodynamic determinations on three different cells (values lying between 1597.11 K and 1597.25 K) and with the result of a comparison on the same cell by an absolute radiation thermometer and an irradiance measurement with filter radiometers which give values of 1597.11 K and 1597.43 K, respectively (Anhalt et al 2006 Metrologia 43 S78-83). The INRIM approach allows the determination of both ITS-90 and thermodynamic temperature of a fixed point in a simple way and can provide valuable support to absolute radiometric methods in defining the transition temperature of new high-temperature fixed points.

  6. Operator mixing in the ɛ-expansion: Scheme and evanescent-operator independence

    NASA Astrophysics Data System (ADS)

    Di Pietro, Lorenzo; Stamou, Emmanuel

    2018-03-01

    We consider theories with fermionic degrees of freedom that have a fixed point of Wilson-Fisher type in noninteger dimension d = 4 − 2ɛ. Due to the presence of evanescent operators, i.e., operators that vanish in integer dimensions, these theories contain families of infinitely many operators that can mix with each other under renormalization. We clarify the dependence of the corresponding anomalous-dimension matrix on the choice of renormalization scheme beyond leading order in the ɛ-expansion. In standard choices of scheme, we find that eigenvalues at the fixed point cannot be extracted from a finite-dimensional block. We illustrate in examples a truncation approach to compute the eigenvalues. These are observable scaling dimensions, and, indeed, we find that the dependence on the choice of scheme cancels. As an application, we obtain the IR scaling dimension of four-fermion operators in QED in d = 4 − 2ɛ at order O(ɛ²).

  7. Brittle Fracture In Disordered Media: A Unified Theory

    NASA Astrophysics Data System (ADS)

    Shekhawat, Ashivni; Zapperi, Stefano; Sethna, James

    2013-03-01

    We present a unified theory of fracture in disordered brittle media that reconciles apparently conflicting results reported in the literature, as well as several experiments on materials ranging from granite to bones. Our renormalization group based approach yields a phase diagram in which the percolation fixed point, expected for infinite disorder, is unstable for finite disorder and flows to a zero-disorder nucleation-type fixed point, thus showing that fracture has mixed first order and continuous character. In a region of intermediate disorder and finite system sizes, we predict a crossover with mean-field avalanche scaling. We discuss intriguing connections to other phenomena where critical scaling is only observed in finite size systems and disappears in the thermodynamic limit. We present a numerical validation of our theoretical results. We acknowledge support from DOE-BES DE-FG02-07ER46393, ERC-AdG-2011 SIZEFFECT, and the NSF through TeraGrid by LONI under grant TG-DMR100025.

  8. Charged fixed point in the Ginzburg-Landau superconductor and the role of the Ginzburg parameter κ

    NASA Astrophysics Data System (ADS)

    Kleinert, Hagen; Nogueira, Flavio S.

    2003-02-01

    We present a semi-perturbative approach which yields an infrared-stable fixed point in the Ginzburg-Landau theory for N=2, where N/2 is the number of complex components. The calculations are done in d=3 dimensions and below Tc, where the renormalization group functions can be expressed directly as functions of the Ginzburg parameter κ, which is the ratio between the two fundamental scales of the problem, the penetration depth λ and the correlation length ξ. We find a charged fixed point for κ > 1/√2, that is, in the type II regime, where Δκ ≡ κ − 1/√2 is shown to be a natural expansion parameter. This parameter controls a momentum-space instability in the two-point correlation function of the order field. This instability appears at a non-zero wave vector p0 whose magnitude scales like Δκ^β¯, with a critical exponent β¯ = 1/2 in the one-loop approximation, a behavior known from magnetic systems with a Lifshitz point in the phase diagram. This momentum-space instability is argued to be the origin of the negative η-exponent of the order field.

  9. Self-consistent field theory of tethered polymers: one dimensional, three dimensional, strong stretching theories and the effects of excluded-volume-only interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suo, Tongchuan, E-mail: suotc@physics.umanitoba.ca; Whitmore, Mark D., E-mail: mark-whitmore@umanitoba.ca

    We examine end-tethered polymers in good solvents, using one- and three-dimensional self-consistent field theory, and strong stretching theories. We also discuss different tethering scenarios, namely, mobile tethers, fixed but random ones, and fixed but ordered ones, and the effects and important limitations of including only binary interactions (excluded volume terms). We find that there is a “mushroom” regime in which the layer thickness is independent of the tethering density, σ, for systems with ordered tethers, but we argue that there is no such plateau for mobile or disordered anchors, nor is there one in the 1D theory. In the other limit of brushes, all approaches predict that the layer thickness scales linearly with N. However, the σ^(1/3) scaling is a result of keeping only excluded volume interactions: when the full potential is included, the dependence is faster and more complicated than σ^(1/3). In fact, there does not appear to be any regime in which the layer thickness scales in the combination Nσ^(1/3). We also compare the results for two different solvents with each other, and with earlier Θ solvent results.

  10. Indirect Determination of the Thermodynamic Temperature of a Gold Fixed-Point Cell

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Girard, F.; Florio, M.

    2010-09-01

    Since the value T90(Au) was fixed on the ITS-90, some determinations of the thermodynamic temperature of the gold point have been performed which form, with other renormalized results of previous measurements by radiation thermometry, the basis for the current best estimate of (T − T90)Au = 39.9 mK as elaborated by the CCT-WG4. Such a value, even if consistent with the behavior of T − T90 differences at lower temperatures, is strongly influenced by the low values of TAu determined with a few radiometric measurements. At INRIM, an independent indirect determination of the thermodynamic temperature of gold was performed by means of a radiation thermometry approach. A fixed-point technique was used to realize approximated thermodynamic scales from the Zn point up to the Cu point. A Si-based standard radiation thermometer working at 900 nm and 950 nm was used. The low uncertainty presently associated with the thermodynamic temperature of fixed points and the accuracy of the INRIM realizations allowed scales with an uncertainty lower than 0.03 K in terms of thermodynamic temperature to be realized. A fixed-point cell filled with gold, 99.999 % in purity, was measured, and its freezing temperature was determined by both interpolation and extrapolation. An average TAu = 1337.395 K was found with a combined standard uncertainty of 23 mK. Such a value is 25 mK higher than the presently available value as derived from the CCT-WG4 value of (T − T90)Au = 39.9 mK.

  11. Bridging Scales: A Model-Based Assessment of the Technical Tidal-Stream Energy Resource off Massachusetts, USA

    NASA Astrophysics Data System (ADS)

    Cowles, G. W.; Hakim, A.; Churchill, J. H.

    2016-02-01

    Tidal in-stream energy conversion (TISEC) facilities provide a highly predictable and dependable source of energy. Given the economic and social incentives to migrate towards renewable energy sources, there has been tremendous interest in the technology. Key challenges to the design process stem from the wide range of problem scales, extending from device to array. In the present work we apply a multi-model approach to bridge the scales of interest and select optimal device geometries to estimate the technical resource for several realistic sites in the coastal waters of Massachusetts, USA. The approach links two computational models. To establish flow conditions at site scales (~10 m), a barotropic setup of the unstructured-grid ocean model FVCOM is employed. The model is validated using shipboard and fixed ADCP as well as pressure data. For the device scale, the structured multiblock flow solver SUmb is selected. A large ensemble of simulations of 2D cross-flow tidal turbines is used to construct a surrogate design model. The surrogate model is then queried using velocity profiles extracted from the tidal model to determine the optimal geometry for the conditions at each site. After device selection, the annual technical yield of the array is evaluated with FVCOM using a linear momentum actuator disk approach to model the turbines. Results for several key Massachusetts sites, including comparison with theoretical approaches, will be presented.

  12. Reading intervention with a growth mindset approach improves children's skills.

    PubMed

    Andersen, Simon Calmar; Nielsen, Helena Skyt

    2016-10-25

    Laboratory experiments have shown that parents who believe their child's abilities are fixed engage with their child in unconstructive, performance-oriented ways. We show that children of parents with such "fixed mindsets" have lower reading skills, even after controlling for the child's previous abilities and the parents' socioeconomic status. In a large-scale randomized field trial (N classrooms = 72; N children = 1,587) conducted by public authorities, parents receiving a reading intervention were told about the malleability of their child's reading abilities and how to support their child by praising his/her effort rather than his/her performance. This low-cost intervention increased the reading and writing achievements of all participating children, not least immigrant children with non-Western backgrounds and children with low-educated mothers. As expected, effects were even bigger for parents who before the intervention had a fixed mindset.

  13. New, national bottom-up estimate for tree-based biological ...

    EPA Pesticide Factsheets

    Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha-1 yr-1) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light).Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating

  14. Using Mobile Monitoring to Assess Spatial Variability in Urban Air Pollution Levels: Opportunities and Challenges (Invited)

    NASA Astrophysics Data System (ADS)

    Larson, T.

    2010-12-01

    Measuring air pollution concentrations from a moving platform is not a new idea. Historically, however, most information on the spatial variability of air pollutants has been derived from fixed-site networks operating simultaneously over space. While this approach has obvious advantages from a regulatory perspective, with the increasing need to understand ever finer scales of spatial variability in urban pollution levels, the use of mobile monitoring to supplement fixed-site networks has received increasing attention. Here we present examples of the use of this approach: 1) to assess existing fixed-site fine particle networks in Seattle, WA, including the establishment of new fixed-site monitoring locations; 2) to assess the effectiveness of a regulatory intervention, a wood stove burning ban, on the reduction of fine particle levels in the greater Puget Sound region; and 3) to assess spatial variability of both wood smoke and mobile source impacts in both Vancouver, B.C. and Tacoma, WA. Deducing spatial information from the inherently spatio-temporal measurements taken from a mobile platform is an area that deserves further attention. We discuss the use of “fuzzy” points to address the fine-scale spatio-temporal variability in the concentration of mobile source pollutants, specifically to deduce the broader distribution and sources of fine particle soot in the summer in Vancouver, B.C. We also discuss the use of principal component analysis to assess the spatial variability in multivariate, source-related features deduced from simultaneous measurements of light scattering, light absorption and particle-bound PAHs in Tacoma, WA. With increasing miniaturization and decreasing power requirements of air monitoring instruments, the number of simultaneous measurements that can easily be made from a mobile platform is rapidly increasing. Hopefully the methods used to design mobile monitoring experiments for differing purposes, and the methods used to interpret those measurements, will keep pace.

  15. A coupled PFEM-Eulerian approach for the solution of porous FSI problems

    NASA Astrophysics Data System (ADS)

    Larese, A.; Rossi, R.; Oñate, E.; Idelsohn, S. R.

    2012-12-01

    This paper aims to present a coupled solution strategy for the problem of seepage through a rockfill dam, taking into account the free-surface flow within the solid as well as in its vicinity. A combination of a Lagrangian model for the structural behavior and an Eulerian approach for the fluid is used. The particle finite element method is adopted for the evaluation of the structural response, whereas an Eulerian fixed-mesh approach is employed for the fluid. The free surface is tracked by the use of a level set technique. The numerical results are validated with experiments on scale models of rockfill dams.

  16. ϕ³ theory with F4 flavor symmetry in 6 − 2ɛ dimensions: 3-loop renormalization and conformal bootstrap

    NASA Astrophysics Data System (ADS)

    Pang, Yi; Rong, Junchen; Su, Ning

    2016-12-01

    We consider ϕ³ theory in 6 − 2ɛ dimensions with F4 global symmetry. The beta function is calculated up to 3 loops, and a stable unitary IR fixed point is observed. The anomalous dimensions of operators quadratic or cubic in ϕ are also computed. We then employ the conformal bootstrap technique to study the fixed point predicted by the perturbative approach. For each putative scaling dimension of ϕ (Δϕ), we obtain the corresponding upper bound on the scaling dimension of the second-lowest scalar primary in the 26 representation (Δ26^2nd) which appears in the OPE of ϕ × ϕ. In D = 5.95, we observe a sharp peak on the upper-bound curve located at Δϕ equal to the value predicted by the 3-loop computation. In D = 5, we observe a weak kink on the upper-bound curve at (Δϕ, Δ26^2nd) = (1.6, 4).

  17. Impact of orbit modeling on DORIS station position and Earth rotation estimates

    NASA Astrophysics Data System (ADS)

    Štěpánek, Petr; Rodriguez-Solano, Carlos Javier; Hugentobler, Urs; Filler, Vratislav

    2014-04-01

    The high precision of estimated station coordinates and Earth rotation parameters (ERP) obtained from satellite geodetic techniques is based on the precise determination of the satellite orbit. This paper focuses on the analysis of the impact of different orbit parameterizations on the accuracy of station coordinates and the ERPs derived from DORIS observations. In a series of experiments, the DORIS data from the complete year 2011 were processed with different orbit model settings. First, the impact of precise modeling of the non-conservative forces on geodetic parameters was compared with results obtained with an empirical-stochastic modeling approach. Second, the temporal spacing of drag scaling parameters was tested. Third, the impact of estimating once-per-revolution harmonic accelerations in the cross-track direction was analyzed. And fourth, two different approaches for solar radiation pressure (SRP) handling were compared, namely adjusting the SRP scaling parameter or fixing it to pre-defined values. Our analyses confirm that the empirical-stochastic orbit modeling approach, which does not require satellite attitude information and macro models, results for most of the monitored station parameters in comparable accuracy as the dynamical model that employs precise non-conservative force modeling. However, the dynamical orbit model leads to a reduction of the RMS values for the estimated rotation pole coordinates by 17% for the x-pole and 12% for the y-pole. The experiments show that adjusting atmospheric drag scaling parameters every 30 min is appropriate for DORIS solutions. Moreover, it was shown that the adjustment of a cross-track once-per-revolution empirical parameter increases the RMS of the estimated Earth rotation pole coordinates. With recent data it was, however, not possible to confirm the previously known high annual variation in the estimated geocenter z-translation series, or its mitigation by fixing the SRP parameters to pre-defined values.

  18. Objective classification of ecological status in marine water bodies using ecotoxicological information and multivariate analysis.

    PubMed

    Beiras, Ricardo; Durán, Iria

    2014-12-01

    Some relevant shortcomings have been identified in the current approach to the classification of ecological status in marine water bodies, leading to delays in the fulfillment of the Water Framework Directive objectives. Natural variability makes it difficult to set fixed reference values and boundary values for the Ecological Quality Ratios (EQR) of the biological quality elements. Biological responses to environmental degradation are frequently nonmonotonic, hampering the EQR approach. Community structure traits respond only once ecological damage has already been done and do not provide early warning signals. An alternative methodology for the classification of ecological status is proposed, integrating chemical measurements, ecotoxicological bioassays and community structure traits (species richness and diversity), and using multivariate analyses (multidimensional scaling and cluster analysis). This approach does not depend on the arbitrary definition of fixed reference values and EQR boundary values, and it is suitable for integrating nonlinear, sensitive signals of ecological degradation. As a disadvantage, this approach demands the inclusion of sampling sites representing the full range of ecological status in each monitoring campaign.
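
    A bare-bones version of the proposed multivariate pipeline might look as follows (standardization, Ward linkage, and two clusters are illustrative choices, not the authors' settings):

      import numpy as np
      from sklearn.manifold import MDS
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      # Toy matrix: sites x indicators (chemistry, bioassay toxicity,
      # species richness, diversity), standardized before analysis.
      rng = np.random.default_rng(2)
      data = rng.normal(size=(12, 4))
      data[(0, 1, 2), :] += 3.0                  # a degraded group of sites
      z = (data - data.mean(0)) / data.std(0)

      coords = MDS(n_components=2, random_state=0).fit_transform(z)   # ordination
      groups = fcluster(linkage(pdist(z), method="ward"), t=2, criterion="maxclust")
      print(groups)   # cluster labels play the role of status classes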

  19. Fickian dispersion is anomalous

    DOE PAGES

    Cushman, John H.; O’Malley, Dan

    2015-06-22

    The thesis put forward here is that the occurrence of Fickian dispersion in geophysical settings is a rare event and consequently should be labeled as anomalous. What people classically call anomalous is really the norm. In a Lagrangian setting, a process whose mean square displacement is proportional to time is generally labeled as Fickian dispersion. With a number of counterexamples we show why this definition is fraught with difficulty. In a related discussion, we show that an infinite second moment does not necessarily imply the process is super-dispersive. By employing a rigorous mathematical definition of Fickian dispersion we illustrate why it is so hard to find a Fickian process. We go on to employ a number of renormalization group approaches to classify non-Fickian dispersive behavior. Scaling laws for the probability density function of a dispersive process, the distribution of first passage times, the mean first passage time, and the finite-size Lyapunov exponent are presented for fixed points of both deterministic and stochastic renormalization group operators. The fixed points of the renormalization group operators are p-self-similar processes. A generalized renormalization group operator is introduced whose fixed points form a set of generalized self-similar processes. Finally, power-law clocks are introduced to examine multi-scaling behavior. Several examples of these ideas are presented and discussed.
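
    In code, the Lagrangian definition discussed above reduces to fitting the mean square displacement exponent; the sketch below (with a synthetic random walk as stand-in data) returns α ≈ 1, the case classically labeled Fickian.

      import numpy as np

      def msd_exponent(paths, dt=1.0):
          """Fit <|x(t) - x(0)|^2> ~ t^alpha over an ensemble of trajectories.
          alpha = 1 is the classical 'Fickian' label; the text argues this
          case is rare in geophysical settings."""
          disp2 = (paths - paths[:, :1]) ** 2           # squared displacement
          msd = disp2.mean(axis=0)[1:]                  # ensemble average, skip t=0
          t = dt * np.arange(1, paths.shape[1])
          alpha, _ = np.polyfit(np.log(t), np.log(msd), 1)
          return alpha

      rng = np.random.default_rng(3)
      brownian = rng.normal(size=(500, 1000)).cumsum(axis=1)  # ordinary random walk
      print(msd_exponent(brownian))                           # ~1.0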

  20. Uncovering novel phase structures in □^k scalar theories with the renormalization group

    NASA Astrophysics Data System (ADS)

    Safari, M.; Vacca, G. P.

    2018-03-01

    We present a detailed version of our recent work on the RG approach to multicritical scalar theories with higher-derivative kinetic term φ(−□)^k φ and upper critical dimension d_c = 2nk/(n−1). Depending on whether the numbers k and n have a common divisor, two classes of theories have been distinguished. For coprime k and n−1 the theory admits a Wilson-Fisher type fixed point. We derive in this case the RG equations of the potential and compute the scaling dimensions and some OPE coefficients, mostly at leading order in ɛ. While giving new results, the critical data we provide are compared, when possible, and accord with a recent alternative approach using the analytic structure of conformal blocks. When instead k and n−1 have a common divisor, we unveil a novel interacting structure at criticality. □^2 theories with odd n, which fall in this class, are analyzed in detail. Using the RG flows it is shown that a derivative interaction is unavoidable at the critical point. In particular there is an infrared fixed point with a pure derivative interaction at which we compute the scaling dimensions and, for the particular example of □^2 theory in d_c = 6, also some OPE coefficients.

  1. The evolving Planck mass in classically scale-invariant theories

    NASA Astrophysics Data System (ADS)

    Kannike, K.; Raidal, M.; Spethmann, C.; Veermäe, H.

    2017-04-01

    We consider classically scale-invariant theories with non-minimally coupled scalar fields, where the Planck mass and the hierarchy of physical scales are dynamically generated. The classical theories possess a fixed point, where scale invariance is spontaneously broken. In these theories, however, the Planck mass becomes unstable in the presence of explicit sources of scale invariance breaking, such as non-relativistic matter and cosmological constant terms. We quantify the constraints on such classical models from Big Bang Nucleosynthesis that lead to an upper bound on the non-minimal coupling and require trans-Planckian field values. We show that quantum corrections to the scalar potential can stabilise the fixed point close to the minimum of the Coleman-Weinberg potential. The time-averaged motion of the evolving fixed point is strongly suppressed, thus the limits on the evolving gravitational constant from Big Bang Nucleosynthesis and other measurements do not presently constrain this class of theories. Field oscillations around the fixed point, if not damped, contribute to the dark matter density of the Universe.

  2. Self-consistent field theory of tethered polymers: one dimensional, three dimensional, strong stretching theories and the effects of excluded-volume-only interactions.

    PubMed

    Suo, Tongchuan; Whitmore, Mark D

    2014-11-28

    We examine end-tethered polymers in good solvents, using one- and three-dimensional self-consistent field theory, and strong stretching theories. We also discuss different tethering scenarios, namely, mobile tethers, fixed but random ones, and fixed but ordered ones, and the effects and important limitations of including only binary interactions (excluded volume terms). We find that there is a "mushroom" regime in which the layer thickness is independent of the tethering density, σ, for systems with ordered tethers, but we argue that there is no such plateau for mobile or disordered anchors, nor is there one in the 1D theory. In the other limit of brushes, all approaches predict that the layer thickness scales linearly with N. However, the σ^(1/3) scaling is a result of keeping only excluded volume interactions: when the full potential is included, the dependence is faster and more complicated than σ^(1/3). In fact, there does not appear to be any regime in which the layer thickness scales in the combination Nσ^(1/3). We also compare the results for two different solvents with each other, and with earlier Θ solvent results.

  3. Microfluidic generation of aqueous two-phase system (ATPS) droplets by controlled pulsating inlet pressures.

    PubMed

    Moon, Byeong-Ui; Jones, Steven G; Hwang, Dae Kun; Tsai, Scott S H

    2015-06-07

    We present a technique that generates droplets using ultralow interfacial tension aqueous two-phase systems (ATPS). Our method combines a classical microfluidic flow focusing geometry with precisely controlled pulsating inlet pressure, to form monodisperse ATPS droplets. The dextran (DEX) disperse phase enters through the central inlet with variable on-off pressure cycles controlled by a pneumatic solenoid valve. The continuous phase polyethylene glycol (PEG) solution enters the flow focusing junction through the cross channels at a fixed flow rate. The on-off cycles of the applied pressure, combined with the fixed flow rate cross flow, make it possible for the ATPS jet to break up into droplets. We observe different droplet formation regimes with changes in the applied pressure magnitude and timing, and the continuous phase flow rate. We also develop a scaling model to predict the size of the generated droplets, and the experimental results show a good quantitative agreement with our scaling model. Additionally, we demonstrate the potential for scaling-up of the droplet production rate, with a simultaneous two-droplet generating geometry. We anticipate that this simple and precise approach to making ATPS droplets will find utility in biological applications where the all-biocompatibility of ATPS is desirable.

  4. Pilot Comparison of Radiance Temperature Scale Realization Between NIMT and NMIJ

    NASA Astrophysics Data System (ADS)

    Keawprasert, T.; Yamada, Y.; Ishii, J.

    2015-03-01

    A pilot comparison of radiance temperature scale realizations between the National Institute of Metrology Thailand (NIMT) and the National Metrology Institute of Japan (NMIJ) was conducted. At the two national metrology institutes (NMIs), a 900 nm radiation thermometer, used as the transfer artifact, was calibrated by means of a multiple fixed-point method using fixed-point blackbodies at the Zn, Al, Ag, and Cu points, and by means of relative spectral responsivity measurements according to the International Temperature Scale of 1990 (ITS-90) definition. The Sakuma-Hattori equation is used for interpolating the radiance temperature scale between the four fixed points and also for extrapolating the ITS-90 temperature scale to 2000 °C. This paper compares the calibration results in terms of fixed-point measurements, relative spectral responsivity, and finally the radiance temperature scale. Good agreement for the fixed-point measurements was found provided a correction for the change of the internal temperature of the artifact was applied, using the temperature coefficient measured at the NMIJ. For the realized radiance temperature range from 400 °C to 1100 °C, the resulting scale differences between the two NMIs are well within the combined scale comparison uncertainty of 0.12 °C. The spectral responsivity measured at the NIMT has a curve comparable to that measured at the NMIJ, especially in the out-of-band region, yielding an ITS-90 scale difference within 1.0 °C from the Cu point to 2000 °C, whereas the combined realization comparison uncertainty of NIMT and NMIJ is 1.2 °C at 2000 °C.
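
    For readers unfamiliar with the interpolation scheme, the Planck-form Sakuma-Hattori equation can be fitted to fixed-point signals and inverted for temperature, as sketched below. The fixed-point temperatures are ITS-90 values; the parameters and signals are synthetic stand-ins (for a 900 nm thermometer the parameter A is close to the centre wavelength in μm), not data from this comparison.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    C2 = 14388.0  # second radiation constant, um*K

    def sakuma_hattori(T, A, B, C):
        """Planck-form Sakuma-Hattori equation: thermometer signal vs temperature T (K)."""
        return C / (np.exp(C2 / (A * T + B)) - 1.0)

    def to_temperature(S, A, B, C):
        """Invert the equation: temperature from a measured signal."""
        return (C2 / np.log(C / S + 1.0) - B) / A

    # ITS-90 freezing points of Zn, Al, Ag, Cu in kelvin
    T_fp = np.array([692.677, 933.473, 1234.93, 1357.77])
    S_fp = sakuma_hattori(T_fp, 0.9, 0.01, 1.0e6)  # synthetic fixed-point signals

    (A, B, C), _ = curve_fit(sakuma_hattori, T_fp, S_fp, p0=(0.9, 0.0, 1.0e6))
    print(to_temperature(S_fp, A, B, C))  # recovers the four fixed-point temperatures
    ```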

  5. Oxygen Transfer in Moving Bed Biofilm Reactor and Integrated Fixed Film Activated Sludge Processes.

    PubMed

    2017-11-17

    A demonstrated approach to designing the so-called medium-bubble air diffusion network for oxygen transfer into the aerobic zone(s) of moving bed biofilm reactor (MBBR) and integrated fixed-film activated sludge (IFAS) processes is described in this paper. Operational full-scale biological water resource recovery systems treating municipal sewerage demonstrate that medium-bubble air diffusion networks designed using the method presented here provide reliable service. Further improvement is possible, however, as knowledge gaps prevent more rational process designs. Filling such knowledge gaps can potentially result in higher performing and more economical systems. Small-scale system testing demonstrates significant enhancement of oxygen transfer capacity due to the presence of media, but quantification of such effects in full-scale systems is lacking, and is needed. Establishment of the relationship between diffuser submergence, aeration rate, and biofilm carrier fill fraction will enhance MBBR and IFAS aerobic process design, cost, and performance. Limited testing of full-scale systems is available to allow computation of alpha values. As with clean water testing of full-scale systems, further full-scale testing under actual operating conditions is required to more fully quantify MBBR and IFAS system oxygen transfer performance under a wide range of operating conditions. Control of MBBR and IFAS aerobic zone oxygen transfer systems can be optimized by recognizing that varying residual dissolved oxygen (DO) concentrations are needed, depending on operating conditions. For example, the DO concentration in the aerobic zone of nitrifying IFAS processes can be lowered during warm weather conditions when greater suspended growth nitrification can occur, resulting in the need for reduced nitrification by the biofilm compartment. Further application of oxygen transfer control approaches used in activated sludge systems to MBBR and IFAS systems, such as ammonia-based oxygen transfer system control, has been demonstrated to further improve MBBR and IFAS system energy efficiency.

  6. Cost of riparian buffer zones: A comparison of hydrologically adapted site-specific riparian buffers with traditional fixed widths

    NASA Astrophysics Data System (ADS)

    Tiwari, T.; Lundström, J.; Kuglerová, L.; Laudon, H.; Öhman, K.; Ågren, A. M.

    2016-02-01

    Traditional approaches aiming at protecting surface waters from the negative impacts of forestry often focus on retaining fixed width buffer zones around waterways. While this method is relatively simple to design and implement, it has been criticized for ignoring the spatial heterogeneity of biogeochemical processes and biodiversity in the riparian zone. Alternatively, a variable width buffer zone adapted to site-specific hydrological conditions has been suggested to improve the protection of biogeochemical and ecological functions of the riparian zone. However, little is known about the monetary value of maintaining hydrologically adapted buffer zones compared to the traditionally used fixed width ones. In this study, we created a hydrologically adapted buffer zone by identifying wet areas and groundwater discharge hotspots in the riparian zone. The opportunity cost of the hydrologically adapted riparian buffer zones was then compared to that of the fixed width zones in a meso-scale boreal catchment to determine the most economical option for designing riparian buffers. The results show that hydrologically adapted buffer zones were cheaper per hectare than the fixed width ones when comparing the total cost. This was because the hydrologically adapted buffers included more wetlands and low-productivity forest areas than the fixed width ones. As such, hydrologically adapted buffer zones allow more effective protection of the parts of the riparian zone that are ecologically and biogeochemically important and more sensitive to disturbance, without forest landowners incurring any additional cost relative to fixed width buffers.

  7. Resolution convergence in cosmological hydrodynamical simulations using adaptive mesh refinement

    NASA Astrophysics Data System (ADS)

    Snaith, Owain N.; Park, Changbom; Kim, Juhan; Rosdahl, Joakim

    2018-06-01

    We have explored the evolution of gas distributions in cosmological simulations carried out using the RAMSES adaptive mesh refinement (AMR) code, in order to assess the effects of resolution on cosmological hydrodynamical simulations. It is vital to understand the effect of both the resolution of initial conditions (ICs) and the final resolution of the simulation. Lower initial resolution simulations tend to produce smaller numbers of low-mass structures. This will strongly affect the assembly history of objects, and has the same effect as simulating different cosmologies. The resolution of ICs is an important factor in simulations, even with a fixed maximum spatial resolution. The power spectrum of gas in simulations using AMR diverges strongly from that of the fixed grid approach - with more power on small scales in the AMR simulations - even at fixed physical resolution, and AMR also produces offsets in the star formation at specific epochs. This is because before certain times the upper grid levels are held back to maintain approximately fixed physical resolution, and to mimic the natural evolution of dark-matter-only simulations. Although the impact of hold-back falls with increasing spatial and IC resolutions, the offsets in the star formation remain down to a spatial resolution of 1 kpc. These offsets are of the order of 10-20 per cent, below the uncertainty in the implemented physics, but are expected to affect the detailed properties of galaxies. We have implemented a new grid-hold-back approach to minimize the impact of hold-back on the star formation rate.

  8. Thermal convection of liquid metal in a long inclined cylinder

    NASA Astrophysics Data System (ADS)

    Teimurazov, Andrei; Frick, Peter

    2017-11-01

    The turbulent convection of low-Prandtl-number fluids (Pr = 0.0083) in a long cylindrical cell, heated at one end face and cooled at the other, inclined to the vertical at angle β, 0 ≤ β ≤ π/2 in steps of π/20, is studied numerically by solving the Oberbeck-Boussinesq equations with the large-eddy-simulation approach for small-scale turbulence. The cylinder length is L = 5D, where D is the diameter. The Rayleigh number, defined in terms of the cylinder diameter, is of the order of 5×10^6. We show that the structure of the flow strongly depends on the inclination angle. A stable large-scale circulation (LSC) slightly disturbed by small-scale turbulence exists in the horizontal cylinder. The deviation from a horizontal position provides strong amplification of both the LSC and small-scale turbulence. The energy of turbulent pulsations increases monotonically with decreasing inclination angle β, matching the energy of the LSC at β ≈ π/5. The intensity of the LSC has a wide, almost flat maximum for an inclined cylinder and slumps approaching the vertical position, in which the LSC vanishes. The dependence of the Nusselt number on the inclination angle has a maximum at β ≈ 7π/20 and generally follows the dependence of the intensity of the LSC on the inclination. This indicates that the total heat transport is largely determined by the LSC. We examine the applicability of idealized thermal boundary conditions (BCs) for modeling a real experiment with liquid sodium flows. The simulations are therefore done with two types of temperature BCs: fixed face temperature and fixed heat flux. The intensity of the LSC is slightly higher in the latter case, leading to a corresponding increase of the Nusselt number and enhancement of temperature pulsations.

  9. Performance, physiological, and oculometer evaluation of VTOL landing displays

    NASA Technical Reports Server (NTRS)

    North, R. A.; Stackhouse, S. P.; Graffunder, K.

    1979-01-01

    A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represented higher workload levels.

  10. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the scheme and scale dependence of the strong coupling and of the perturbative coefficients do not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {β_{m≥2}}-terms. We adopt two four-loop examples, e+e- → hadrons and τ decays into hadrons, for detailed analysis. Our results show that under conventional scale setting, as more and more loop terms are included, the scheme dependence of the pQCD prediction cannot be reduced as efficiently as the scale dependence. Thus a proper scale-setting approach is important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach; it provides a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of "unphysical" theoretical conventions.

  11. Renormalization-group theory for finite-size scaling in extreme statistics

    NASA Astrophysics Data System (ADS)

    Györgyi, G.; Moloney, N. R.; Ozogány, K.; Rácz, Z.; Droz, M.

    2010-04-01

    We present a renormalization-group (RG) approach to explain universal features of extreme statistics applied here to independent identically distributed variables. The outlines of the theory have been described in a previous paper, the main result being that finite-size shape corrections to the limit distribution can be obtained from a linearization of the RG transformation near a fixed point, leading to the computation of stable perturbations as eigenfunctions. Here we show details of the RG theory which exhibit remarkable similarities to the RG known in statistical physics. Besides the fixed points explaining universality, and the least stable eigendirections accounting for convergence rates and shape corrections, the similarities include marginally stable perturbations which turn out to be generic for the Fisher-Tippett-Gumbel class. Distribution functions containing unstable perturbations are also considered. We find that, after a transitory divergence, they return to the universal fixed line at the same or at a different point depending on the type of perturbation.
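
    The fixed-point statement can be reproduced in a few lines (a textbook construction, not the authors' code): block maxima of iid exponential variables, shifted by ln N, collapse onto the Gumbel distribution, the attracting fixed point for this universality class.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, M = 1000, 5000                          # block size, number of blocks
    m = rng.exponential(size=(M, N)).max(axis=1)

    z = m - np.log(N)                          # the RG rescaling step (a_N = 1, b_N = ln N)

    zs = np.linspace(-2.0, 6.0, 9)
    ecdf = (z[:, None] <= zs).mean(axis=0)     # empirical CDF of rescaled maxima
    gumbel = np.exp(-np.exp(-zs))              # the Fisher-Tippett-Gumbel fixed point
    print(np.abs(ecdf - gumbel).max())         # small deviation: near the fixed point
    ```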

  12. An individual-based model of skipjack tuna (Katsuwonus pelamis) movement in the tropical Pacific ocean

    NASA Astrophysics Data System (ADS)

    Scutt Phillips, Joe; Sen Gupta, Alex; Senina, Inna; van Sebille, Erik; Lange, Michael; Lehodey, Patrick; Hampton, John; Nicol, Simon

    2018-05-01

    The distribution of marine species is often modeled using Eulerian approaches, in which changes to population density or abundance are calculated at fixed locations in space. Conversely, Lagrangian, or individual-based, models simulate the movement of individual particles moving in continuous space, with broader-scale patterns such as distribution being an emergent property of many, potentially adaptive, individuals. These models offer advantages in examining dynamics across spatiotemporal scales and making comparisons with observations from individual-scale data. Here, we introduce and describe such a model, the Individual-based Kinesis, Advection and Movement of Ocean ANimAls model (Ikamoana), which we use to replicate the movement processes of an existing Eulerian model for marine predators (the Spatial Ecosystem and Population Dynamics Model, SEAPODYM). Ikamoana simulates the movement of either individuals or groups of animals by physical ocean currents, habitat-dependent stochastic movements (kinesis), and taxis movements representing active searching behaviours. Applying our model to Pacific skipjack tuna (Katsuwonus pelamis), we show that it accurately replicates the evolution of density distribution simulated by SEAPODYM with low time-mean error and a spatial correlation of density that exceeds 0.96 at all times. We demonstrate how the Lagrangian approach permits easy tracking of individuals' trajectories for examining connectivity between different regions, and show how the model can provide independent estimates of transfer rates between commonly used assessment regions. In particular, we find that retention rates in most assessment regions are considerably smaller (up to a factor of 2) than those estimated by the primary assessment model for this skipjack population. Moreover, these rates are sensitive to ocean state (e.g. El Niño vs La Niña) and so assuming fixed transfer rates between regions may lead to spurious stock estimates. A novel feature of the Lagrangian approach is that individual schools can be tracked through time, and we demonstrate that movement between two assessment regions at broad temporal scales includes extended transits through other regions at finer scales. Finally, we discuss the utility of this modeling framework for the management of marine reserves, designing effective monitoring programmes, and exploring hypotheses regarding the behaviour of hard-to-observe oceanic animals.
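
    In outline, one Lagrangian update of the kind described combines advection by currents, habitat-modulated random motion (kinesis) and taxis up the habitat gradient. The 1-D sketch below with invented fields shows the scheme's shape only; it is not Ikamoana or SEAPODYM code.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def step(x, dt, u, habitat, d_habitat, k0=1.0, taxis=0.5):
        """One Lagrangian update: advection + habitat-dependent kinesis + taxis."""
        adv = u(x) * dt                                   # transport by currents
        kin = np.sqrt(2 * k0 * (1 - habitat(x)) * dt) * rng.normal(size=x.shape)
        tax = taxis * d_habitat(x) * dt                   # climb the habitat gradient
        return x + adv + kin + tax

    # Toy fields: weak uniform current, Gaussian habitat patch centred at x = 5
    u = lambda x: 0.1 + 0 * x
    habitat = lambda x: np.exp(-(x - 5.0) ** 2)
    d_habitat = lambda x: -2 * (x - 5.0) * habitat(x)

    x = rng.uniform(0, 10, size=10000)                    # initial school positions
    for _ in range(500):
        x = step(x, 0.01, u, habitat, d_habitat)
    print(x.mean(), x.std())  # individuals aggregate near the habitat maximum
    ```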

  13. Multi-Scale Distributed Representation for Deep Learning and its Application to b-Jet Tagging

    NASA Astrophysics Data System (ADS)

    Lee, Jason Sang Hun; Park, Inkyu; Park, Sangnam

    2018-06-01

    Recently, machine learning algorithms based on deep, layered artificial neural networks (DNNs) have been applied to a wide variety of high energy physics problems such as jet tagging or event classification. We explore a simple but effective preprocessing step which transforms each real-valued observational quantity or input feature into a binary number with a fixed number of digits. Each binary digit represents the quantity or magnitude at a different scale. We show that this approach significantly improves the performance of DNNs for some specific tasks without any further complication in feature engineering. We apply this multi-scale distributed binary representation to deep learning on b-jet tagging using daughter particles' momenta and vertex information.
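
    A possible reading of this preprocessing step (our sketch; the paper's exact quantization convention may differ): clip each feature to a range, quantize, and emit the fixed number of binary digits, one scale per digit.

    ```python
    import numpy as np

    def binary_encode(x, lo, hi, n_bits=8):
        """Map real values in [lo, hi] to n_bits binary digits (one scale per digit)."""
        x = np.clip(x, lo, hi)
        q = np.round((x - lo) / (hi - lo) * (2 ** n_bits - 1)).astype(np.int64)
        bits = (q[..., None] >> np.arange(n_bits - 1, -1, -1)) & 1
        return bits.astype(np.float32)  # shape (..., n_bits), ready as DNN input

    print(binary_encode(np.array([0.0, 37.5, 100.0]), 0.0, 100.0, n_bits=4))
    # rows encode 0, 37.5, 100 as 0000, 0110, 1111
    ```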

  14. Body frame close coupling wave packet approach to gas phase atom-rigid rotor inelastic collisions

    NASA Technical Reports Server (NTRS)

    Sun, Y.; Judson, R. S.; Kouri, D. J.

    1989-01-01

    The close coupling wave packet (CCWP) method is formulated in a body-fixed representation for atom-rigid rotor inelastic scattering. For J greater than j_max (where J is the total angular momentum and j is the rotational quantum number), the computational cost of propagating the coupled channel wave packets in the body frame is shown to scale approximately as N^(3/2), where N is the total number of channels. For large numbers of channels, this will be much more efficient than the space frame CCWP method previously developed, which scales approximately as N^2 under the same conditions.

  15. Universality of modular symmetries in two-dimensional magnetotransport

    NASA Astrophysics Data System (ADS)

    Olsen, K. S.; Limseth, H. S.; Lütken, C. A.

    2018-01-01

    We analyze experimental quantum Hall data from a wide range of different materials, including semiconducting heterojunctions, thin films, surface layers, graphene, mercury telluride, bismuth antimonide, and black phosphorus. The fact that these materials have little in common, except that charge transport is effectively two-dimensional, shows how robust and universal the quantum Hall phenomenon is. The scaling and fixed-point data we analyzed appear to show that magnetotransport in two dimensions is governed by a small number of universality classes that are classified by modular symmetries, which are infinite discrete symmetries not previously seen in nature. The Hall plateaux are (infrared) stable fixed points of the scaling flow, and quantum critical points (where the wave function is delocalized) are unstable fixed points of scaling. Modular symmetries are so rigid that they in some cases fix the global geometry of the scaling flow, and therefore predict the exact location of quantum critical points, as well as the shape of flow lines anywhere in the phase diagram. We show that most available experimental quantum Hall scaling data are in good agreement with these predictions.

  16. Polymer density functional theory approach based on scaling second-order direct correlation function.

    PubMed

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF) obtained by solving the polymer-RISM integral equation is scaled up or down by an equation of state for the bulk polymer; the resultant scaling second-order DCF is in better agreement with the corresponding simulation results than the unscaled second-order DCF. When the scaling second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead, i.e., the adjustable parameter is fixed at 0.5, the resultant parameter-free version of the scaling LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. This parameter-free version is employed to investigate the density profiles of a freely jointed tangent hard-sphere chain near a variable-sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present adjustable-parameter-free version lies in its combination with a recently proposed universal theoretical approach; in the resultant formalism, the contact theorem is still met by the adjustable parameter associated with that approach.

  17. Solution of effective Hamiltonian of impurity hopping between two sites in a metal

    NASA Astrophysics Data System (ADS)

    Ye, Jinwu

    1998-03-01

    We analyze in detail all the possible fixed points of the effective Hamiltonian of a non-magnetic impurity hopping between two sites in a metal obtained by Moustakas and Fisher (MF). We find a line of non-Fermi-liquid fixed points which continuously interpolates between the two-channel Kondo fixed point (2CK) and the one-channel, two-impurity Kondo (2IK) fixed point. There is one relevant direction with scaling dimension 1/2 and one leading irrelevant operator with dimension 3/2. There is also one marginal operator in the spin sector moving along this line. The additional non-Fermi-liquid fixed point found by MF has the same symmetry as the 2IK; it has two relevant directions with scaling dimension 1/2 and is therefore also unstable. The system is shown to flow to a line of Fermi-liquid fixed points which continuously interpolates between the non-interacting fixed point and the two-channel spin-flavor Kondo fixed point (2CSFK) discussed by the author previously. The effect of particle-hole symmetry breaking is discussed. The effective Hamiltonian in an external magnetic field is analyzed. The scaling functions for the physically measurable quantities are derived in the different regimes; their predictions for the experiments are given. Finally, the implications are given for a non-magnetic impurity hopping around three sites with triangular symmetry discussed by MF.

  18. Wind tunnel investigation of rotor lift and propulsive force at high speed: Data analysis

    NASA Technical Reports Server (NTRS)

    Mchugh, F.; Clark, R.; Soloman, M.

    1977-01-01

    The basic test data obtained during the lift-propulsive force limit wind tunnel test conducted on a scale model CH-47B rotor are analyzed. Included are the rotor control positions, blade loads and six components of rotor force and moment, corrected for hub tares. Performance and blade loads are presented as the rotor lift limit is approached at fixed levels of rotor propulsive force coefficients and rotor tip speeds. Performance and blade load trends are documented for fixed levels of rotor lift coefficient as propulsive force is increased to the maximum obtainable by the model rotor. Test data are also included that define the effect of stall proximity on rotor control power. The basic test data plots are presented in volumes 2 and 3.

  19. Heterogeneous environments shape invader impacts: integrating environmental, structural and functional effects by isoscapes and remote sensing.

    PubMed

    Hellmann, Christine; Große-Stoltenberg, André; Thiele, Jan; Oldeland, Jens; Werner, Christiane

    2017-06-23

    Spatial heterogeneity of ecosystems crucially influences plant performance, while in return plant feedbacks on their environment may increase heterogeneous patterns. This is of particular relevance for exotic plant invaders that transform native ecosystems, yet approaches integrating geospatial information of environmental heterogeneity and plant-plant interaction are lacking. Here, we combined remotely sensed information of site topography and vegetation cover with a functional tracer of the N cycle, δ15N. Based on the case study of the invasion of an N2-fixing acacia in a nutrient-poor dune ecosystem, we present the first model that can successfully predict (R^2 = 0.6) small-scale spatial variation of foliar δ15N in a non-fixing native species from observed geospatial data. Thereby, the generalized additive mixed model revealed modulating effects of heterogeneous environments on invader impacts. Hence, linking remote sensing techniques with tracers of biological processes will advance our understanding of the dynamics and functioning of spatially structured heterogeneous systems from small to large spatial scales.

  20. Scale-chiral symmetry, ω meson, and dense baryonic matter

    NASA Astrophysics Data System (ADS)

    Ma, Yong-Liang; Rho, Mannque

    2018-05-01

    It is shown that explicitly broken scale symmetry is essential for dense skyrmion matter in hidden local symmetry theory. Consistency with the vector manifestation fixed point for the hidden local symmetry of the lowest-lying vector mesons and the dilaton limit fixed point for scale symmetry in dense matter is found to require that the anomalous dimension |γ_G2| of the gluon field strength tensor squared (G^2), which represents the quantum trace anomaly, should satisfy 1.0 ≲ |γ_G2| ≲ 3.5. The magnitude of |γ_G2| estimated here will be useful for studying hadron and nuclear physics based on the scale-chiral effective theory. More significantly, the fact that the dilaton limit fixed point can be arrived at with γ_G2 ≠ 0 at some high density signals that scale symmetry can arise in dense medium as an "emergent" symmetry.

  1. Optimization of fixed-range trajectories for supersonic transport aircraft

    NASA Astrophysics Data System (ADS)

    Windhorst, Robert Dennis

    1999-11-01

    This thesis develops near-optimal guidance laws that generate minimum fuel, time, or direct operating cost fixed-range trajectories for supersonic transport aircraft. The approach uses singular perturbation techniques to time-scale decouple the equations of motion into three sets of dynamics, two of which are analyzed in the main body of this thesis and one of which is analyzed in the Appendix. The two-point boundary-value problems obtained by applying the maximum principle to the dynamic systems are solved using the method of matched asymptotic expansions. Finally, the two solutions are combined using the matching principle and an additive composition rule to form a uniformly valid approximation of the full fixed-range trajectory. The approach is used on two different time-scale formulations. The first holds weight constant, and the second allows weight and range dynamics to propagate on the same time-scale. Solutions for the first formulation are only carried out to zero order in the small parameter, while solutions for the second formulation are carried out to first order. Calculations for an HSCT design were made to illustrate the method. Results show that the minimum fuel trajectory consists of three segments: a minimum fuel energy-climb, a cruise-climb, and a minimum drag glide. The minimum time trajectory also has three segments: a maximum dynamic pressure ascent, a constant altitude cruise, and a maximum dynamic pressure glide. The minimum direct operating cost trajectory is an optimal combination of the two. For realistic costs of fuel and flight time, the minimum direct operating cost trajectory is very similar to the minimum fuel trajectory. Moreover, the HSCT has three locally optimal cruise speeds, with the globally optimum cruise point at the highest allowable speed, if range is sufficiently long. The final range of the trajectory determines which locally optimal speed is best. Ranges of 500 to 6,000 nautical miles, subsonic and supersonic mixed flight, and varying fuel efficiency cases are analyzed. Finally, the payload-range curve of the HSCT design is determined.

  2. Role of interstitial branching in the development of visual corticocortical connections: a time-lapse and fixed-tissue analysis.

    PubMed

    Ruthazer, Edward S; Bachleda, Amelia R; Olavarria, Jaime F

    2010-12-15

    We combined fixed-tissue and time-lapse analyses to investigate the axonal branching phenomena underlying the development of topographically organized ipsilateral projections from area 17 to area 18a in the rat. These complementary approaches allowed us to relate static, large-scale information provided by traditional fixed-tissue analysis to highly dynamic, local, small-scale branching phenomena observed with two-photon time-lapse microscopy in acute slices of visual cortex. Our fixed-tissue data revealed that labeled area 17 fibers invaded area 18a gray matter at topographically restricted sites, reaching superficial layers in significant numbers by postnatal day 6 (P6). Moreover, most parental axons gave rise to only one or occasionally a small number of closely spaced interstitial branches beneath 18a. Our time-lapse data showed that many filopodium-like branches emerged along parental axons in white matter or deep layers in area 18a. Most of these filopodial branches were transient, often disappearing after several minutes to hours of exploratory extension and retraction. These dynamic behaviors decreased significantly from P4, when the projection is first forming, through the second postnatal week, suggesting that the expression of, or sensitivity to, cortical cues promoting new branch addition in the white matter is developmentally down-regulated coincident with gray matter innervation. Together, these data demonstrate that the development of topographically organized corticocortical projections in rats involves extensive exploratory branching along parental axons and invasion of cortex by only a small number of interstitial branches, rather than the widespread innervation of superficial cortical layers by an initially exuberant population of branches. © 2010 Wiley-Liss, Inc.

  4. The four fixed points of scale invariant single field cosmological models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xue, BingKan, E-mail: bxue@princeton.edu

    2012-10-01

    We introduce a new set of flow parameters to describe the time dependence of the equation of state and the speed of sound in single field cosmological models. A scale invariant power spectrum is produced if these flow parameters satisfy specific dynamical equations. We analyze the flow of these parameters and find four types of fixed points that encompass all known single field models. Moreover, near each fixed point we uncover new models where the scale invariance of the power spectrum relies on having simultaneously time varying speed of sound and equation of state. We describe several distinctive new models and discuss constraints from strong coupling and superluminality.

  5. Atomic Approaches to Defect Thermochemistry

    DTIC Science & Technology

    1992-04-30

    [Abstract garbled in extraction; only fragments are recoverable. They refer to an estimate of 29 meV from the enthalpy of melting of Au, report section titles including "Studies to Map Vacancy Concentrations at a Fixed Time", "Studies of Electroluminescent Flat-Panel Display Devices", and "Defect Characterization", and mention the doping density n = ND - NA and a first experimental report by P. Mei et al. (Appl. Phys.).]

  6. Tensor-entanglement-filtering renormalization approach and symmetry-protected topological order

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu Zhengcheng; Wen Xiaogang

    2009-10-15

    We study the renormalization group flow of the Lagrangian for statistical and quantum systems by representing their path integral in terms of a tensor network. Using a tensor-entanglement-filtering renormalization approach that removes local entanglement and produces a coarse-grained lattice, we show that the resulting renormalization flow of the tensors in the tensor network has a nice fixed-point structure. The isolated fixed-point tensors T_inv plus the symmetry group G_sym of the tensors (i.e., the symmetry group of the Lagrangian) characterize various phases of the system. Such a characterization can describe both the symmetry breaking phases and topological phases, as illustrated by the two-dimensional (2D) statistical Ising model, the 2D statistical loop-gas model, and 1+1D quantum spin-1/2 and spin-1 models. In particular, using such a (G_sym, T_inv) characterization, we show that the Haldane phase for a spin-1 chain is a phase protected by the time-reversal, parity, and translation symmetries. Thus the Haldane phase is a symmetry-protected topological phase. The (G_sym, T_inv) characterization is more general than the characterizations based on the boundary spins and string order parameters. The tensor renormalization approach also allows us to study continuous phase transitions between symmetry breaking phases and/or topological phases. The scaling dimensions and the central charges for the critical points that describe those continuous phase transitions can be calculated from the fixed-point tensors at those critical points.
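
    The flavor of a fixed-point tensor under coarse-graining can be seen in a deliberately minimal 1-D analogue (not the 2-D tensor-entanglement-filtering scheme itself): repeatedly contracting and normalizing the transfer matrix of the 1-D Ising model drives it to a fixed-point tensor.

    ```python
    import numpy as np

    def transfer_matrix(beta):
        """1D Ising transfer matrix T[s, s'] = exp(beta * s * s'), s = +-1."""
        s = np.array([1.0, -1.0])
        return np.exp(beta * np.outer(s, s))

    T = transfer_matrix(0.5)
    for _ in range(20):
        T = T @ T                  # coarse-grain: contract two sites into one
        T = T / np.linalg.norm(T)  # normalization absorbs the free-energy factor
    print(T)  # converges to a rank-1 fixed-point tensor (dominant eigenvector)
    ```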

  7. Functional renormalization group approach to the Yang-Lee edge singularity

    DOE PAGES

    An, X.; Mesterházy, D.; Stephanov, M. A.

    2016-07-08

    Here, we determine the scaling properties of the Yang-Lee edge singularity as described by a one-component scalar field theory with imaginary cubic coupling, using the nonperturbative functional renormalization group in 3 ≤ d ≤ 6 Euclidean dimensions. We find very good agreement with high-temperature series data in d = 3 dimensions and compare our results to recent estimates of critical exponents obtained with the four-loop ϵ = 6 - d expansion and the conformal bootstrap. The relevance of operator insertions at the corresponding fixed point of the RG β functions is discussed and we estimate the error associated with O(∂^4) truncations of the scale-dependent effective action.

  9. Quantum-gravity predictions for the fine-structure constant

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Held, Aaron; Wetterich, Christof

    2018-07-01

    Asymptotically safe quantum fluctuations of gravity can uniquely determine the value of the gauge coupling for a large class of grand unified models. In turn, this makes the electromagnetic fine-structure constant calculable. The balance of gravity and matter fluctuations results in a fixed point for the running of the gauge coupling. It is approached as the momentum scale is lowered in the transplanckian regime, leading to a uniquely predicted value of the gauge coupling at the Planck scale. The precise value of the predicted fine-structure constant depends on the matter content of the grand unified model. It is proportional to the gravitational fluctuation effects for which computational uncertainties remain to be settled.
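
    Schematically, the mechanism is an infrared-attractive zero of β_g = -f g + b g^3, with f standing for the gravity contribution and b for the matter one; both coefficients below are invented for illustration, not taken from the paper. Lowering the scale within the transplanckian regime drives any starting value to g* = sqrt(f/b):

    ```python
    import numpy as np

    f, b = 0.1, 1.0          # toy gravity (f) and matter (b) coefficients
    g_star = np.sqrt(f / b)  # zero of beta(g) = -f*g + b*g**3

    def run_down(g, n_steps=4000, dt=-0.01):
        """Euler-integrate dg/dt = beta(g), t = ln(mu), from the UV downwards."""
        for _ in range(n_steps):
            g += dt * (-f * g + b * g ** 3)
        return g

    for g0 in (0.05, 0.2, 0.6):
        print(g0, "->", run_down(g0), "  (g* =", g_star, ")")
    # Different transplanckian initial values all flow to g* ~ 0.316 at low scales.
    ```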

  10. A New Experiment on Bengali Character Recognition

    NASA Astrophysics Data System (ADS)

    Barman, Sumana; Bhattacharyya, Debnath; Jeon, Seung-Whan; Kim, Tai-Hoon; Kim, Haeng-Kon

    This paper presents a view-based approach for a Bangla Optical Character Recognition (OCR) system, providing a reduced data set to the ANN classification engine compared with traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with a backpropagation artificial neural network. This is the first published account of a segmentation-free optical character recognition system for Bangla using a view-based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted to greyscale and then to binary images; these images are then scaled to fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed by extracting the characteristic points; in this case each vector is simply a series of 0s and 1s of fixed length. Finally, an artificial neural network is chosen for the training and classification process.
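
    A minimal version of the described preprocessing pipeline (using Pillow and NumPy; the grid size and threshold are illustrative choices, not the paper's):

    ```python
    import numpy as np
    from PIL import Image

    def character_features(path, size=(16, 16), threshold=128):
        """Greyscale -> binary -> fixed-size grid -> flat 0/1 feature vector."""
        img = Image.open(path).convert("L")          # greyscale
        img = img.resize(size)                       # scale to a fixed pixel area
        arr = np.asarray(img)
        binary = (arr < threshold).astype(np.uint8)  # dark pixels -> 1
        return binary.flatten()                      # fixed-length series of 0s and 1s

    # vec = character_features("bangla_char.png")    # hypothetical input to the ANN
    ```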

  11. Fairness in optimizing bus-crew scheduling process.

    PubMed

    Ma, Jihui; Song, Cuiying; Ceder, Avishai Avi; Liu, Tao; Guan, Wei

    2017-01-01

    This work proposes a model considering fairness in the problem of crew scheduling for bus drivers (CSP-BD) and a hybrid ant-colony optimization (HACO) algorithm to solve it. The main contributions of this work are the following: (a) a valid approach for cases with a special cost structure and constraints considering the fairness of working time and idle time; (b) an improved algorithm incorporating a Gamma heuristic function and selection rules. The relationships among the costs are examined using ten bus lines collected from the Beijing Public Transport Holdings (Group) Co., Ltd., one of the largest bus transit companies in the world. The results show that the unfairness cost is indirectly related to the common, fixed and extra costs, and that it approaches the common and fixed costs when its coefficient is twice the common cost coefficient. Furthermore, the longest solution time for the tested bus line, with 1108 pieces and 74 blocks, is less than 30 minutes. The results indicate that the HACO-based algorithm can be a feasible and efficient optimization technique for CSP-BD, especially for large-scale problems.

  12. Scattering Solar Thermal Concentrators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giebink, Noel C.

    2015-01-31

    This program set out to explore a scattering-based approach to concentrate sunlight with the aim of improving collector field reliability and of eliminating wind loading and gross mechanical movement through the use of a stationary collection optic. The approach is based on scattering sunlight from the focal point of a fixed collection optic into the confined modes of a sliding planar waveguide, where it is transported to stationary tubular heat transfer elements located at the edges. Optical design for the first stage of solar concentration, which entails focusing sunlight within a plane over a wide range of incidence angles (>120 degree full field of view) at fixed tilt, led to the development of a new, folded-path collection optic that dramatically outperforms the current state-of-the-art in scattering concentration. Rigorous optical simulation and experimental testing of this collection optic have validated its performance. In the course of this work, we also identified an opportunity for concentrating photovoltaics involving the use of high efficiency microcells made in collaboration with partners at the University of Illinois. This opportunity exploited the same collection optic design as used for the scattering solar thermal concentrator and was therefore pursued in parallel. This system was experimentally demonstrated to achieve >200x optical concentration with >70% optical efficiency over a full day by tracking with <1 cm of lateral movement at fixed latitude tilt. The entire scattering concentrator waveguide optical system has been simulated, tested, and assembled at small scale to verify ray tracing models. These models were subsequently used to predict the full system optical performance at larger, deployment scale ranging up to >1 meter aperture width. Simulations at aperture widths less than approximately 0.5 m with geometric gains of ~100x predict an overall optical efficiency in the range 60-70% for angles up to 50 degrees from normal. However, the concentrator optical efficiency was found to decrease significantly with increasing aperture width beyond 0.5 m due to parasitic waveguide out-coupling loss and low-level absorption that become dominant at larger scale. A heat transfer model was subsequently implemented to predict collector fluid heat gain and outlet temperature as a function of flow rate using the optical model as a flux input. It was found that the aperture width limitation imposed by the optical efficiency characteristics of the waveguide limits the absolute optical power delivered to the heat transfer element per unit length. As compared to state-of-the-art parabolic trough CSP system aperture widths approaching 5 m, this limitation leads to roughly an order-of-magnitude increase in heat transfer tube length to achieve the same heat transfer fluid outlet temperature. The conclusion of this work is that scattering solar thermal concentration cannot be implemented at the scale and efficiency required to compete with the performance of current parabolic trough CSP systems. Applied within the alternate context of CPV, however, the results of this work have likely opened up a transformative new path that enables quasi-static, high efficiency CPV to be implemented on rooftops in the form factor of traditional fixed-panel photovoltaics.

  13. Scaling fixed-field alternating gradient accelerators with a small orbit excursion.

    PubMed

    Machida, Shinji

    2009-10-16

    A novel scaling type of fixed-field alternating gradient (FFAG) accelerator is proposed that solves the major problems of conventional scaling and nonscaling types. This scaling FFAG accelerator can achieve a much smaller orbit excursion by taking a larger field index k. A triplet focusing structure makes it possible to set the operating point in the second stability region of Hill's equation with a reasonable sensitivity to various errors. The orbit excursion is about 5 times smaller than in a conventional scaling FFAG accelerator and the beam size growth due to typical errors is at most 10%.
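
    For context, the standard scaling-FFAG field law behind the quoted k-dependence (textbook form, not taken from this paper) is:

    ```latex
    B(r) = B_0 \left(\frac{r}{r_0}\right)^{k}
    \quad\Longrightarrow\quad
    p \propto B(r)\, r \propto r^{\,k+1}
    \quad\Longrightarrow\quad
    \frac{r}{r_0} = \left(\frac{p}{p_0}\right)^{1/(k+1)} ,
    ```

    so for a given momentum range the relative orbit excursion shrinks roughly as 1/(k+1), which is why a larger field index k compresses the orbits.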

  14. The role of optimality in characterizing CO2 seepage from geological carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortis, Andrea; Oldenburg, Curtis M.; Benson, Sally M.

    Storage of large amounts of carbon dioxide (CO2) in deep geological formations for greenhouse gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this work we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset, especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales: (1) the size of the region that needs to be monitored; (2) the footprint of the measurement approach; (3) the size of the main seepage zone; and (4) the size of the region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage nor existing evidence for seepage requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO2 storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems. Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO2 seepage without need for detailed understanding of natural system processes. Because of the local extrema in CO2 fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective, and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO2 seepage areas.

  15. The Fourier Imaging X-ray Spectrometer (FIXS) for the Argentinian, Scout-launched Satélite de Aplicaciones Científicas-1 (SAC-1)

    NASA Technical Reports Server (NTRS)

    Dennis, Brian R.; Crannell, Carol JO; Desai, Upendra D.; Orwig, Larry E.; Kiplinger, Alan L.; Schwartz, Richard A.; Hurford, Gordon J.; Emslie, A. Gordon; Machado, Marcos; Wood, Kent

    1988-01-01

    The Fourier Imaging X-ray Spectrometer (FIXS) is one of four instruments on SAC-1, the Argentinian satellite being proposed for launch by NASA on a Scout rocket in 1992/3. The FIXS is designed to provide solar flare images at X-ray energies between 5 and 35 keV. Observations will be made on arcsecond size scales and subsecond time scales of the processes that modify the electron spectrum and the thermal distribution in flaring magnetic structures.

  16. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long-standing objection to the use of GAs in arbitrarily difficult problems. To address it, a new approach was developed: results on a 30-bit, order-three deception problem were obtained using a new type of genetic algorithm called the messy genetic algorithm (mGA). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, covering both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  17. Fermion-induced quantum criticality with two length scales in Dirac systems

    NASA Astrophysics Data System (ADS)

    Torres, Emilio; Classen, Laura; Herbut, Igor F.; Scherer, Michael M.

    2018-03-01

    The quantum phase transition to a Z3-ordered Kekulé valence bond solid in two-dimensional Dirac semimetals is governed by a fermion-induced quantum critical point, which renders the putatively discontinuous transition continuous. We study the resulting universal critical behavior in terms of a functional RG approach, which gives access to the scaling behavior on the symmetry-broken side of the phase transition, for general dimensions and number of Dirac fermions. In particular, we investigate the emergence of the fermion-induced quantum critical point for spacetime dimensions 2 < d < 4.

  18. Is scale-invariance in gauge-Yukawa systems compatible with the graviton?

    NASA Astrophysics Data System (ADS)

    Christiansen, Nicolai; Eichhorn, Astrid; Held, Aaron

    2017-10-01

    We explore whether perturbative interacting fixed points in matter systems can persist under the impact of quantum gravity. We first focus on semisimple gauge theories and show that the leading order gravity contribution evaluated within the functional Renormalization Group framework preserves the perturbative fixed-point structure in these models discovered in [J. K. Esbensen, T. A. Ryttov, and F. Sannino, Phys. Rev. D 93, 045009 (2016), 10.1103/PhysRevD.93.045009]. We highlight that the quantum-gravity contribution alters the scaling dimension of the gauge coupling, such that the system exhibits an effective dimensional reduction. We secondly explore the effect of metric fluctuations on asymptotically safe gauge-Yukawa systems which feature an asymptotically safe fixed point [D. F. Litim and F. Sannino, J. High Energy Phys. 12 (2014) 178, 10.1007/JHEP12(2014)178]. The same effective dimensional reduction that takes effect in pure gauge theories also impacts gauge-Yukawa systems. There, it appears to lead to a split of the degenerate free fixed point into an interacting infrared attractive fixed point and a partially ultraviolet attractive free fixed point. The quantum-gravity induced infrared fixed point moves towards the asymptotically safe fixed point of the matter system, and annihilates it at a critical value of the gravity coupling. Even after that fixed-point annihilation, graviton effects leave behind new partially interacting fixed points for the matter sector.

  19. Structural and Practical Identifiability Issues of Immuno-Epidemiological Vector-Host Models with Application to Rift Valley Fever.

    PubMed

    Tuncer, Necibe; Gulbudak, Hayriye; Cannataro, Vincent L; Martcheva, Maia

    2016-09-01

    In this article, we discuss the structural and practical identifiability of a nested immuno-epidemiological model of arbovirus diseases, where host-vector transmission rate, host recovery, and disease-induced death rates are governed by the within-host immune system. We incorporate the newest ideas and the most up-to-date features of numerical methods to fit multi-scale models to multi-scale data. For an immunological model, we use Rift Valley Fever Virus (RVFV) time-series data obtained from livestock under laboratory experiments, and for an epidemiological model we incorporate a human compartment to the nested model and use the number of human RVFV cases reported by the CDC during the 2006-2007 Kenya outbreak. We show that the immunological model is not structurally identifiable for the measurements of time-series viremia concentrations in the host. Thus, we study the non-dimensionalized and scaled versions of the immunological model and prove that both are structurally globally identifiable. After fixing estimated parameter values for the immunological model derived from the scaled model, we develop a numerical method to fit observable RVFV epidemiological data to the nested model for the remaining parameter values of the multi-scale system. For the given (CDC) data set, Monte Carlo simulations indicate that only three parameters of the epidemiological model are practically identifiable when the immune model parameters are fixed. Alternatively, we fit the multi-scale data to the multi-scale model simultaneously. Monte Carlo simulations for the simultaneous fitting suggest that the parameters of the immunological model and the parameters of the immuno-epidemiological model are practically identifiable. We suggest that analytic approaches for studying the structural identifiability of nested models are a necessity, so that identifiable parameter combinations can be derived to reparameterize the nested model to obtain an identifiable one. This is a crucial step in developing multi-scale models which explain multi-scale data.
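
    The Monte Carlo check for practical identifiability mentioned above is, in outline: simulate many noisy data sets at the fitted parameters, refit each one, and inspect the spread of the estimates. The sketch below uses a toy exponential-decay model in place of the RVFV system.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)

    def model(t, a, k):
        return a * np.exp(-k * t)   # toy stand-in for the fitted dynamical model

    t = np.linspace(0, 10, 25)
    true = (2.0, 0.4)
    sigma = 0.05

    estimates = []
    for _ in range(500):
        y = model(t, *true) + sigma * rng.normal(size=t.size)  # synthetic data set
        popt, _ = curve_fit(model, t, y, p0=(1.0, 1.0))
        estimates.append(popt)

    est = np.array(estimates)
    print(est.std(axis=0) / est.mean(axis=0))
    # Small relative spread of refitted values -> practically identifiable parameters
    ```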

  20. Multiphase flow modelling of volcanic ash particle settling in water using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, C. T.; Collins, G. S.; Piggott, M. D.; Kramer, S. C.; Wilson, C. R. G.

    2013-02-01

    Small-scale experiments of volcanic ash particle settling in water have demonstrated that ash particles can either settle slowly and individually, or rapidly and collectively as a gravitationally unstable ash-laden plume. This has important implications for the emplacement of tephra deposits on the seabed. Numerical modelling has the potential to extend the results of laboratory experiments to larger scales and to explore the conditions under which plumes may form and persist, but many existing models are computationally restricted by the fixed mesh approaches that they employ. In contrast, this paper presents a new multiphase flow model that uses an adaptive unstructured mesh approach. As a simulation progresses, the mesh is optimized to focus numerical resolution in areas important to the dynamics and decrease it where it is not needed, thereby potentially reducing computational requirements. Model verification is performed using the method of manufactured solutions, which shows the correct solution convergence rates. Model validation and application consider 2-D simulations of plume formation in a water tank which replicate published laboratory experiments. The numerically predicted settling velocities for both individual particles and plumes, as well as the instability behaviour, agree well with experimental data and observations. Plume settling is clearly hindered by the presence of a salinity gradient, and its influence must therefore be taken into account when considering particles in bodies of saline water. Furthermore, individual particles settle in the laminar flow regime while plume settling is shown (by plume Reynolds numbers greater than unity) to be in the turbulent flow regime, which has a significant impact on entrainment and settling rates. Mesh adaptivity maintains solution accuracy while providing a substantial reduction in computational requirements when compared to the same simulation performed using a fixed mesh, highlighting the benefits of an adaptive unstructured mesh approach.
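    A minimal sketch of the method of manufactured solutions mentioned above, applied to a 1-D Poisson problem rather than the paper's multiphase equations (the manufactured solution and solver are illustrative assumptions): pick an exact solution, derive the forcing it implies, and confirm the discrete solver reproduces it at the expected order.

      import numpy as np

      def solve_poisson(n):
          # Solve -u'' = f on (0,1), u(0)=u(1)=0, with manufactured solution
          # u = sin(pi x), hence f = pi^2 sin(pi x).
          h = 1.0 / (n + 1)
          x = np.linspace(h, 1 - h, n)
          f = np.pi ** 2 * np.sin(np.pi * x)
          A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
               - np.diag(np.ones(n - 1), -1)) / h ** 2
          u = np.linalg.solve(A, f)
          return h, np.max(np.abs(u - np.sin(np.pi * x)))

      (h1, e1), (h2, e2) = solve_poisson(64), solve_poisson(128)
      # A second-order scheme should report ~2.0; deviation signals a coding error
      print(f"observed order of accuracy: {np.log(e1 / e2) / np.log(h1 / h2):.2f}")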

  1. Persistent and contemporaneous effects of job stressors on mental health: a study testing multiple analytic approaches across 13 waves of annually collected cohort data.

    PubMed

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Petrie, Dennis

    2016-11-01

    This study investigated the extent to which psychosocial job stressors had lasting effects on a scaled measure of mental health. We applied econometric approaches to a longitudinal cohort to: (1) control for unmeasured individual effects; (2) assess the role of prior (lagged) exposures to job stressors on mental health; and (3) assess the persistence of mental health. We used a panel study with 13 annual waves and applied fixed-effects, first-difference and fixed-effects Arellano-Bond models. The Short Form 36 (SF-36) Mental Health Component Summary score was the outcome variable and the key exposures included: job control, job demands, job insecurity and fairness of pay. Results from the Arellano-Bond models suggest that greater fairness of pay (β-coefficient 0.34, 95% CI 0.23 to 0.45), job control (β-coefficient 0.15, 95% CI 0.10 to 0.20) and job security (β-coefficient 0.37, 95% CI 0.32 to 0.42) were contemporaneously associated with better mental health. Similar results were found for the fixed-effects and first-difference models. The Arellano-Bond model also showed persistent effects of individual mental health, whereby individuals' previous reports of mental health were related to their reporting in subsequent waves. The estimated long-run impact of job demands on mental health increased after accounting for time-related dynamics, while the impacts of the other job stressor variables were more minimal. Our results showed that the majority of the effects of psychosocial job stressors on a scaled measure of mental health are contemporaneous, except for job demands, where accounting for the lagged dynamics was important. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
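    A hedged sketch of the within (fixed-effects) transformation underlying the models above, on synthetic data; the data-generating process and effect size are invented and the Arellano-Bond lag dynamics are omitted.

      import numpy as np

      rng = np.random.default_rng(1)
      n_people, n_waves = 500, 13
      person = np.repeat(np.arange(n_people), n_waves)
      alpha = rng.normal(0, 2, n_people)[person]            # unmeasured individual effects
      job_control = rng.normal(0, 1, person.size) + 0.5 * alpha   # exposure correlated with alpha
      mental_health = 50 + 0.15 * job_control + alpha + rng.normal(0, 1, person.size)

      def within(v):
          # Demean within person to sweep out the fixed effects
          means = np.bincount(person, v) / n_waves
          return v - means[person]

      x, y = within(job_control), within(mental_health)
      beta_fe = (x @ y) / (x @ x)
      beta_naive = np.polyfit(job_control, mental_health, 1)[0]
      print(f"naive OLS: {beta_naive:.3f}, fixed effects: {beta_fe:.3f} (true 0.15)")

    The naive pooled slope is biased upward by the person-level confounding, while the within estimator recovers the true coefficient.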

  2. a Comparison Between Two Ols-Based Approaches to Estimating Urban Multifractal Parameters

    NASA Astrophysics Data System (ADS)

    Huang, Lin-Shan; Chen, Yan-Guang

    Multifractal theory provides a new spatial analytical tool for urban studies, but many basic problems remain to be solved. Among various pending issues, the most significant is how to obtain proper multifractal dimension spectra. If an algorithm is improperly used, the parameter spectra will be abnormal. This paper is devoted to investigating two ordinary least squares (OLS)-based approaches for estimating urban multifractal parameters. Using empirical study and comparative analysis, we demonstrate how to utilize the adequate linear regression to calculate multifractal parameters. The OLS regression analysis has two different approaches: in one, the intercept is fixed at zero, and in the other, the intercept is unconstrained. The results of the comparative study show that the zero-intercept regression yields proper multifractal parameter spectra within a certain scale range of moment orders, while the common regression method often leads to abnormal multifractal parameter values. A conclusion can be reached that fixing the intercept at zero is the more advisable regression method for multifractal parameter estimation, and that the shapes of spectral curves and value ranges of fractal parameters can be employed to diagnose urban problems. This research should help scientists understand multifractal models and apply a more reasonable technique to multifractal parameter calculations.
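    The contrast between the two regressions can be sketched in a few lines; the synthetic log-log scaling data below (which satisfy the zero-intercept assumption by construction) are illustrative, not an urban data set.

      import numpy as np

      rng = np.random.default_rng(2)
      log_eps = np.log(2.0 ** -np.arange(1, 8))                 # box sizes
      log_N = 1.7 * log_eps + 0.02 * rng.standard_normal(7)     # true slope 1.7

      # Free intercept: regress on [x, 1]
      slope_free, intercept = np.linalg.lstsq(
          np.column_stack([log_eps, np.ones_like(log_eps)]), log_N, rcond=None)[0]
      # Intercept fixed at zero: regress on [x] only
      slope_zero = np.linalg.lstsq(log_eps[:, None], log_N, rcond=None)[0][0]
      print(f"free intercept: slope {slope_free:.3f} (intercept {intercept:.3f}); "
            f"zero intercept: slope {slope_zero:.3f}")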

  3. A phase field approach for multicellular aggregate fusion in biofabrication.

    PubMed

    Yang, Xiaofeng; Sun, Yi; Wang, Qi

    2013-07-01

    We present a modeling and computational approach to study the fusion of multicellular aggregates during tissue and organ fabrication, which forms the foundation for the scaffold-less biofabrication of tissues and organs known as bioprinting. The approach is based on the phase field method: multicellular aggregates are modeled as mixtures of multiphase complex fluids whose phase mixing or separation is governed by interphase force interactions, which mimic the cell-cell interactions within the aggregates and the intermediate-range interactions mediated by the surrounding hydrogel. The material transport in the mixture is dictated by hydrodynamics as well as forces due to the interphase interactions. For a multicellular aggregate system with a fixed number of cells and a fixed amount of hydrogel medium, the current model neglects the effects of cell differentiation, proliferation, and death (which can be readily included), and the interaction between different components is dictated by the interaction energies between cell and cell as well as between cell and medium particles, respectively. The modeling approach is applicable to transient simulations of the fusion of cellular aggregate systems at the time and length scales appropriate to biofabrication. Numerical experiments are presented to demonstrate fusion and cell sorting during the tissue and organ maturation processes in biofabrication.
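    A minimal phase-field sketch in the spirit of the model described above, using a simple Allen-Cahn relaxation for the order parameter only; the double-well potential, mobility, geometry, and the omission of hydrodynamics are all illustrative simplifications.

      import numpy as np

      n, dx, dt, eps, M = 128, 1.0, 0.1, 1.0, 1.0
      x = np.arange(n) * dx
      X, Y = np.meshgrid(x, x, indexing="ij")

      # Two overlapping circular "aggregates" (phi = +1) in medium (phi = -1)
      phi = -np.ones((n, n))
      for cx in (54.0, 74.0):
          phi[(X - cx) ** 2 + (Y - 64.0) ** 2 < 14.0 ** 2] = 1.0

      def laplacian(f):
          return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx ** 2

      # Explicit Allen-Cahn relaxation: interfaces smooth out and the two
      # overlapping aggregates round into a single domain (curvature-driven fusion)
      for _ in range(500):
          phi += dt * M * (eps ** 2 * laplacian(phi) - (phi ** 3 - phi))
      print(f"aggregate area fraction after relaxation: {np.mean(phi > 0):.3f}")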

  4. A Novel Riemannian Metric Based on Riemannian Structure and Scaling Information for Fixed Low-Rank Matrix Completion.

    PubMed

    Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit

    2017-05-01

    Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and the Riemannian metric is a crucial factor in obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric that simultaneously considers the Riemannian geometry structure and the scaling information, and that is smoothly varying and invariant along the equivalence class. The proposed metric makes an effective tradeoff between the Riemannian geometry structure and the scaling information. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemannian metric, we also design a Riemannian nonlinear conjugate gradient algorithm that can efficiently solve the fixed low-rank matrix completion problem. Experiments on fixed low-rank matrix completion, collaborative filtering, and image and video recovery illustrate that the proposed method is superior to the state-of-the-art methods in convergence efficiency and numerical performance.

  5. Efficient expansion of mesenchymal stromal cells in a disposable fixed bed culture system.

    PubMed

    Mizukami, Amanda; Orellana, Maristela D; Caruso, Sâmia R; de Lima Prata, Karen; Covas, Dimas T; Swiech, Kamilla

    2013-01-01

    The need for efficient and reliable technologies for clinical-scale expansion of mesenchymal stromal cells (MSC) has led to the use of disposable bioreactors and culture systems. Here, we evaluate the expansion of cord blood-derived MSC in a disposable fixed bed culture system. Starting from an initial cell density of 6.0 × 10^7 cells, after 7 days of culture it was possible to produce 4.2 (±0.8) × 10^8 cells, which represents a fold increase of 7.0 (±1.4). After enzymatic retrieval from the Fibra-Cel disks, the cells were able to maintain their potential for differentiation into adipocytes and osteocytes and were positive for many markers common to MSC (CD73, CD90, and CD105). The results obtained in this study demonstrate that MSC can be efficiently expanded in the culture system. This novel approach presents several advantages over the current expansion systems based on culture flasks or microcarrier-based spinner flasks, and represents a key element for MSC cellular therapy in a GMP-compliant clinical-scale production system. Copyright © 2013 American Institute of Chemical Engineers.

  6. Breakthrough behavior of granular ferric hydroxide (GFH) fixed-bed adsorption filters: modeling and experimental approaches.

    PubMed

    Sperlich, Alexander; Werner, Arne; Genz, Arne; Amy, Gary; Worch, Eckhard; Jekel, Martin

    2005-03-01

    Breakthrough curves (BTCs) for the adsorption of arsenate and salicylic acid onto granular ferric hydroxide (GFH) in fixed-bed adsorbers were experimentally determined and modeled using the homogeneous surface diffusion model (HSDM). The input parameters for the HSDM, the Freundlich isotherm constants and the mass transfer coefficients for film and surface diffusion, were experimentally determined. The BTC for salicylic acid revealed a shape typical for trace organic compound adsorption onto activated carbon, and model results agreed well with the experimental curves. Unlike salicylic acid, arsenate BTCs showed a non-ideal shape with a leveling off at c/c0 ≈ 0.6. Model results based on the experimentally derived parameters over-predicted the point of arsenic breakthrough for all simulated curves, lab-scale or full-scale, and were unable to capture the shape of the curve. The use of a much lower surface diffusion coefficient D_S for modeling led to an improved fit of the later stages of the BTC shape, pointing to a time-dependent D_S. The mechanism for this time dependence is still unknown. Surface precipitation was discussed as one possible removal mechanism for arsenate besides pure adsorption, interfering with the determination of the Freundlich constants and D_S. Rapid small-scale column tests (RSSCTs) proved to be a powerful experimental alternative to the modeling procedure for arsenic.
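    The Freundlich isotherm input mentioned above can be fitted from batch equilibrium data by log-log least squares, as in this sketch; the data points are invented for illustration.

      import numpy as np

      # Freundlich isotherm q = K * c**(1/n); log q = log K + (1/n) log c
      c = np.array([0.05, 0.1, 0.5, 1.0, 5.0])     # equilibrium concentration (mg/L)
      q = np.array([1.8, 2.4, 4.9, 6.2, 12.5])     # adsorbent loading (mg/g)
      slope, intercept = np.polyfit(np.log(c), np.log(q), 1)
      K, n = np.exp(intercept), 1.0 / slope
      print(f"Freundlich K = {K:.2f} (mg/g)/(mg/L)^(1/n), n = {n:.2f}")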

  7. Pharmacokinetic evaluation of avicularin using a model-based development approach.

    PubMed

    Buqui, Gabriela Amaral; Gouvea, Dayana Rubio; Sy, Sherwin K B; Voelkner, Alexander; Singh, Ravi S P; da Silva, Denise Brentan; Kimura, Elza; Derendorf, Hartmut; Lopes, Norberto Peporine; Diniz, Andrea

    2015-03-01

    The aim of this study was to use the pharmacokinetic information of avicularin in rats to project a dose for humans using allometric scaling. A highly sensitive and specific bioanalytical assay to determine avicularin concentrations in plasma was developed and validated using UPLC-MS/MS. The plasma protein binding of avicularin in rat plasma, determined by the ultrafiltration method, was 64%. The pharmacokinetics of avicularin in nine rats was studied following an intravenous bolus administration of 1 mg/kg and was found to be best described by a two-compartment model using a nonlinear mixed-effects modeling approach. The pharmacokinetic parameters were allometrically scaled by body weight and centered to the median rat weight of 0.23 kg, with the power coefficient fixed at 0.75 for clearance and 1 for volume parameters. Avicularin was rapidly eliminated from the systemic circulation within 1 h post-dose, and the avicularin pharmacokinetics were linear up to 5 mg/kg based on exposure comparison to literature data for a 5-mg/kg single dose in rats. Using allometric scaling and Monte Carlo simulation approaches, the rat doses of 1 and 5 mg/kg correspond to human equivalent doses of 30 and 150 mg, respectively, to achieve comparable plasma avicularin concentrations in humans. Georg Thieme Verlag KG Stuttgart · New York.
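    A back-of-envelope sketch of the allometric projection described above; the body weights and 0.75 exponent follow the stated convention, but the calculation ignores the paper's Monte Carlo variability and compartmental structure, so it need not reproduce the reported 30/150 mg doses.

      rat_wt, human_wt = 0.23, 70.0          # kg (the human weight is an assumption)
      rat_dose_mg = 1.0 * rat_wt             # 1 mg/kg bolus in the rat

      # Matching exposure AUC = dose / CL, with clearance CL ~ weight**0.75:
      human_dose_mg = rat_dose_mg * (human_wt / rat_wt) ** 0.75
      print(f"projected human equivalent dose ~ {human_dose_mg:.0f} mg")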

  8. Power-law weighted networks from local attachments

    NASA Astrophysics Data System (ADS)

    Moriano, P.; Finke, J.

    2012-07-01

    This letter introduces a mechanism for constructing, through a process of distributed decision-making, substrates for the study of collective dynamics on extended power-law weighted networks with both a desired scaling exponent and a fixed clustering coefficient. The analytical results show that the connectivity distribution converges to the scaling behavior often found in social and engineering systems. To illustrate the proposed framework, we generate network substrates that resemble steady-state properties of the empirical citation distributions of i) publications indexed by the Institute for Scientific Information from 1981 to 1997; ii) patents granted by the U.S. Patent and Trademark Office from 1975 to 1999; and iii) opinions written by the Supreme Court and the cases they cite from 1754 to 2002.
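    The letter's local-attachment mechanism is not reproduced here; as a rough stand-in, the sketch below uses networkx's Holme-Kim generator, which likewise grows graphs with a power-law degree tail and tunable clustering, and then crudely estimates the tail exponent.

      import networkx as nx
      import numpy as np

      G = nx.powerlaw_cluster_graph(n=10000, m=3, p=0.3, seed=42)
      deg = np.array([d for _, d in G.degree()])

      # Crude continuous (Hill-style) estimate of the degree-tail exponent
      k_min = 10
      tail = deg[deg >= k_min]
      gamma = 1 + tail.size / np.sum(np.log(tail / k_min))
      print(f"average clustering C = {nx.average_clustering(G):.3f}, "
            f"tail exponent ~ {gamma:.2f}")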

  9. Design and fabrication of a fixed-bed batch type pyrolysis reactor for pilot scale pyrolytic oil production in Bangladesh

    NASA Astrophysics Data System (ADS)

    Aziz, Mohammad Abdul; Al-khulaidi, Rami Ali; Rashid, MM; Islam, M. R.; Rashid, MAN

    2017-03-01

    In this research, a development and performance test of a fixed-bed batch-type pyrolysis reactor for pilot-scale pyrolysis oil production was successfully completed. The characteristics of the pyrolysis oil were compared to other experimental results. A solid horizontal condenser, a burner for furnace heating, and a reactor shield were designed. The pilot-scale pyrolytic oil production encountered numerous problems during the plant's operation. This fixed-bed batch-type pyrolysis method demonstrates an energy-recovery concept for solid waste tires, contributing to energy stability. From this experiment, the product yields (wt. %) were 49% liquid (pyrolytic oil), 38.3% char and 12.7% pyrolytic gas, with an operating time of 185 minutes.

  10. Sensor-driven area coverage for an autonomous fixed-wing unmanned aerial vehicle.

    PubMed

    Paull, Liam; Thibault, Carl; Nagaty, Amr; Seto, Mae; Li, Howard

    2014-09-01

    Area coverage with an onboard sensor is an important task for an unmanned aerial vehicle (UAV) with many applications. Autonomous fixed-wing UAVs are more appropriate for larger scale area surveying since they can cover ground more quickly. However, their non-holonomic dynamics and susceptibility to disturbances make sensor coverage a challenging task. Most previous approaches to area coverage planning are offline and assume that the UAV can follow the planned trajectory exactly. In this paper, this restriction is removed as the aircraft maintains a coverage map based on its actual pose trajectory and makes control decisions based on that map. The aircraft is able to plan paths in situ based on sensor data and an accurate model of the on-board camera used for coverage. An information theoretic approach is used that selects desired headings that maximize the expected information gain over the coverage map. In addition, the branch entropy concept previously developed for autonomous underwater vehicles is extended to UAVs and ensures that the vehicle is able to achieve its global coverage mission. The coverage map over the workspace uses the projective camera model and compares the expected area of the target on the ground and the actual area covered on the ground by each pixel in the image. The camera is mounted on a two-axis gimbal and can either be stabilized or optimized for maximal coverage. Hardware-in-the-loop simulation results and real hardware implementation on a fixed-wing UAV show the effectiveness of the approach. By including the already developed automatic takeoff and landing capabilities, we now have a fully automated and robust platform for performing aerial imagery surveys.
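    A hedged sketch of the information-theoretic heading selection described above: the UAV scores candidate headings by the coverage-map entropy that their footprint would observe. The grid, footprint shape, and candidate headings are illustrative assumptions, not the paper's camera model or branch-entropy extension.

      import numpy as np

      rng = np.random.default_rng(6)
      # Probability that each cell has already been covered (hypothetical map)
      p_cov = np.clip(rng.random((50, 50)) * 0.6, 0.01, 0.99)

      def entropy(p):
          return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

      pos = np.array([25.0, 25.0])
      best_gain, best_hdg = -1.0, None
      for hdg in np.deg2rad(np.arange(0, 360, 30)):
          # Cells in a short look-ahead strip along this heading serve as a
          # crude stand-in for the projected camera footprint
          ahead = pos + np.outer(np.arange(1, 8), [np.cos(hdg), np.sin(hdg)])
          r, c = ahead.astype(int).T % 50
          gain = entropy(p_cov[r, c]).sum()      # expected information gain
          if gain > best_gain:
              best_gain, best_hdg = gain, hdg
      print(f"selected heading: {np.rad2deg(best_hdg):.0f} deg")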

  11. Scalable problems and memory bounded speedup

    NASA Technical Reports Server (NTRS)

    Sun, Xian-He; Ni, Lionel M.

    1992-01-01

    In this paper three models of parallel speedup are studied. They are fixed-size speedup, fixed-time speedup and memory-bounded speedup. The latter two consider the relationship between speedup and problem scalability. Two sets of speedup formulations are derived for these three models. One set considers uneven workload allocation and communication overhead and gives more accurate estimates. The other set considers a simplified case and provides a clear picture of the impact of the sequential portion of an application on the possible performance gain from parallel processing. The simplified fixed-size speedup is Amdahl's law. The simplified fixed-time speedup is Gustafson's scaled speedup. The simplified memory-bounded speedup contains both Amdahl's law and Gustafson's scaled speedup as special cases. This study leads to a better understanding of parallel processing.
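    The three simplified models translate directly into code; below, f is the sequential fraction and G(n) the factor by which the parallel workload grows when memory scales with the number of processors (the Sun-Ni formulation).

      def amdahl(f, n):                  # fixed-size speedup
          return 1.0 / (f + (1 - f) / n)

      def gustafson(f, n):               # fixed-time (scaled) speedup
          return f + (1 - f) * n

      def memory_bounded(f, n, G):       # contains both laws as special cases
          return (f + (1 - f) * G(n)) / (f + (1 - f) * G(n) / n)

      f, n = 0.05, 64
      print(amdahl(f, n))                          # pessimistic: ~15.4
      print(gustafson(f, n))                       # optimistic: ~60.9
      print(memory_bounded(f, n, G=lambda n: 1))   # G(n)=1 reduces to Amdahl
      print(memory_bounded(f, n, G=lambda n: n))   # G(n)=n reduces to Gustafson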

  12. Instrumentation for localized superconducting cavity diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conway, Z. A.; Ge, M.; Iwashita, Y.

    2017-01-12

    Superconducting accelerator cavities are now routinely operated at levels approaching the theoretical limit of niobium. To achieve these operating levels more information than is available from the RF excitation signal is required to characterize and determine fixes for the sources of performance limitations. This information is obtained using diagnostic techniques which complement the analysis of the RF signal. In this paper we describe the operation and select results from three of these diagnostic techniques: the use of large scale thermometer arrays, second sound wave defect location and high precision cavity imaging with the Kyoto camera.

  13. Exact renormalization group equation for the Lifshitz critical point

    NASA Astrophysics Data System (ADS)

    Bervillier, C.

    2004-10-01

    An exact renormalization group equation (ERGE) accounting for anisotropic scaling is derived. The critical and tricritical Lifshitz points are then studied at leading order of the derivative expansion, which is shown to involve two differential equations. The resulting estimates of the Lifshitz critical exponents compare well with the O(ε) calculations. In the case of the Lifshitz tricritical point, it is shown that a marginally relevant coupling defies the perturbative approach, since it actually makes the fixed point referred to in previous O(ε) perturbative calculations ultimately unstable.

  14. On the phenomenology of extended Brans-Dicke gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lima, Nelson A.; Ferreira, Pedro G., E-mail: ndal@roe.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk

    We introduce a designer approach for extended Brans-Dicke gravity that allows us to obtain the evolution of the scalar field by fixing the Hubble parameter to that of a wCDM model. We obtain analytical approximations for ϕ as a function of the scale factor and use these to build expressions for the effective Newton's constant at the background and at the linear level, and for the slip between the perturbed Newtonian potentials. By doing so, we are able to explore their dependence on the fundamental parameters of the theory.

  15. The Approach to Equilibrium in a Reflex Triode

    DTIC Science & Technology

    1990-09-24

    …once to yield dT ± 2γT′ = …, where γ = d/A and the prime also denotes d/dε. The damping length scale is now just a parameter that can be obtained from… pressures near K2 which will figure in the gap shorting time, and the boundary condition evolution at the anode. Aside from the steady conduction… phase evolution of η to values near the singular point, one finds that (from an operational point of view in fixing β) all such auxiliary physics…

  16. Multidisciplinary approach to restoring anterior maxillary partial edentulous area using an IPS Empress 2 fixed partial denture: a clinical report.

    PubMed

    Dundar, Mine; Gungor, M Ali; Cal, Ebru

    2003-04-01

    Esthetics is a major concern during the restoration of anterior partial edentulous areas. All-ceramic fixed partial dentures may provide better esthetics and biocompatibility in the restoration of anterior teeth. This clinical report describes a multidisciplinary approach and treatment procedures with an IPS Empress 2 fixed partial denture to restore missing anterior teeth.

  17. Fixing a Reference Frame to a Moving and Deforming Continent

    NASA Astrophysics Data System (ADS)

    Blewitt, G.; Kreemer, C.; Hammond, W. C.

    2016-12-01

    The U.S. National Spatial Reference System will be modernized in 2022. A foundational component will be a geocentric reference frame fixed to the North America tectonic plate. Here we address challenges of fixing a reference frame to a moving and deforming continent. Scientific applications motivate that we fix the frame with a scale consistent with the SI system, an origin that coincides with the Earth system's center of mass, and with axes attached to the rigidly rotating interior of the North America plate. Realizing the scale and origin is now achieved to < 0.5 mm/yr by combining space-geodetic techniques (SLR, VLBI, GPS, and DORIS) in the global system, ITRS. To realize the no-net-rotation condition, the complexity of plate boundary deformation demands that we only select GPS stations far from plate boundaries. Another problem is that velocity uncertainties in models of glacial isostatic adjustment (GIA) are significant compared to uncertainties in observed velocities. GIA models generally agree that far-field horizontal velocities tend to be directed toward/away from Hudson Bay, depending on mantle viscosity, with uncertain sign and magnitude of velocity. Also in the far field, strain rates tend to be small beyond the peripheral bulge (near the US-Canada border). Thus the Earth's crust in the US east of the Rockies may appear to be rigid, even if this region moves relative to plate motion. This can affect Euler vector estimation, with implications (pros and cons) for scientific interpretation. Our previous approach [ref. 1] in defining the NA12 frame was to select a core set of 30 stations east of the Rockies and south of the U.S.-Canada border that satisfy strict criteria on position time series quality. The resulting horizontal velocities have an RMS of 0.3 mm/yr, quantifying a combination of plate rigidity and accuracy. However, this does not rule out possible common-mode motion arising from GIA. For the development of the new frame NA16, we consider approaches to this problem. We also apply new techniques including the MIDAS robust velocity estimator [ref. 2] and "GPS Imaging" of vertical motions and strain rates (Fig. 1), which together could assist in better defining "stable North America". [1] Blewitt et al. (2013), J. Geodyn. 72, 11-24, doi:10.1016/j.jog.2013.08.004. [2] Blewitt et al. (2016), JGR 121, doi:10.1002/2015JB012552.
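    A simplified sketch of a MIDAS-style velocity estimate [ref. 2]: the median of position differences over pairs separated by about one year resists outliers and seasonal cycles. The full estimator's skewness adjustment and uncertainty model are omitted, and the data here are synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.arange(0, 10, 1 / 365.25)                     # daily positions, 10 yr
      pos = 2.0 * t + 1.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)
      pos[rng.integers(0, t.size, 50)] += 20.0             # gross outliers

      sep = 365                                            # ~1-year pair separation
      slopes = (pos[sep:] - pos[:-sep]) / (t[sep:] - t[:-sep])
      print(f"MIDAS-style velocity: {np.median(slopes):.2f} (true 2.00 mm/yr)")

    The one-year separation makes the seasonal term nearly cancel pair by pair, and the median discards the outlier-contaminated slopes.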

  18. 14. Detail, typical approach span fixed bearing atop stone masonry ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Detail, typical approach span fixed bearing atop stone masonry pier, view to northwest, 210mm lens. - Southern Pacific Railroad Shasta Route, Bridge No. 210.52, Milepost 210.52, Tehama, Tehama County, CA

  19. Investigations at INRIM on a Pd-C cell manufactured by NPL

    NASA Astrophysics Data System (ADS)

    Battuello, M.; Florio, M.; Machin, G.

    2011-10-01

    One of a set of metal-carbon eutectic cells (a Pd-C cell, 1765 K) manufactured by NPL and used for a previous comparison of temperature scales with NIST has been investigated at INRIM. There it was implemented in two different furnaces, namely a single- and a three-zone, and measured with a standard radiation thermometer operating at 900 nm and 950 nm. Both ITS-90 and thermodynamic melting temperatures of the cell were determined by means of an extrapolation approach. The thermodynamic temperature differs by only -0.31 K from the NIST value whereas the ITS-90 temperature differs by only -0.46 K from the NPL value. The agreements, within the combined expanded uncertainties, are particularly significant, because of the different approach followed at INRIM, namely the extrapolation of multi-fixed-point scales (n = 3 and n = 4), as compared with a direct radiometric method at NIST and an ITS-90 realization traceable to the gold point at NPL.

  20. Cosmic-string-induced hot dark matter perturbations

    NASA Technical Reports Server (NTRS)

    Van Dalen, Anthony

    1990-01-01

    This paper investigates the evolution of initially relativistic matter, radiation, and baryons around cosmic string seed perturbations. A detailed analysis of the linear evolution of spherical perturbations in an expanding universe is carried out, and this formalism is used to study the evolution of perturbations around a sphere of uniform density and fixed radius, approximating a loop of cosmic string. It was found that, on scales less than a few megaparsecs, the results agree with the nonrelativistic calculations of previous authors. On greater scales, there is a deviation approaching a factor of 2-3 in the perturbation mass. It is shown that a scenario with cosmic strings, hot dark matter, and a Hubble constant greater than 75 km/sec per Mpc can generally produce structure on the observed mass scales and at the appropriate times: 1 + z ≈ 4 for galaxies and 1 + z ≈ 1.5 for Abell clusters.

  1. Large-area Soil Moisture Surveys Using a Cosmic-ray Rover: Approaches and Results from Australia

    NASA Astrophysics Data System (ADS)

    Hawdon, A. A.; McJannet, D. L.; Renzullo, L. J.; Baker, B.; Searle, R.

    2017-12-01

    Recent improvements in satellite instrumentation have increased the resolution and frequency of soil moisture observations, and this in turn has supported the development of higher resolution land surface process models. Calibration and validation of these products are restricted by the mismatch of scales between remotely sensed and contemporary ground-based observations. Although the cosmic-ray neutron soil moisture probe can provide estimates of soil moisture at a scale useful for calibration and validation purposes, it is spatially limited to a single, fixed location. This scaling issue has been addressed with the development of mobile soil moisture monitoring systems that utilize the cosmic-ray neutron method, typically referred to as a 'rover'. This manuscript describes a project designed to develop approaches for undertaking rover surveys to produce soil moisture estimates at scales comparable to satellite observations and land surface process models. A custom-designed, trailer-mounted rover was used to conduct repeat surveys at two scales in the Mallee region of Victoria, Australia. A broad-scale survey was conducted at 36 x 36 km, covering the area of a standard SMAP pixel, and an intensive-scale survey was conducted over a 10 x 10 km portion of the broad-scale survey, which is at a scale equivalent to that used for national water balance modelling. We will describe the design of the rover and the methods used for converting neutron counts into soil moisture, and discuss factors controlling soil moisture variability. We found that the intensive-scale rover surveys produced reliable soil moisture estimates at 1 km resolution and the broad-scale surveys at 9 km resolution. We conclude that these products are well suited for future analysis of satellite soil moisture retrievals and finer scale soil moisture models.

  2. Fixed-bed bioreactor system for the microbial solubilization of coal

    DOEpatents

    Scott, C.D.; Strandberg, G.W.

    1987-09-14

    A fixed-bed bioreactor system for the conversion of coal into microbially solubilized coal products. The fixed-bed bioreactor continuously or periodically receives coal and bio-reactants and provides for the large scale production of microbially solubilized coal products in an economical and efficient manner. An oxidation pretreatment process for rendering coal uniformly and more readily susceptible to microbial solubilization may be employed with the fixed-bed bioreactor. 1 fig., 1 tab.

  3. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    In this study, we introduced an alternative approach for the analysis of hydrological flow time series, using an adaptive sampling framework based on inter-amount times (IATs). The main difference from conventional flow time series is the rate at which low and high flows are sampled: the unit of analysis for IATs is a fixed flow amount, instead of a fixed time window. We analysed statistical distributions of flows and IATs across a wide range of sampling scales to investigate the sensitivity of statistical properties such as quantiles, variance, skewness, scaling parameters and flashiness indicators to the sampling scale. We did this based on streamflow time series for 17 (semi)urbanised basins in North Carolina, US, ranging from 13 km2 to 238 km2 in size. Results showed that adaptive sampling of flow time series based on inter-amounts leads to a more balanced representation of low flow and peak flow values in the statistical distribution. While conventional sampling gives a lot of weight to low flows, as these are most ubiquitous in flow time series, IAT sampling gives relatively more weight to high flow values, for which a given flow amount is accumulated in a shorter time. As a consequence, IAT sampling gives more information about the tail of the distribution associated with high flows, while conventional sampling gives relatively more information about low flow periods. We will present results of statistical analyses across a range of subdaily to seasonal scales and will highlight some interesting insights that can be derived from IAT statistics with respect to basin flashiness and the impact of urbanisation on hydrological response.
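    A short sketch of IAT sampling under the stated idea, with a synthetic flashy flow series and an arbitrary inter-amount unit: accumulate flow, interpolate the times at which successive fixed amounts are reached, and difference them.

      import numpy as np

      rng = np.random.default_rng(4)
      dt = 0.25                                            # hours between samples
      flow = np.maximum(rng.gamma(0.3, 2.0, 5000), 0.01)   # flashy synthetic flow
      t = np.arange(flow.size) * dt
      cum = np.cumsum(flow * dt)                           # cumulative flow amount

      amount = 10.0                                        # fixed inter-amount unit (assumed)
      targets = np.arange(amount, cum[-1], amount)
      crossings = np.interp(targets, cum, t)               # time each amount is completed
      iat = np.diff(crossings)
      print(f"{iat.size} IATs; median {np.median(iat):.2f} h, "
            f"min {iat.min():.2f} h (high flow), max {iat.max():.2f} h (low flow)")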

  4. A Comparative Experimental Study of Fixed Temperature and Fixed Heat Flux Boundary Conditions in Turbulent Thermal Convection

    NASA Astrophysics Data System (ADS)

    Huang, Shi-Di; Wang, Fei; Xi, Heng-Dong; Xia, Ke-Qing

    2014-11-01

    We report an experimental study of the influence of thermal boundary conditions in turbulent thermal convection. Two configurations were examined: one with fixed heat flux at the bottom boundary and fixed temperature at the top (HC cells); the other with fixed temperature at both boundaries (CC cells). It is found that the flow strength in the CC cells is on average 9% larger than that in the HC ones, which can be understood as a change in plume-emission ability under different boundary conditions. It is further found, rather surprisingly, that flow reversals of the large-scale circulation occur more frequently in the CC cell, despite a stronger large-scale flow and a more uniform temperature distribution over the boundaries. These findings provide new insights into turbulent thermal convection and should stimulate further studies, especially experimental ones. This work is supported by the Hong Kong Research Grants Council under Grant No. CUHK 403712.

  5. Fixed, low radiant exposure vs. incremental radiant exposure approach for diode laser hair reduction: a randomized, split axilla, comparative single-blinded trial.

    PubMed

    Pavlović, M D; Adamič, M; Nenadić, D

    2015-12-01

    Diode lasers are the most commonly used treatment modalities for unwanted hair reduction. Only a few controlled clinical trials but not a single randomized controlled trial (RCT) compared the impact of various laser parameters, especially radiant exposure, onto efficacy, tolerability and safety of laser hair reduction. To compare the safety, tolerability and mid-term efficacy of fixed, low and incremental radiant exposures of diode lasers (800 nm) for axillary hair removal, we conducted an intrapatient, left-to-right, patient- and assessor-blinded and controlled trial. Diode laser (800 nm) treatments were evaluated in 39 study participants (skin type II-III) with unwanted axillary hairs. Randomization and allocation to split axilla treatments were carried out by a web-based randomization tool. Six treatments were performed at 4- to 6-week intervals with study subjects blinded to the type of treatment. Final assessment of hair reduction was conducted 6 months after the last treatment by means of blinded 4-point clinical scale using photographs. The primary endpoint was reduction in hair growth, and secondary endpoints were patient-rated tolerability and satisfaction with the treatment, treatment-related pain and adverse effects. Excellent reduction in axillary hairs (≥ 76%) at 6-month follow-up visit after receiving fixed, low and incremental radiant exposure diode laser treatments was obtained in 59% and 67% of study participants respectively (Z value: 1.342, P = 0.180). Patients reported lower visual analogue scale (VAS) pain score on the fixed (4.26) than on the incremental radiant exposure side (5.64) (P < 0.0003). The only side-effect was mild and transient erythema. Subjects better tolerated the fixed, low radiant exposure protocol (P = 0.03). The majority of the study participants were satisfied with both treatments. Both low and incremental radiant exposures produced similar hair reduction and high and comparable patient satisfaction. However, low radiant exposure diode laser treatments were less painful and better tolerated. © 2015 European Academy of Dermatology and Venereology.

  6. Advances in Nonlinear Non-Scaling FFAGs

    NASA Astrophysics Data System (ADS)

    Johnstone, C.; Berz, M.; Makino, K.; Koscielniak, S.; Snopok, P.

    Accelerators are playing increasingly important roles in basic science, technology, and medicine. Ultra-high-intensity and high-energy (GeV) proton drivers are a critical technology for accelerator-driven sub-critical reactors (ADS) and many HEP programs (Muon Collider) but remain particularly challenging, encountering duty cycle and space-charge limits in the synchrotron and machine size concerns in the weaker-focusing cyclotrons; a 10-20 MW proton driver is not presently considered technically achievable with conventional re-circulating accelerators. One as-yet unexplored re-circulating accelerator, the Fixed-field Alternating Gradient or FFAG, is an attractive alternative to the other approaches to a high-power beam source. Its strong-focusing optics can mitigate space charge effects and achieve higher bunch charges than are possible in a cyclotron, and a recent innovation in design has coupled stable tunes with isochronous orbits, making the FFAG capable of fixed-frequency, CW acceleration, as in the classical cyclotron but beyond its energy reach, well into the relativistic regime. This new concept has been advanced in non-scaling nonlinear FFAGs using powerful new methodologies developed for FFAG accelerator design and simulation. The machine described here has the high average current advantage and duty cycle of the cyclotron (without using broadband RF frequencies) in combination with the strong focusing, smaller losses, and energy variability that are more typical of the synchrotron. The current industrial and medical standard is a cyclotron, but a competing CW FFAG could promote a shift in this baseline. This paper reports on these new advances in FFAG accelerator technology and presents advanced modeling tools for fixed-field accelerators unique to the code COSY INFINITY.

  7. A Parameterized Pattern-Error Objective for Large-Scale Phase-Only Array Pattern Design

    DTIC Science & Technology

    2016-03-21

    4.4 Example 3: Sector Beam with Nonuniform Amplitude … fixed uniform amplitude illumination; phase-only optimization can also find application to arrays with fixed but nonuniform tapers. Such fixed tapers … For arbitrary element locations, nonuniform FFT algorithms exist [43-45] that have the same asymptotic complexity as the conventional FFT, although the …

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steigies, C. T.; Barjatya, A.

    Langmuir probes are standard instruments for plasma density measurements on many sounding rockets. These probes can be operated in swept-bias as well as in fixed-bias modes. In swept-bias Langmuir probes, contamination effects are frequently visible as a hysteresis between consecutive up and down voltage ramps. This hysteresis, if not corrected, leads to poorly determined plasma densities and temperatures. With a properly chosen sweep function, the contamination parameters can be determined from the measurements and correct plasma parameters can then be determined. In this paper, we study the contamination effects on fixed-bias Langmuir probes, where no hysteresis-type effect is seen in the data. Even though the contamination is not evident from the measurements, it does affect the plasma density fluctuation spectrum as measured by the fixed-bias Langmuir probe. We model the contamination as a simple resistor-capacitor circuit between the probe surface and the plasma. We find that measurements of small-scale plasma fluctuations (meter to sub-meter scale) along a rocket trajectory are not affected, but the measured amplitude of large-scale plasma density variation (tens of meters or larger) is attenuated. From the model calculations, we determine the amplitude and cross-over frequency of the contamination effect on fixed-bias probes for different contamination parameters. The model results also show that a fixed-bias probe operating in the ion saturation region is affected less by contamination than a fixed-bias probe operating in the electron saturation region.
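    A hedged numerical sketch of the resistor-capacitor contamination model described above, treating the contaminant layer as a first-order high-pass filter between plasma and probe; the R and C values are illustrative assumptions.

      import numpy as np

      R, C = 1e6, 1e-8                 # ohm, farad (hypothetical contamination layer)
      f = np.logspace(-1, 4, 6)        # fluctuation frequencies along the trajectory, Hz
      wRC = 2 * np.pi * f * R * C
      gain = wRC / np.sqrt(1 + wRC ** 2)   # measured/true amplitude ratio

      f_cross = 1 / (2 * np.pi * R * C)
      print(f"cross-over frequency ~ {f_cross:.1f} Hz")
      for fi, gi in zip(f, gain):
          print(f"{fi:8.1f} Hz -> amplitude ratio {gi:.3f}")

    Fast (small-scale) fluctuations pass with ratio near 1, while slow (large-scale) variations are attenuated, consistent with the behavior described in the abstract.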

  9. Development and parallelization of a direct numerical simulation to study the formation and transport of nanoparticle clusters in a viscous fluid

    NASA Astrophysics Data System (ADS)

    Sloan, Gregory James

    The direct numerical simulation (DNS) offers the most accurate approach to modeling the behavior of a physical system, but carries an enormous computational cost. There exists a need for an accurate DNS to model the coupled solid-fluid system seen in targeted drug delivery (TDD), nanofluid thermal energy storage (TES), as well as other fields where experiments are necessary but experiment design may be costly. A parallel DNS can greatly reduce the large computation times required, while providing the same results and functionality as the serial counterpart. A D2Q9 lattice Boltzmann method approach was implemented to solve the fluid phase. The use of domain decomposition with message passing interface (MPI) parallelism resulted in an algorithm that exhibits super-linear scaling in testing, which may be attributed to the caching effect. Decreased performance on a per-node basis for a fixed number of processes confirms this observation. A multiscale approach was implemented to model the behavior of nanoparticles submerged in a viscous fluid, and used to examine the mechanisms that promote or inhibit clustering. Parallelization of this model using a master-worker algorithm with MPI gives less-than-linear speedup for a fixed number of particles and varying number of processes. This is due to the inherent inefficiency of the master-worker approach. Lastly, these separate simulations are combined, and two-way coupling is implemented between the solid and fluid phases.
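    A minimal single-phase D2Q9 lattice Boltzmann (BGK) sketch of the fluid-solver family described above (not the thesis code); the domain size, relaxation time, and shear-wave initial condition are illustrative assumptions.

      import numpy as np

      nx, ny, tau = 64, 64, 0.8
      c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
      w = np.array([4/9] + [1/9]*4 + [1/36]*4)

      def equilibrium(rho, u):
          cu = np.einsum('ai,xyi->xya', c, u)
          usq = np.sum(u**2, axis=-1)[..., None]
          return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

      rho = np.ones((nx, ny))
      u = np.zeros((nx, ny, 2))
      u[..., 0] = 0.05 * np.sin(2*np.pi*np.arange(ny)/ny)   # decaying shear wave
      f = equilibrium(rho, u)

      for _ in range(500):
          rho = f.sum(-1)                                    # density moment
          u = np.einsum('xya,ai->xyi', f, c) / rho[..., None]  # momentum moment
          f += -(f - equilibrium(rho, u)) / tau              # collide (BGK)
          for a in range(9):                                 # stream (periodic)
              f[..., a] = np.roll(f[..., a], c[a], axis=(0, 1))

      print(f"peak shear velocity after decay: {np.abs(u[..., 0]).max():.4f}")

    The shear wave decays at the rate set by the lattice viscosity (tau - 0.5)/3, a standard sanity check for this kind of solver.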

  10. High Agreement was Obtained Across Scores from Multiple Equated Scales for Social Anxiety Disorder using Item Response Theory.

    PubMed

    Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim

    2018-04-10

    There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem: equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed-parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement, with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales that measure social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings. Copyright © 2018. Published by Elsevier Inc.
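    A hedged sketch of the score-conversion step in the spirit of the approach above: given 2PL item parameters already calibrated to a common metric (the parameters below are hypothetical), a raw score on scale A is mapped through the test characteristic curves to an expected score on scale B (true-score equating).

      import numpy as np

      def tcc(theta, discrim, diff):
          # Test characteristic curve: expected raw score at ability theta (2PL)
          p = 1 / (1 + np.exp(-discrim * (theta[:, None] - diff)))
          return p.sum(axis=1)

      theta = np.linspace(-4, 4, 801)
      a_A, b_A = np.full(10, 1.2), np.linspace(-2.0, 2.0, 10)    # scale A items
      a_B, b_B = np.full(24, 0.9), np.linspace(-2.5, 2.5, 24)    # scale B items

      score_A, score_B = tcc(theta, a_A, b_A), tcc(theta, a_B, b_B)
      raw_A = np.arange(1, 10)
      theta_at_A = np.interp(raw_A, score_A, theta)   # invert A's monotone TCC
      equated_B = np.interp(theta_at_A, theta, score_B)
      for ra, eb in zip(raw_A, equated_B):
          print(f"scale A score {ra:2d} -> scale B score {eb:5.1f}")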

  11. Using High Spatial Resolution Satellite Imagery to Map Forest Burn Severity Across Spatial Scales in a Pine Barrens Ecosystem

    NASA Technical Reports Server (NTRS)

    Meng, Ran; Wu, Jin; Schwager, Kathy L.; Zhao, Feng; Dennison, Philip E.; Cook, Bruce D.; Brewster, Kristen; Green, Timothy M.; Serbin, Shawn P.

    2017-01-01

    As a primary disturbance agent, fire significantly influences local processes and services of forest ecosystems. Although a variety of remote-sensing-based approaches have been developed and applied to Landsat mission imagery to infer burn severity at 30 m spatial resolution, forest burn severity has seldom been assessed at fine spatial scales (less than or equal to 5 m) from very-high-resolution (VHR) data. We assessed a 432 ha forest fire that occurred in April 2012 on Long Island, New York, within the Pine Barrens region, a unique but imperiled fire-dependent ecosystem in the northeastern United States. The mapping of forest burn severity was explored here at fine spatial scales, for the first time using remotely sensed spectral indices and a set of Multiple Endmember Spectral Mixture Analysis (MESMA) fraction images from bi-temporal (pre- and post-fire event) WorldView-2 (WV-2) imagery at 2 m spatial resolution. We first evaluated our approach using 1 m by 1 m validation points at the sub-crown scale per severity class (i.e. unburned, low, moderate, and high severity) from the post-fire 0.10 m color aerial ortho-photos; then, we validated the burn severity mapping of geo-referenced dominant tree crowns (crown scale) and 15 m by 15 m fixed-area plots (inter-crown scale) with the post-fire 0.10 m aerial ortho-photos and measured crown information of twenty forest inventory plots. Our approach can accurately assess forest burn severity at the sub-crown (overall accuracy 84%, Kappa 0.77), crown (overall accuracy 82%, Kappa 0.76), and inter-crown scales (89% of the variation in estimated burn severity ratings, i.e. the Geo-Composite Burn Index (CBI)). This work highlights that forest burn severity mapping from VHR data can capture heterogeneous fire patterns at fine spatial scales over large spatial extents. This is important since most ecological processes associated with fire effects vary at the less than 30 m scale, and VHR approaches could significantly advance our ability to characterize fire effects on forest ecosystems.

  12. Using high spatial resolution satellite imagery to map forest burn severity across spatial scales in a Pine Barrens ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Ran; Wu, Jin; Schwager, Kathy L.

    As a primary disturbance agent, fire significantly influences local processes and services of forest ecosystems. Although a variety of remote-sensing-based approaches have been developed and applied to Landsat mission imagery to infer burn severity at 30 m spatial resolution, forest burn severity has seldom been assessed at fine spatial scales (≤ 5 m) from very-high-resolution (VHR) data. Here we assessed a 432 ha forest fire that occurred in April 2012 on Long Island, New York, within the Pine Barrens region, a unique but imperiled fire-dependent ecosystem in the northeastern United States. The mapping of forest burn severity was explored here at fine spatial scales, for the first time using remotely sensed spectral indices and a set of Multiple Endmember Spectral Mixture Analysis (MESMA) fraction images from bi-temporal (pre- and post-fire event) WorldView-2 (WV-2) imagery at 2 m spatial resolution. We first evaluated our approach using 1 m by 1 m validation points at the sub-crown scale per severity class (i.e. unburned, low, moderate, and high severity) from the post-fire 0.10 m color aerial ortho-photos; then, we validated the burn severity mapping of geo-referenced dominant tree crowns (crown scale) and 15 m by 15 m fixed-area plots (inter-crown scale) with the post-fire 0.10 m aerial ortho-photos and measured crown information of twenty forest inventory plots. Our approach can accurately assess forest burn severity at the sub-crown (overall accuracy 84%, Kappa 0.77), crown (overall accuracy 82%, Kappa 0.76), and inter-crown scales (89% of the variation in estimated burn severity ratings, i.e. the Geo-Composite Burn Index (CBI)). Lastly, this work highlights that forest burn severity mapping from VHR data can capture heterogeneous fire patterns at fine spatial scales over large spatial extents. This is important since most ecological processes associated with fire effects vary at the < 30 m scale, and VHR approaches could significantly advance our ability to characterize fire effects on forest ecosystems.

  13. Using high spatial resolution satellite imagery to map forest burn severity across spatial scales in a Pine Barrens ecosystem

    DOE PAGES

    Meng, Ran; Wu, Jin; Schwager, Kathy L.; ...

    2017-01-21

    As a primary disturbance agent, fire significantly influences local processes and services of forest ecosystems. Although a variety of remote-sensing-based approaches have been developed and applied to Landsat mission imagery to infer burn severity at 30 m spatial resolution, forest burn severity has seldom been assessed at fine spatial scales (≤ 5 m) from very-high-resolution (VHR) data. Here we assessed a 432 ha forest fire that occurred in April 2012 on Long Island, New York, within the Pine Barrens region, a unique but imperiled fire-dependent ecosystem in the northeastern United States. The mapping of forest burn severity was explored here at fine spatial scales, for the first time using remotely sensed spectral indices and a set of Multiple Endmember Spectral Mixture Analysis (MESMA) fraction images from bi-temporal (pre- and post-fire event) WorldView-2 (WV-2) imagery at 2 m spatial resolution. We first evaluated our approach using 1 m by 1 m validation points at the sub-crown scale per severity class (i.e. unburned, low, moderate, and high severity) from the post-fire 0.10 m color aerial ortho-photos; then, we validated the burn severity mapping of geo-referenced dominant tree crowns (crown scale) and 15 m by 15 m fixed-area plots (inter-crown scale) with the post-fire 0.10 m aerial ortho-photos and measured crown information of twenty forest inventory plots. Our approach can accurately assess forest burn severity at the sub-crown (overall accuracy 84%, Kappa 0.77), crown (overall accuracy 82%, Kappa 0.76), and inter-crown scales (89% of the variation in estimated burn severity ratings, i.e. the Geo-Composite Burn Index (CBI)). Lastly, this work highlights that forest burn severity mapping from VHR data can capture heterogeneous fire patterns at fine spatial scales over large spatial extents. This is important since most ecological processes associated with fire effects vary at the < 30 m scale, and VHR approaches could significantly advance our ability to characterize fire effects on forest ecosystems.

  14. Lightweight GPS-tags, one giant leap for wildlife tracking? An assessment approach.

    PubMed

    Recio, Mariano R; Mathieu, Renaud; Denys, Paul; Sirguey, Pascal; Seddon, Philip J

    2011-01-01

    Recent technological improvements have made possible the development of lightweight GPS-tagging devices suitable for tracking medium- to small-sized animals. However, current inferences concerning GPS performance are based on heavier designs, suitable only for large mammals. Lightweight GPS units are deployed close to the ground, on species that select micro-topographical features and show different behavioural patterns compared with larger mammal species. We assessed the effects of vegetation, topography, motion, and behaviour on the fix success rate of lightweight GPS collars across a range of natural environments, and at the scale of perception of feral cats (Felis catus). Units deployed at 20 cm above the ground in sites of varied vegetation and topography showed that tree (native forest) and shrub cover had the largest influence on fix success rate (89% on average), whereas tree cover, sky availability, number of satellites and horizontal dilution of precision (HDOP) were the main variables affecting location error (±39.5 m and ±27.6 m before and after filtering outlier fixes). Tests of HDOP- and satellite-number-based screening methods to remove inaccurate locations achieved only a small reduction in error and discarded many accurate locations. Mobility tests were used to simulate cats' motion, revealing slightly lower performance compared with the fixed sites. GPS collars deployed on 43 cats showed no difference in fix success rate by sex or season. Overall, fix success rate and location error values were within the range of previous tests carried out with collars designed for larger species. Lightweight GPS tags are a suitable method for tracking medium to small size species, hence increasing the range of opportunities for spatial ecology research. However, the effects of vegetation, topography and behaviour on location error and fix success rate need to be evaluated prior to deployment, for the particular study species and their habitats.

  15. A Composite Network Approach for Assessing Multi-Species Connectivity: An Application to Road Defragmentation Prioritisation

    PubMed Central

    Saura, Santiago; Rondinini, Carlo

    2016-01-01

    One of the biggest challenges in large-scale conservation is quantifying connectivity at broad geographic scales and for a large set of species. Because connectivity analyses can be computationally intensive, and the planning process quite complex when multiple taxa are involved, assessing connectivity at large spatial extents for many species often turns out to be intractable. This limitation means that assessments are often partial, focusing on a few key species only, or generic, considering a range of dispersal distances and a fixed set of areas to connect that are not directly linked to the actual spatial distribution or mobility of particular species. Using a graph theory framework, here we propose an approach to reduce computational effort and effectively consider large assemblages of species in obtaining multi-species connectivity priorities. We demonstrate the potential of the approach by identifying defragmentation priorities in the Italian road network, focusing on medium and large terrestrial mammals. We show that by combining probabilistic species graphs prior to conducting the network analysis (i) it is possible to analyse connectivity once for all species simultaneously, obtaining conservation or restoration priorities that apply to the entire species assemblage; and that (ii) those priorities are well aligned with the ones that would be obtained by aggregating the results of separate connectivity analyses for each of the individual species. This approach offers great opportunities to extend connectivity assessments to large assemblages of species and broad geographic scales. PMID:27768718
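    A toy sketch of the composite-graph idea: per-species link probabilities are combined into a single weighted graph before any network analysis, so priorities are computed once for the whole assemblage. The species, probabilities, and mean-combination rule are illustrative assumptions.

      import networkx as nx

      # Hypothetical per-species probabilities that a habitat link is usable
      species_links = {
          "wolf":     {("A", "B"): 0.9, ("B", "C"): 0.4, ("A", "C"): 0.2},
          "roe_deer": {("A", "B"): 0.6, ("B", "C"): 0.7, ("A", "C"): 0.1},
      }

      G = nx.Graph()
      for links in species_links.values():
          for (u, v), p in links.items():
              if G.has_edge(u, v):
                  G[u][v]["p"].append(p)
              else:
                  G.add_edge(u, v, p=[p])
      for u, v, d in G.edges(data=True):
          d["weight"] = sum(d["p"]) / len(d["p"])   # combine across species

      # Rank links once for the assemblage, e.g. as defragmentation priorities
      print(sorted(G.edges(data="weight"), key=lambda e: -e[2]))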

  16. Locally smeared operator product expansions in scalar field theory

    DOE PAGES

    Monahan, Christopher; Orginos, Kostas

    2015-04-01

    We propose a new locally smeared operator product expansion to decompose non-local operators in terms of a basis of smeared operators. The smeared operator product expansion formally connects nonperturbative matrix elements determined numerically using lattice field theory to matrix elements of non-local operators in the continuum. These nonperturbative matrix elements do not suffer from power-divergent mixing on the lattice, which significantly complicates calculations of quantities such as the moments of parton distribution functions, provided the smearing scale is kept fixed in the continuum limit. The presence of this smearing scale complicates the connection to the Wilson coefficients of the standard operator product expansion and requires the construction of a suitable formalism. We demonstrate the feasibility of our approach with examples in real scalar field theory.

  17. The Role of Optimality in Characterizing CO2 Seepage from Geological Carbon Sequestration Sites

    NASA Astrophysics Data System (ADS)

    Cortis, A.; Oldenburg, C. M.; Benson, S. M.

    2007-12-01

    Storage of large amounts of carbon dioxide (CO2) in deep geological formations for greenhouse-gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this talk we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset, especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of: (1) the region that needs to be monitored; (2) the footprint of the measurement approach; (3) the main seepage zone; and (4) the region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage, nor existing evidence for seepage, requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO2 storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems. Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO2 seepage, without the need for a detailed understanding of natural system processes. Because of the local extrema in CO2 fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective, and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO2 seepage areas. This work was carried out within the ZERT project, funded by the Assistant Secretary for Fossil Energy, Office of Sequestration, Hydrogen, and Clean Coal Fuels, National Energy Technology Laboratory, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  18. Recall of patterns using binary and gray-scale autoassociative morphological memories

    NASA Astrophysics Data System (ADS)

    Sussner, Peter

    2005-08-01

    Morphological associative memories (MAM's) belong to a class of artificial neural networks that perform the elementary operations of mathematical morphology, erosion and dilation, at each node; hence we speak of morphological neural networks. Alternatively, the total input effect on a morphological neuron can be expressed in terms of lattice-induced matrix operations in the mathematical theory of minimax algebra. Neural models of associative memories are usually concerned with the storage and retrieval of binary or bipolar patterns. Thus far, the emphasis in research on morphological associative memory systems has been on binary models, although a number of notable features of autoassociative morphological memories (AMM's), such as optimal absolute storage capacity and one-step convergence, have been shown to hold in the general, gray-scale setting. In previous papers, we gained valuable insight into the storage and recall phases of AMM's by analyzing their fixed points and basins of attraction. We have shown in particular that the fixed points of binary AMM's correspond to the lattice polynomials in the original patterns. This paper extends these results in the following ways. First, we provide an exact characterization of the fixed points of gray-scale AMM's in terms of combinations of the original patterns. Second, we present an exact expression for the fixed point attractor that represents the output of either a binary or a gray-scale AMM upon presentation of a certain input. The results of this paper are confirmed in several experiments using binary patterns and gray-scale images.
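
    To make the recall mechanism concrete, here is a minimal numpy sketch of a gray-scale autoassociative morphological memory of the kind analyzed above, assuming the standard min-type memory W_XX with max-plus (dilative) recall; the six-pixel patterns are invented for illustration.

        import numpy as np

        def amm_store(X):
            # Build the autoassociative morphological memory W_XX.
            # X has one stored pattern per column; W[i, j] = min over patterns of (x_i - x_j).
            return (X[:, None, :] - X[None, :, :]).min(axis=2)

        def amm_recall(W, x):
            # Max-plus (dilation) recall: y_i = max_j (W[i, j] + x_j).
            return (W + x[None, :]).max(axis=1)

        # Gray-scale toy example: three 6-pixel patterns with values in 0..255.
        X = np.array([[10, 200, 40, 90, 0, 255],
                      [30, 60, 220, 10, 120, 80],
                      [200, 20, 50, 180, 40, 10]], dtype=float).T
        W = amm_store(X)
        for k in range(X.shape[1]):
            assert np.allclose(amm_recall(W, X[:, k]), X[:, k])  # one-step perfect recall
        # W_XX tolerates erosive noise (values pushed down) better than dilative noise:
        noisy = X[:, 0] - np.array([0, 50, 0, 20, 0, 0])
        print(amm_recall(W, noisy))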

  19. Performance of intraclass correlation coefficient (ICC) as a reliability index under various distributions in scale reliability studies.

    PubMed

    Mehta, Shraddha; Bastero-Caballero, Rowena F; Sun, Yijun; Zhu, Ray; Murphy, Diane K; Hardas, Bhushan; Koch, Gary

    2018-04-29

    Many published scale validation studies determine inter-rater reliability using the intra-class correlation coefficient (ICC). However, the use of this statistic must consider its advantages, limitations, and applicability. This paper evaluates how the interaction of subject distribution, sample size, and level of rater disagreement affects the ICC, and provides an approach for obtaining relevant ICC estimates under suboptimal conditions. Simulation results suggest that for a fixed number of subjects, the ICC from the convex distribution is smaller than the ICC for the uniform distribution, which in turn is smaller than the ICC for the concave distribution. The variance component estimates also show that the dissimilarity of ICC among distributions is attributed to the study design (i.e., distribution of subjects) component of subject variability and not to the scale quality component of rater error variability. The dependency of ICC on the distribution of subjects makes it difficult to compare results across reliability studies. Hence, it is proposed that reliability studies should be designed using a uniform distribution of subjects because of the standardization it provides for representing objective disagreement. In the absence of a uniform distribution, a sampling method is proposed to reduce the non-uniformity. In addition, as expected, high levels of disagreement result in low ICC, and when the type of distribution is fixed, any increase in the number of subjects beyond a moderately large specification such as n = 80 does not have a major impact on ICC. Copyright © 2018 John Wiley & Sons, Ltd.
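
    A minimal sketch of the quantity under study, assuming a one-way random-effects design; the estimator icc_oneway, the simulated subject distributions, and the variance settings are illustrative stand-ins, not the paper's simulation protocol.

        import numpy as np

        def icc_oneway(scores):
            # ICC(1) from a subjects x raters matrix via one-way ANOVA mean squares.
            n, k = scores.shape
            row_means = scores.mean(axis=1)
            msb = k * ((row_means - scores.mean()) ** 2).sum() / (n - 1)
            msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        rng = np.random.default_rng(1)
        n, k = 80, 3                      # subjects, raters (n = 80 echoes the abstract)
        for label, subj in [("mid-peaked beta(5,5)", rng.beta(5, 5, n)),
                            ("uniform", rng.uniform(0, 1, n)),
                            ("U-shaped beta(.5,.5)", rng.beta(0.5, 0.5, n))]:
            scores = 10 * subj[:, None] + rng.normal(0, 1, (n, k))  # identical rater error
            print(f"{label:>22}: ICC = {icc_oneway(scores):.2f}")
        # The ICC changes with the subject distribution even though rater error is fixed,
        # which is the study-design effect the abstract describes.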

  20. Some consequences of shear on galactic dynamos with helicity fluxes

    NASA Astrophysics Data System (ADS)

    Zhou, Hongzhe; Blackman, Eric G.

    2017-08-01

    Galactic dynamo models sustained by supernova (SN) driven turbulence and differential rotation have revealed that the sustenance of large-scale fields requires a flux of small-scale magnetic helicity to be viable. Here we generalize a minimalist analytic version of such galactic dynamos to explore some heretofore unincluded contributions from shear on the total turbulent energy and turbulent correlation time, with the helicity fluxes maintained by either winds, diffusion or magnetic buoyancy. We construct an analytic framework for modelling the turbulent energy and correlation time as a function of SN rate and shear. We compare our prescription with previous approaches that include only rotation. The solutions depend separately on the rotation period and the eddy turnover time and not just on their ratio (the Rossby number). We consider models in which these two time-scales are allowed to be independent and also a case in which they are mutually dependent on radius when a radial-dependent SN rate model is invoked. For the case of a fixed rotation period (or a fixed radius), we show that the influence of shear is dramatic for low Rossby numbers, reducing the correlation time of the turbulence, which, in turn, strongly reduces the saturation value of the dynamo compared to the case when the shear is ignored. We also show that even in the absence of winds or diffusive fluxes, magnetic buoyancy may be able to sustain sufficient helicity fluxes to avoid quenching.

  1. Final Technical Report: Distributed Controls for High Penetrations of Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Neely, Jason C.; Rashkin, Lee J.

    2015-12-01

    The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., two-area, three-area, and four-area power systems). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data, as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderate-sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate its utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.

  2. Scaling in the vicinity of the four-state Potts fixed point

    NASA Astrophysics Data System (ADS)

    Blöte, H. W. J.; Guo, Wenan; Nightingale, M. P.

    2017-08-01

    We study a self-dual generalization of the Baxter-Wu model, employing results obtained by transfer matrix calculations of the magnetic scaling dimension and the free energy. While the pure critical Baxter-Wu model displays the critical behavior of the four-state Potts fixed point in two dimensions, in the sense that logarithmic corrections are absent, the introduction of different couplings in the up- and down-triangles moves the model away from this fixed point, so that logarithmic corrections appear. Real couplings move the model into the first-order range, away from the behavior displayed by the nearest-neighbor, four-state Potts model. We also use complex couplings, which move the model in the opposite direction, characterized by the same type of logarithmic corrections as present in the four-state Potts model. Our finite-size analysis confirms in detail the existing renormalization theory describing the immediate vicinity of the four-state Potts fixed point.

  3. Blending Multiple Nitrogen Dioxide Data Sources for Neighborhood Estimates of Long-Term Exposure for Health Research.

    PubMed

    Hanigan, Ivan C; Williamson, Grant J; Knibbs, Luke D; Horsley, Joshua; Rolfe, Margaret I; Cope, Martin; Barnett, Adrian G; Cowie, Christine T; Heyworth, Jane S; Serre, Marc L; Jalaludin, Bin; Morgan, Geoffrey G

    2017-11-07

    Exposure to traffic-related nitrogen dioxide (NO2) air pollution is associated with adverse health outcomes. Average pollutant concentrations from fixed monitoring sites are often used to estimate exposures for health studies; however, these can be imprecise owing to the difficulty and cost of spatial modeling at the resolution of neighborhoods (a scale of tens of meters) rather than at a coarse scale (around several kilometers). The objective of this study was to derive improved estimates of neighborhood NO2 concentrations by blending measurements with modeled predictions in Sydney, Australia (a low pollution environment). We implemented the Bayesian maximum entropy approach to blend data whose uncertainty was defined using informative priors. We compiled NO2 data from fixed-site monitors, chemical transport models, and satellite-based land use regression models to estimate neighborhood annual average NO2. The spatial model produced a posterior probability density function of estimated annual average concentrations that spanned an order of magnitude, from 3 to 35 ppb. Validation using independent data showed improvement, with a root mean squared error improvement of 6% compared with the land use regression model and 16% over the chemical transport model. These estimates will be used in studies of health effects and should minimize misclassification bias.
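
    The full Bayesian maximum entropy machinery is beyond a snippet, but a precision-weighted (inverse-variance) combination conveys the basic idea of blending data sources with stated uncertainties; the function blend and all numbers below are hypothetical, not values from the study.

        import numpy as np

        def blend(means, variances):
            # Precision-weighted combination of independent NO2 estimates for one
            # neighborhood. This is the Gaussian special case of data fusion; full
            # BME additionally handles non-Gaussian priors and spatial structure.
            w = 1.0 / np.asarray(variances)
            return np.sum(w * means) / np.sum(w), 1.0 / np.sum(w)

        # Hypothetical annual-average NO2 estimates (ppb) and variances for one location:
        sources = {"fixed-site monitor": (12.0, 1.0),    # precise but spatially sparse
                   "chemical transport model": (18.0, 16.0),
                   "satellite LUR": (15.0, 4.0)}
        m, v = blend([s[0] for s in sources.values()], [s[1] for s in sources.values()])
        print(f"blended estimate: {m:.1f} ppb (sd {v ** 0.5:.1f})")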

  4. Switching to aripiprazole in outpatients with schizophrenia experiencing insufficient efficacy and/or safety/tolerability issues with risperidone: a randomized, multicentre, open-label study.

    PubMed

    Ryckmans, V; Kahn, J P; Modell, S; Werner, C; McQuade, R D; Kerselaers, W; Lissens, J; Sanchez, R

    2009-05-01

    This study evaluated the safety/tolerability and effectiveness of aripiprazole titrated-dose versus fixed-dose switching strategies from risperidone in patients with schizophrenia experiencing insufficient efficacy and/or safety/tolerability issues. Patients were randomized to an aripiprazole titrated-dose (starting dose 5 mg/day) or fixed-dose (15 mg/day) switching strategy with risperidone down-tapering. The primary endpoint was the rate of discontinuation due to adverse events (AEs) during the 12-week study. Secondary endpoints included the positive and negative syndrome scale (PANSS), clinical global impressions - improvement of illness scale (CGI-I), preference of medication (POM), subjective well-being under neuroleptics (SWN-K) and GEOPTE (Grupo Español para la Optimización del Tratamiento de la Esquizofrenia) scales. Rates of discontinuation due to AEs were similar between titrated-dose and fixed-dose strategies (3.5% vs. 5.0%; p=0.448). Improvements in mean PANSS total scores were similar between aripiprazole titrated-dose and fixed-dose strategies (-14.8 vs. -17.2; LOCF), as were mean CGI-I scores (2.9 vs. 2.8; p=0.425; LOCF) and SWN-K scores (+8.6 vs. +10.3; OC, +7.8 vs. +9.8; LOCF). Switching can be effectively and safely achieved through a titrated-dose or fixed-dose switching strategy for aripiprazole, with down-titration of risperidone.

  5. From particle condensation to polymer aggregation

    NASA Astrophysics Data System (ADS)

    Janke, Wolfhard; Zierenberg, Johannes

    2018-01-01

    We draw an analogy between droplet formation in dilute particle and polymer systems. Our arguments are based on finite-size scaling results from studies ranging from a two-dimensional lattice gas to three-dimensional bead-spring polymers. To set the results in perspective, we compare with partly rigorous theoretical scaling laws for canonical condensation in a supersaturated gas at fixed temperature, and derive corresponding scaling predictions for an undercooled gas at fixed density. The latter allows one to efficiently employ parallel multicanonical simulations and to reach previously inaccessible scaling regimes. While the asymptotic scaling cannot be observed for the comparably small polymer system sizes, the polymer data demonstrate an intermediate scaling regime also observable for particle condensation. Altogether, our extensive results from computer simulations provide clear evidence for the close analogy between particle condensation and polymer aggregation in dilute systems.

  6. Inflation, quintessence, and the origin of mass

    NASA Astrophysics Data System (ADS)

    Wetterich, C.

    2015-08-01

    In a unified picture both inflation and present dynamical dark energy arise from the same scalar field. The history of the Universe describes a crossover from a scale invariant "past fixed point" where all particles are massless, to a "future fixed point" for which spontaneous breaking of the exact scale symmetry generates the particle masses. The cosmological solution can be extrapolated to the infinite past in physical time - the universe has no beginning. This is seen most easily in a frame where particle masses and the Planck mass are field-dependent and increase with time. In this "freeze frame" the Universe shrinks and heats up during radiation and matter domination. In the equivalent, but singular Einstein frame cosmic history finds the familiar big bang description. The vicinity of the past fixed point corresponds to inflation. It ends at a first stage of the crossover. A simple model with no more free parameters than ΛCDM predicts for the primordial fluctuations a relation between the tensor amplitude r and the spectral index n, r = 8.19 (1 - n) - 0.137. The crossover is completed by a second stage where the beyond-standard-model sector undergoes the transition to the future fixed point. The resulting increase of neutrino masses stops a cosmological scaling solution, relating the present dark energy density to the present neutrino mass. At present our simple model seems compatible with all observational tests. We discuss how the fixed points can be rooted within quantum gravity in a crossover between ultraviolet and infrared fixed points. Then quantum properties of gravity could be tested both by very early and late cosmology.

  7. Targeted traction of impacted teeth with C-tube miniplates.

    PubMed

    Chung, Kyu-Rhim; Kim, Yong; Ahn, Hyo-Won; Lee, Dongjoo; Yang, Dong-Min; Kim, Seong-Hun; Nelson, Gerald

    2014-09-01

    Orthodontic traction of impacted teeth has typically been performed using full fixed appliance as anchorage against the traction force. This conventional approach can be difficult to apply in the mixed dentition if the partial fixed appliance offers an insufficient anchor unit. In addition, full fixed appliance can induce unwanted movement of adjacent teeth. This clinical report presents 3 cases where impacted teeth were recovered in the mixed or transitional dentition with skeletal anchorage on the opposite arch without full fixed appliance. Instead, intermaxillary traction was used to bring the impacted teeth into position. With this approach, side effects on teeth and periodontal tissues adjacent to the impaction were minimized.

  8. Excess entropy scaling for the segmental and global dynamics of polyethylene melts.

    PubMed

    Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C

    2014-11-28

    The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e., the bond and the torsional relaxation times, and two global ones, i.e., the chain diffusion coefficient and the viscosity. The excess entropy is approximated either by a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the self-associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall onto a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length, which contradicts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.

  9. Network simulations of optical illusions

    NASA Astrophysics Data System (ADS)

    Shinbrot, Troy; Lazo, Miguel Vivar; Siu, Theo

    We examine a dynamical network model of visual processing that reproduces several aspects of a well-known optical illusion, including subtle dependencies on curvature and scale. The model uses a genetic algorithm to construct the percept of an image, and we show that this percept evolves dynamically so as to produce the illusions reported. We find that the perceived illusions are hardwired into the model architecture and we propose that this approach may serve as an archetype to distinguish behaviors that are due to nature (i.e. a fixed network architecture) from those subject to nurture (that can be plastically altered through learning).

  10. Prediction of biochar yield from cattle manure pyrolysis via least squares support vector machine intelligent approach.

    PubMed

    Cao, Hongliang; Xin, Ya; Yuan, Qiaoxia

    2016-02-01

    To predict the biochar yield from cattle manure pyrolysis conveniently, an intelligent modeling approach was introduced in this research. A traditional artificial neural network (ANN) model and a novel least squares support vector machine (LS-SVM) model were developed. For the identification and prediction evaluation of the models, a data set of 33 experimental observations was used, obtained using a laboratory-scale fixed bed reaction system. The results demonstrated that the intelligent modeling approach is convenient and effective for predicting the biochar yield. In particular, the novel LS-SVM model has a more satisfactory predictive performance, and its robustness is better than that of the traditional ANN model. The introduction and application of the LS-SVM modeling method provides a successful example and a good reference for modeling studies of the cattle manure pyrolysis process, and even of other similar processes. Copyright © 2015 Elsevier Ltd. All rights reserved.
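
    For readers unfamiliar with LS-SVM regression, the sketch below solves the standard LS-SVM linear system with an RBF kernel. The temperature-yield data are synthetic stand-ins, and gamma, s, and the preprocessing are assumptions, not the fitted model from the paper.

        import numpy as np

        def rbf(A, B, s):
            # Gaussian (RBF) kernel matrix between row-sample matrices A and B.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * s ** 2))

        def lssvm_fit(X, y, gamma=10.0, s=1.0):
            # LS-SVM regression: a single linear system instead of a QP.
            # Solves [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = 1.0
            A[1:, 0] = 1.0
            A[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
            b, alpha = sol[0], sol[1:]
            return lambda Xq: rbf(Xq, X, s) @ alpha + b

        # Hypothetical stand-in for the pyrolysis data: yield vs. temperature.
        rng = np.random.default_rng(2)
        T = rng.uniform(300, 700, (33, 1))                      # 33 runs, as in the study
        y = 55 - 0.05 * (T[:, 0] - 300) + rng.normal(0, 1, 33)  # synthetic biochar yield (%)
        predict = lssvm_fit((T - 500) / 200, y, gamma=50.0, s=0.5)
        print(predict((np.array([[400.0], [600.0]]) - 500) / 200))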

  11. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. Finally, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  12. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. Finally, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
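
    The dead-particle filtering idea can be mimicked in a few lines of numpy, purely to show the control flow; the toy absorption model, compact_every, and run_histories are invented for illustration, and the real kernels are of course GPU multigroup transport, not this.

        import numpy as np

        rng = np.random.default_rng(3)

        def run_histories(n, absorb_p=0.1, compact_every=4):
            # Toy history-based loop: each step a particle is absorbed with probability
            # absorb_p; periodically filter out dead particles so the working arrays
            # stay dense (on a GPU this is what keeps warps convergent).
            energy = np.ones(n)
            alive = np.ones(n, dtype=bool)
            step = 0
            while alive.any():
                energy[alive] *= rng.uniform(0.5, 1.0, alive.sum())  # 'transport kernel'
                alive[alive] = rng.random(alive.sum()) > absorb_p
                step += 1
                if step % compact_every == 0:        # occasional filtering, as in the paper
                    energy, alive = energy[alive], alive[alive]
            return step

        print("steps to exhaust 100k histories:", run_histories(100_000))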

  13. Global Optimization of Emergency Evacuation Assignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Lee; Yuan, Fang; Chin, Shih-Miao

    2006-01-01

    Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.
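
    The one-destination trick can be sketched with a virtual super-sink: connect every real shelter to one fictitious destination with zero-cost links and route to it. The toy network below (networkx) shows only the representation; the study itself solves a congestion-aware traffic-assignment problem, and all node names and travel times here are invented.

        import networkx as nx

        # Toy road network: travel times in minutes; S1/S2 are safe shelters.
        G = nx.DiGraph()
        G.add_weighted_edges_from([("A", "B", 5), ("B", "C", 7), ("A", "C", 15),
                                   ("B", "S1", 10), ("C", "S2", 4), ("A", "S2", 30)])
        # One-destination modification: connect every real shelter to a virtual
        # super-sink with zero cost, then route each origin to that single node.
        for shelter in ("S1", "S2"):
            G.add_edge(shelter, "SINK", weight=0)
        for origin in ("A", "B", "C"):
            path = nx.shortest_path(G, origin, "SINK", weight="weight")
            print(origin, "-> shelter", path[-2], "via", path[:-1])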

  14. Realism and Perspectivism: a Reevaluation of Rival Theories of Spatial Vision.

    NASA Astrophysics Data System (ADS)

    Thro, E. Broydrick

    1990-01-01

    My study reevaluates two theories of human space perception, a trigonometric surveying theory I call perspectivism and a "scene recognition" theory I call realism. Realists believe that retinal image geometry can supply no unambiguous information about an object's size and distance--and that, as a result, viewers can locate objects in space only by making discretionary interpretations based on familiar experience of object types. Perspectivists, in contrast, think viewers can disambiguate object sizes/distances on the basis of retinal image information alone. More specifically, they believe the eye responds to perspective image geometry with an automatic trigonometric calculation that not only fixes the directions and shapes, but also roughly fixes the sizes and distances of scene elements in space. Today this surveyor theory has been largely superseded by the realist approach, because most vision scientists believe retinal image geometry is ambiguous about the scale of space. However, I show that there is a considerable body of neglected evidence, both past and present, tending to call this scale ambiguity claim into question. I maintain that this evidence against scale ambiguity could hardly be more important, given its subversive implications for the scene recognition theory that is not only today's reigning approach to spatial vision, but also the foundation for computer scientists' efforts to create space-perceiving robots. If viewers were deemed to be capable of automatic surveying calculations, the discretionary scene recognition theory would lose its main justification. Clearly, it would be difficult for realists to maintain that we viewers rely on scene recognition for space perception in spite of our ability to survey. And in reality, as I show, the surveyor theory does a much better job of describing the everyday space we viewers actually see--a space featuring stable, unambiguous relationships among scene elements, and a single horizon and vanishing point for (meter-scale) receding objects. In addition, I argue, the surveyor theory raises fewer philosophical difficulties, because it is more in harmony with our everyday concepts of material objects, human agency and the self.

  15. APMP Scale Comparison with Three Radiation Thermometers and Six Fixed-Point Blackbodies

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Shimizu, Y.; Ishii, J.

    2015-08-01

    New Asia Pacific Metrology Programme (APMP) comparisons of radiation thermometry standards, APMP TS-11 and TS-12, have recently been initiated. These new APMP comparisons cover the temperature range from the freezing point of In to that of Cu. Three radiation thermometers with central wavelengths of 1.6 μm, 0.9 μm, and 0.65 μm are the transfer devices for the radiation thermometer scale comparison, conducted in the so-called star configuration. In parallel, a compact fixed-point blackbody furnace that houses six types of fixed-point cells, of In, Sn, Zn, Al, Ag, and Cu, is circulated, again in a star-type comparison, to substantiate fixed-point calibration capabilities. Twelve APMP national metrology institutes are taking part in this endeavor, in which the National Metrology Institute of Japan acts as the pilot. In this article, the comparison scheme is described with emphasis on the features of the transfer devices, i.e., the radiation thermometers and the fixed-point blackbodies. Results of preliminary evaluations of the performance and characteristics of these instruments, as well as the evaluation method of the comparison results, are presented.

  16. Comparison of Posterior Approach With Intramedullary Nailing Versus Lateral Transfibular Approach With Fixed-Angle Plating for Tibiotalocalcaneal Arthrodesis.

    PubMed

    Mulligan, Ryan P; Adams, Samuel B; Easley, Mark E; DeOrio, James K; Nunley, James A

    2017-12-01

    A variety of operative approaches and fixation techniques have been described for tibiotalocalcaneal (TTC) arthrodesis. The intramedullary (IM) nail and lateral, fixed-angle plating are commonly used because of ease of use and favorable biomechanical properties. A lateral, transfibular (LTF) approach allows for direct access to the tibiotalar and subtalar joints, but the posterior, Achilles tendon-splitting (PATS) approach offers a robust soft tissue envelope. The purpose of this study was to compare the results of TTC arthrodesis with either a PATS approach with IM nailing or an LTF approach with fixed-angle plating. A retrospective review was performed on all patients who underwent simultaneous TTC arthrodesis with a minimum of 1 year of clinical and radiographic follow-up. Patients were excluded if they underwent TTC arthrodesis through an approach other than PATS or LTF, or received fixation without an IM nail or fixed-angle plate. Primary outcomes examined were union rate, revisions, and complications. Thirty-eight patients underwent TTC arthrodesis with a PATS approach and IM nailing, and 28 with an LTF approach and lateral plating. The overall union rate was 71%: 76% (29 of 38 patients) for the PATS/IM nail group and 64% (18 of 28) for the LTF/plating group (P = .41). Symptomatic nonunion requiring revision arthrodesis occurred in 16% (6 of 38) of the PATS/IM nail group versus 7% (2 of 28) of the LTF/lateral plating group (P = .45). There were no significant differences in individual tibiotalar or subtalar union rates, superficial wound problems, infection, symptomatic hardware, stress fractures, or nerve irritations. Union, revision, and complication rates were similar for TTC arthrodesis performed with a PATS approach and IM nail compared with an LTF approach and fixed-angle plate in a complex patient population. Both techniques were adequate, especially when prior incisions, preexisting hardware, or deformity preclude other options. Level III, retrospective comparative study.

  17. Within-subject comparisons of implant-supported mandibular prostheses: psychometric evaluation.

    PubMed

    de Grandmont, P; Feine, J S; Taché, R; Boudrias, P; Donohue, W B; Tanguay, R; Lund, J P

    1994-05-01

    In a within-subject cross-over clinical trial, psychometric and functional measurements were taken while 15 completely edentulous subjects wore mandibular fixed prostheses and long-bar removable implant-supported prostheses. In this paper, the results of a psychometric assessment are presented. Eight subjects first received the fixed bridge and seven the removable type. After having worn a prosthesis for a minimum of two months, subjects responded to psychometric scales that measured their perceptions of various factors associated with prostheses. They also chewed test foods while masticatory activity was recorded. The prostheses were then changed and the procedures repeated. At the end of the study, patients were asked to choose the prosthesis that they wished to keep. Patients assigned significantly higher scores, on visual analogue scales, to both types of implant-supported prostheses than to their original conventional prostheses for all factors tested, including general satisfaction. However, no statistically significant differences between the two implant-supported prostheses were detected except for the difficulty of chewing carrot, apple, and sausage. For these foods, the fixed prostheses were rated higher. Subjects' responses to category scales were consistent with their responses to the visual analogue scales. These results suggest that, although patients find the fixed bridge to be significantly better for chewing harder foods, there is no difference in their general satisfaction with the two types of prostheses.

  18. Multiscale/multiresolution landslides susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Grozavu, Adrian; Cătălin Stanga, Iulian; Valeriu Patriche, Cristian; Toader Juravle, Doru

    2014-05-01

    Within the European strategies, landslides are considered an important threat that requires detailed studies to identify areas where these processes could occur in the future and to design scientific and technical plans for landslide risk mitigation. To this end, assessing and mapping landslide susceptibility is an important preliminary step. Generally, landslide susceptibility at small scale (for large regions) can be assessed through a qualitative approach (expert judgement) based on a few variables, while studies at medium and large scales require a quantitative approach (e.g., multivariate statistics), a larger set of variables and, necessarily, a landslide inventory. Obviously, the results vary more or less from one scale to another, depending on the available input data, but also on the applied methodology. Since it is almost impossible to have a complete landslide inventory for large regions (e.g., at the continental level), it is very important to verify the compatibility and validity of results obtained at different scales, identifying the differences and fixing the inherent errors. This paper aims at assessing and mapping landslide susceptibility at the regional level through a multiscale, multiresolution approach, from small scale and low resolution to large scale and high resolution of data and results, comparing the compatibility of the results. While the former could be used for studies at the European and national levels, the latter allow validation of the results, including through field surveys. The test area, namely the Barlad Plateau (more than 9000 sq. km), is located in Eastern Romania, covering a region where both the natural environment and the human factor create a causal context that favors these processes. The landslide predictors were initially derived from various databases available at the pan-European level and were progressively completed and/or enhanced along with the scale and resolution: the topography (from SRTM at 90 meters to digital elevation models based on topographical maps, 1:25,000 and 1:5,000), the lithology (from geological maps, 1:200,000), land cover and land use (from CLC 2006 to maps derived from orthorectified aerial images, 0.5 meters resolution), rainfall (from Worldclim and ECAD to our own data), and seismicity (the seismic zonation of Romania). The landslide inventory was created as polygonal data based on aerial images (resolution 0.5 meters), with the information considered at the county level (NUTS 3) and, eventually, at the communal level (LAU2). The methodological framework is based on logistic regression as a quantitative method and the analytic hierarchy process as a semi-qualitative method, each applied once identically for all scales and once recalibrated for each scale and resolution (from 1:1,000,000 at one km pixel resolution to 1:25,000 at ten meters resolution). The predictive performance of the two models was assessed using the ROC (Receiver Operating Characteristic) curve and the AUC (Area Under Curve) parameter, and the results indicate a good correspondence between the susceptibility estimated for the test samples (0.855-0.890) and for the validation samples (0.830-0.865). Finally, the results were compared in pairs in order to fix the errors at small scale and low resolution and to optimize the methodology for landslide susceptibility mapping over large areas.
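
    As a schematic of the quantitative branch of this workflow, the sketch below fits a logistic-regression susceptibility model and scores it with the AUC (scikit-learn); the predictors, coefficients, and simulated inventory are stand-ins, not the Barlad Plateau data.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n = 5000
        # Hypothetical stand-ins for mapped predictors at grid cells:
        X = np.column_stack([rng.uniform(0, 35, n),       # slope (degrees)
                             rng.integers(0, 4, n),       # lithology class
                             rng.uniform(400, 800, n)])   # annual rainfall (mm)
        logit = -6 + 0.15 * X[:, 0] + 0.5 * (X[:, 1] == 2) + 0.004 * X[:, 2]
        y = rng.random(n) < 1 / (1 + np.exp(-logit))      # landslide presence/absence

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"validation AUC: {auc:.3f}")               # the paper reports 0.830-0.865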

  19. Setting the renormalization scale in pQCD: Comparisons of the principle of maximum conformality with the sequential extended Brodsky-Lepage-Mackenzie approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Hong -Hao; Wu, Xing -Gang; Ma, Yang

    A key problem in making precise perturbative QCD (pQCD) predictions is how to set the renormalization scale of the running coupling unambiguously at each finite order. The elimination of the uncertainty in setting the renormalization scale in pQCD will greatly increase the precision of collider tests of the Standard Model and the sensitivity to new phenomena. Renormalization group invariance requires that predictions for observables must also be independent of the choice of the renormalization scheme. The well-known Brodsky-Lepage-Mackenzie (BLM) approach cannot be easily extended beyond next-to-next-to-leading order of pQCD. Several suggestions have been proposed to extend the BLM approach to all orders. In this paper we discuss two distinct methods. One is based on the “Principle of Maximum Conformality” (PMC), which provides a systematic all-orders method to eliminate the scale and scheme ambiguities of pQCD. The PMC extends the BLM procedure to all orders using renormalization group methods; as an outcome, it significantly improves the pQCD convergence by eliminating renormalon divergences. An alternative method is the “sequential extended BLM” (seBLM) approach, which has been primarily designed to improve the convergence of pQCD series. The seBLM, as originally proposed, introduces auxiliary fields and follows the pattern of the β0-expansion to fix the renormalization scale. However, the seBLM requires a recomputation of pQCD amplitudes including the auxiliary fields; due to the limited availability of calculations using these auxiliary fields, the seBLM has only been applied to a few processes at low orders. In order to avoid the complications of adding extra fields, we propose a modified version of seBLM which allows us to apply this method to higher orders. As a result, we then perform detailed numerical comparisons of the two alternative scale-setting approaches by investigating their predictions for the annihilation cross section ratio R e+e– at four-loop order in pQCD.

  20. Knot probability of polygons subjected to a force: a Monte Carlo study

    NASA Astrophysics Data System (ADS)

    Janse van Rensburg, E. J.; Orlandini, E.; Tesi, M. C.; Whittington, S. G.

    2008-01-01

    We use Monte Carlo methods to study the knot probability of lattice polygons on the cubic lattice in the presence of an external force f. The force is coupled to the span of the polygons along a lattice direction, say the z-direction. If the force is negative, polygons are squeezed (the compressive regime), while positive forces tend to stretch the polygons along the z-direction (the tensile regime). For sufficiently large positive forces we verify that the Pincus scaling law in the force-extension curve holds. At a fixed number of edges n, the knot probability is a decreasing function of the force. For a fixed force the knot probability approaches unity as 1 - exp(-α0(f)n + o(n)), where α0(f) is positive and a decreasing function of f. We also examine the average of the absolute value of the writhe, and we verify the square root growth law (known for f = 0) for all values of f.
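
    The growth law quoted above can be turned into an estimator for α0(f) by fitting Monte Carlo knot probabilities; the p_hat values below are fabricated to be consistent with a constant α0 and are for illustration only.

        import numpy as np
        from scipy.optimize import curve_fit

        # Knotting probability model from the abstract: P_knot(n) = 1 - exp(-alpha0 * n).
        def p_knot(n, alpha0):
            return 1.0 - np.exp(-alpha0 * n)

        # Hypothetical Monte Carlo estimates of the knot probability at fixed force f:
        n_edges = np.array([200, 400, 800, 1600, 3200])
        p_hat = np.array([0.08, 0.15, 0.28, 0.48, 0.73])   # illustrative values only
        (alpha0,), _ = curve_fit(p_knot, n_edges, p_hat, p0=[1e-4])
        print(f"alpha0 estimate: {alpha0:.2e} per edge")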

  1. Preemptive spatial competition under a reproduction-mortality constraint.

    PubMed

    Allstadt, Andrew; Caraco, Thomas; Korniss, G

    2009-06-21

    Spatially structured ecological interactions can shape selection pressures experienced by a population's different phenotypes. We study spatial competition between phenotypes subject to antagonistic pleiotropy between reproductive effort and mortality rate. The constraint we invoke reflects a previous life-history analysis; the implied dependence indicates that although propagation and mortality rates both vary, their ratio is fixed. We develop a stochastic invasion approximation predicting that phenotypes with higher propagation rates will invade an empty environment (no biotic resistance) faster, despite their higher mortality rate. However, once population density approaches demographic equilibrium, phenotypes with lower mortality are favored, despite their lower propagation rate. We conducted a set of pairwise invasion analyses by simulating an individual-based model of preemptive competition. In each case, the phenotype with the lowest mortality rate and (via antagonistic pleiotropy) the lowest propagation rate qualified as evolutionarily stable among strategies simulated. This result, for a fixed propagation to mortality ratio, suggests that a selective response to spatial competition can extend the time scale of the population's dynamics, which in turn decelerates phenotypic evolution.

  2. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
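
    For the Poisson/log-link special case mentioned above, the latent-to-data-scale conversions have closed forms, sketched below from the standard GLMM relations; the parameter values are arbitrary, and the general-link numeric integration is what the QGglmm package automates.

        import numpy as np

        def poisson_log_h2(mu, var_a, var_e):
            # Closed-form latent-to-observed conversion for a Poisson GLMM with log link.
            var_lat = var_a + var_e
            mean_obs = np.exp(mu + var_lat / 2)                         # data-scale mean
            var_obs = mean_obs + mean_obs ** 2 * (np.exp(var_lat) - 1)  # adds Poisson noise
            var_a_obs = mean_obs ** 2 * var_a          # psi^2 * V_A, with psi = E[exp(l)]
            return mean_obs, var_a_obs / var_obs

        mean_obs, h2_obs = poisson_log_h2(mu=0.5, var_a=0.3, var_e=0.2)
        print(f"data-scale mean {mean_obs:.2f}, observed-scale h2 {h2_obs:.2f}")
        # The latent-scale heritability would be 0.3 / 0.5 = 0.6; the data scale shrinks it.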

  3. Cellular convection in a chamber with a warm surface raft

    NASA Astrophysics Data System (ADS)

    Whitehead, J. A.; Shea, Erin; Behn, Mark D.

    2011-10-01

    We calculate velocity and temperature fields for Rayleigh-Benard convection in a chamber with a warm raft that floats along the top surface, for Rayleigh numbers up to Ra = 20 000. Two-dimensional, infinite Prandtl number, Boussinesq approximation equations are numerically advanced in time from a motionless state in a chamber of length L' and depth D'. We consider cases with an insulated raft and a raft of fixed temperature. Either oscillatory or stationary flow exists. In the case with an insulated raft over a fluid, only three parameters govern the system: the Rayleigh number (Ra), the scaled chamber length (L = L'/D'), and the scaled raft width (W). For W = 0 and L = 1, linear theory shows that the marginal state without a raft is at a Rayleigh number of 2^3 π^4 = 779.3, but we find that for the smallest W (determined by numerical grid size) the raft approaches the center monotonically in time for Ra < 790, while oscillatory raft motion develops in the range between Ra = 790 and Ra = 871. For larger raft widths, there is a range of W that produces raft oscillation at each Ra up to 20 000. Rafts in longer cavities (L = 2 and 4) have almost no oscillatory behavior. With a raft of temperature set to different values of Tr rather than insulating, a fixed Rayleigh number Ra = 20 000, a square chamber (L = 1), fixed raft width, and internal heat generation, there are two ranges of oscillating flow.

  4. Wind-Tunnel Evaluation of the Effect of Blade Nonstructural Mass Distribution on Helicopter Fixed-System Loads

    NASA Technical Reports Server (NTRS)

    Wilbur, Matthew L.; Yeager, William T., Jr.; Singleton, Jeffrey D.; Mirick, Paul H.; Wilkie, W. Keats

    1998-01-01

    This report provides data obtained during a wind-tunnel test conducted to investigate parametrically the effect of blade nonstructural mass on helicopter fixed-system vibratory loads. The data were obtained with aeroelastically scaled model rotor blades that allowed for the addition of concentrated nonstructural masses at multiple locations along the blade radius. Testing was conducted for advance ratios ranging from 0.10 to 0.35 for 10 blade-mass configurations. Three thrust levels were obtained at representative full-scale shaft angles for each blade-mass configuration. This report provides the fixed-system forces and moments measured during testing. The comprehensive database obtained is well-suited for use in correlation and development of advanced rotorcraft analyses.

  5. Decentralized control of large-scale systems: Fixed modes, sensitivity and parametric robustness. Ph.D. Thesis - Universite Paul Sabatier, 1985

    NASA Technical Reports Server (NTRS)

    Tarras, A.

    1987-01-01

    The problem of stabilization/pole placement under structural constraints of large scale linear systems is discussed. The existence of a solution to this problem is expressed in terms of fixed modes. The aim is to provide a bibliographic survey of the available results concerning the fixed modes (characterization, elimination, control structure selection to avoid them, control design in their absence) and to present the author's contribution to this problem which can be summarized by the use of the mode sensitivity concept to detect or to avoid them, the use of vibrational control to stabilize them, and the addition of parametric robustness considerations to design an optimal decentralized robust control.

  6. Fixism and conservation science.

    PubMed

    Robert, Alexandre; Fontaine, Colin; Veron, Simon; Monnet, Anne-Christine; Legrand, Marine; Clavel, Joanne; Chantepie, Stéphane; Couvet, Denis; Ducarme, Frédéric; Fontaine, Benoît; Jiguet, Frédéric; le Viol, Isabelle; Rolland, Jonathan; Sarrazin, François; Teplitsky, Céline; Mouchet, Maud

    2017-08-01

    The field of biodiversity conservation has recently been criticized as relying on a fixist view of the living world in which existing species constitute at the same time targets of conservation efforts and static states of reference, which is in apparent disagreement with evolutionary dynamics. We reviewed the prominent role of species as conservation units and the common benchmark approach to conservation that aims to use past biodiversity as a reference to conserve current biodiversity. We found that the species approach is justified by the discrepancy between the time scales of macroevolution and human influence and that biodiversity benchmarks are based on reference processes rather than fixed reference states. Overall, we argue that the ethical and theoretical frameworks underlying conservation research are based on macroevolutionary processes, such as extinction dynamics. Current species, phylogenetic, community, and functional conservation approaches constitute short-term responses to short-term human effects on these reference processes, and these approaches are consistent with evolutionary principles. © 2016 Society for Conservation Biology.

  7. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    PubMed

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks, to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selective enhancement, in the visual feedback display, of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the time scales in the learning and transfer of isometric force output frequency structures, consistent with 1/f process models of the time scales of motor output variability.
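
    A minimal sketch of the feedback manipulation, assuming the enhancement is a band-limited gain applied to the displayed signal via FFT masking; the sample rate, gain, and toy force trace are assumptions, not the experimental display parameters.

        import numpy as np

        def scale_band(signal, fs, f_lo, f_hi, gain):
            # Amplify one frequency band of the displayed error signal via FFT masking.
            spec = np.fft.rfft(signal)
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            band = (freqs >= f_lo) & (freqs < f_hi)
            spec[band] *= gain
            return np.fft.irfft(spec, n=len(signal))

        fs = 100.0                                  # display/sample rate (Hz), assumed
        t = np.arange(0, 10, 1 / fs)
        force_error = np.cumsum(np.random.default_rng(5).standard_normal(t.size))
        displayed = scale_band(force_error, fs, 4.0, 8.0, gain=4.0)  # boost 4-8 Hz band
        print(displayed.shape)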

  8. DNA Sequences from Formalin-Fixed Nematodes: Integrating Molecular and Morphological Approaches to Taxonomy

    PubMed Central

    Thomas, W. Kelley; Vida, J. T.; Frisse, Linda M.; Mundo, Manuel; Baldwin, James G.

    1997-01-01

    To effectively integrate DNA sequence analysis and classical nematode taxonomy, we must be able to obtain DNA sequences from formalin-fixed specimens. Microdissected sections of nematodes were removed from specimens fixed in formalin, using standard protocols and without destroying morphological features. The fixed sections provided sufficient template for multiple polymerase chain reaction-based DNA sequence analyses. PMID:19274156

  9. Time-dependent real space RG on the spin-1/2 XXZ chain

    NASA Astrophysics Data System (ADS)

    Mason, Peter; Zagoskin, Alexandre; Betouras, Joseph

    In order to measure the spread of information in a system of interacting fermions with nearest-neighbour couplings and strong bond disorder, one could utilise a dynamical real space renormalisation group (RG) approach on the spin-1/2 XXZ chain. Under such a procedure, a many-body localised state is established as an infinite randomness fixed point and the entropy scales with time as log(log(t)). One interesting further question that results from such a study is the case when the Hamiltonian explicitly depends on time. Here we answer this question by considering a dynamical renormalisation group treatment on the strongly disordered random spin-1/2 XXZ chain where the couplings are time-dependent and chosen to reflect a (slow) evolution of the governing Hamiltonian. Under the condition that the renormalisation process occurs at fixed time, a set of coupled second order, nonlinear PDE's can be written down in terms of the random distributions of the bonds and fields. Solution of these flow equations at the relevant critical fixed points leads us to establish the dynamics of the flow as we sweep through the quantum critical point of the Hamiltonian. We will present these critical flows as well as discussing the issues of duality, entropy and many-body localisation.

  10. A 1/10 Scale Model Test of a Fixed Chute Mixer-Ejector Nozzle in Unsuppressed Mode. Part 1; Test Overview

    NASA Technical Reports Server (NTRS)

    Wolter, John D.

    2007-01-01

    This paper discusses a test of a nozzle concept for a high-speed commercial aircraft. While a great deal of effort has been expended to understand the noise-suppressed, take-off performance of mixer-ejector nozzles, little has been done to assess their performance in unsuppressed mode at other flight conditions. To address this, a 1/10th scale model mixer-ejector nozzle in unsuppressed mode was tested at conditions representing transonic acceleration, supersonic cruise, subsonic cruise, and approach. Various configurations were tested to understand the effects of acoustic liners and several geometric parameters, such as throat area, expansion ratio, and nozzle length, on nozzle performance. Thrust, flow, and internal pressures were measured. A statistical model of the peak thrust coefficient results is presented and discussed.

  11. Optimal output fast feedback in two-time scale control of flexible arms

    NASA Technical Reports Server (NTRS)

    Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.

    1986-01-01

    Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time scale approach. A model following control can be designed for the reduced order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables which model the deflections of the arm along the trajectory can be sensed through strain gage measurements. For full state feedback design the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one link flexible arm prototype in the laboratory.

  12. Rubbertown NGEM Demonstration Project Planning meetings ...

    EPA Pesticide Factsheets

    From the shared perspective of industrial facilities, workers, regulators, and communities, cost-effective detection and assessment of the onset of significant fugitive leaks or process issues is a mutually beneficial concept. If emissions that require mitigation can be detected and fixed quickly, benefits such as safer working environments, cost savings through reduced product loss, lower air shed pollutant impacts, and improved transparency and community relations can be realized. Under its next generation emission measurement (NGEM) program, EPA’s Office of Research and Development (ORD), National Risk Management Research Laboratory (NRMRL), is working collaboratively with industry, instrument/information companies, state and local agencies, communities, and academic groups to explore new technical approaches for non-point source detection and mitigation. Techniques such as mobile and fixed point sensors and passive samplers employed on various spatial scales are being explored. With collaboration of the project team, including EPA R4, the Louisville Metro Air Pollution Control District (LMAPCD), industrial facilities, and contractors to the EPA, a select subset of these NGEM approaches will be demonstrated in this project as per the quality assurance project plan. From April 17-20, 2017, E. Thoma will travel to Louisville KY to work with the Louisville Metro Air Pollution Control District (LMAPCD) and other parties for planning activities related to the

  13. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
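
    The target quantity can be estimated with a simple plug-in rule, sketched below with a percentile bootstrap for context; this is not the empirical-likelihood interval of the paper, and the biomarker scores are simulated.

        import numpy as np

        def sens_at_spec(y, scores, spec=0.90):
            # Plug-in sensitivity at the cut-off that yields the target specificity:
            # the cut-off is the spec-quantile of the non-diseased scores.
            cut = np.quantile(scores[y == 0], spec)
            return (scores[y == 1] > cut).mean()

        rng = np.random.default_rng(6)
        y = np.r_[np.zeros(300, int), np.ones(200, int)]
        scores = np.r_[rng.normal(0, 1, 300), rng.normal(1.5, 1, 200)]  # toy biomarker
        est = sens_at_spec(y, scores)
        boot = [sens_at_spec(y[i], scores[i])
                for i in (rng.integers(0, len(y), len(y)) for _ in range(2000))]
        print(f"sensitivity at 90% specificity: {est:.2f}, "
              f"bootstrap 95% CI ({np.quantile(boot, .025):.2f}, "
              f"{np.quantile(boot, .975):.2f})")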

  14. Multifractality of stock markets based on cumulative distribution function and multiscale multifractal analysis

    NASA Astrophysics Data System (ADS)

    Lin, Aijing; Shang, Pengjian

    2016-04-01

    Considering the diverse application of multifractal techniques in natural scientific disciplines, this work underscores the versatility of the multiscale multifractal detrended fluctuation analysis (MMA) method for investigating artificial and real-world data sets. A modified MMA method based on the cumulative distribution function is proposed with the objective of quantifying the scaling exponent and multifractality of nonstationary time series. It is demonstrated that our approach can provide a more stable and faithful description of multifractal properties over a comprehensive range of scales, rather than at a fixed window length and slide length. Our analyses based on the CDF-MMA method reveal significant differences in the multifractal characteristics of the temporal dynamics of the US and Chinese stock markets, suggesting that these two stock markets might be regulated by very different mechanisms. The CDF-MMA method is important for evidencing the stable and fine structure of multiscale and multifractal scaling behaviors and can be useful to deepen and broaden our understanding of scaling exponents and multifractal characteristics.
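
    For orientation, the sketch below computes an ordinary DFA fluctuation function and a scaling exponent from one global fit; MMA instead scans this fit across scale windows, and the paper's variant additionally replaces the series with its CDF transform. The signal and scale grid are illustrative.

        import numpy as np

        def dfa_fluct(x, scales, order=1):
            # Detrended fluctuation function F(s) with polynomial detrending of
            # the integrated profile in non-overlapping windows of size s.
            y = np.cumsum(x - np.mean(x))
            F = []
            for s in scales:
                n_seg = len(y) // s
                segs = y[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                res = [seg - np.polyval(np.polyfit(t, seg, order), t) for seg in segs]
                F.append(np.sqrt(np.mean(np.square(res))))
            return np.array(F)

        rng = np.random.default_rng(7)
        x = rng.standard_normal(10_000)              # white noise: expect exponent ~ 0.5
        scales = np.unique(np.logspace(1, 3, 12).astype(int))
        F = dfa_fluct(x, scales)
        h = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(f"global DFA scaling exponent: {h:.2f}")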

  15. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
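
    As an illustration of the approximately-linear FPN calibration idea, the sketch below fits a per-pixel offset and gain against a reference response (here, the spatial median at each exposure) and corrects new frames with arithmetic only. The estimator and the synthetic sensor model are assumptions for demonstration, not the paper's exact procedure.

```python
import numpy as np

def calibrate_fpn(frames):
    """Fit a per-pixel offset and gain against a reference response.

    frames: (n_exposures, H, W) stack under uniform illumination at several
    light levels; the spatial median at each exposure is the reference.
    """
    n, H, W = frames.shape
    ref = np.median(frames, axis=(1, 2))              # (n,)
    A = np.stack([np.ones_like(ref), ref], axis=1)    # (n, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, frames.reshape(n, -1), rcond=None)
    return coef[0].reshape(H, W), coef[1].reshape(H, W)

def correct(frame, offset, gain):
    """FPN correction needs only per-pixel arithmetic."""
    return (frame - offset) / gain

# synthetic logarithmic-style sensor with per-pixel mismatch
rng = np.random.default_rng(0)
H, W = 32, 32
off = rng.normal(0.0, 5.0, (H, W))                    # offset FPN
g = rng.normal(1.0, 0.05, (H, W))                     # gain FPN
levels = np.log(np.array([10.0, 50.0, 200.0, 1000.0, 5000.0]))
frames = off + g * levels[:, None, None]              # monotonic response
offset, gain = calibrate_fpn(frames)
test = off + g * np.log(700.0)                        # unseen light level
print(test.std().round(3), correct(test, offset, gain).std().round(4))
```

    The pixel-to-pixel spread of the corrected frame collapses by orders of magnitude relative to the raw frame, which is the point of the calibration.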

  16. Quantum gravity fluctuations flatten the Planck-scale Higgs potential

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Hamada, Yuta; Lumma, Johannes; Yamada, Masatoshi

    2018-04-01

    We investigate asymptotic safety of a toy model of a singlet-scalar extension of the Higgs sector including two real scalar fields under the impact of quantum-gravity fluctuations. Employing functional renormalization group techniques, we search for fixed points of the system which provide a tentative ultraviolet completion of the system. We find that in a particular regime of the gravitational parameter space the canonically marginal and relevant couplings in the scalar sector—including the mass parameters—become irrelevant at the ultraviolet fixed point. The infrared potential for the two scalars that can be reached from that fixed point is fully predicted and features no free parameters. In the remainder of the gravitational parameter space, the values of the quartic couplings in our model are predicted. In light of these results, we discuss whether the singlet-scalar could be a dark-matter candidate. Furthermore, we highlight how "classical scale invariance" in the sense of a flat potential of the scalar sector at the Planck scale could arise as a consequence of asymptotic safety.

  17. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters yield a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.
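
    A minimal sketch of stochastic channel selection, with channel parameters drawn from continuous distributions rather than a fixed lattice, is given below. The distributions, ranges, and the Gabor-like response are illustrative assumptions, not the model of the paper.

```python
import numpy as np

def sample_channels(n, rng, image_size=64):
    """Draw channel parameters from a continuum rather than a fixed lattice.

    Bandwidths get a broad spread, mirroring the variability seen in
    physiological estimates; all ranges here are illustrative only.
    """
    return {
        "x": rng.uniform(0, image_size, n),                 # spatial position
        "y": rng.uniform(0, image_size, n),
        "freq": np.exp(rng.uniform(np.log(0.02), np.log(0.25), n)),  # cyc/px
        "theta": rng.uniform(0, np.pi, n),                  # orientation
        "bw": rng.lognormal(np.log(1.2), 0.3, n),           # bandwidth factor
    }

def channel_response(img, x0, y0, f, theta, bw):
    """Linear response of one Gabor-like channel (cosine phase)."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    u = (xx - x0) * np.cos(theta) + (yy - y0) * np.sin(theta)
    v = -(xx - x0) * np.sin(theta) + (yy - y0) * np.cos(theta)
    sigma = bw / f                                          # spread ~ 1/frequency
    g = np.exp(-(u**2 + v**2) / (2 * sigma**2))
    return float(np.sum(img * g * np.cos(2 * np.pi * f * u)))

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
ch = sample_channels(200, rng)
r = [channel_response(img, ch["x"][i], ch["y"][i], ch["freq"][i],
                      ch["theta"][i], ch["bw"][i]) for i in range(200)]
print(len(r))
```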

  18. Implant Supported Fixed Restorations versus Implant Supported Removable Overdentures: A Systematic Review

    PubMed Central

    Selim, Khaled; Ali, Sherif; Reda, Ahmed

    2016-01-01

    AIM: The aim of this study is to systematically evaluate and compare implant-retained fixed restorations versus implant-retained removable overdentures. MATERIAL AND METHODS: A search was made in 2 databases, PubMed and PubMed Central. Titles and abstracts were screened to select studies comparing implant-retained fixed restorations versus implant-retained removable overdentures. Articles which did not meet the inclusion criteria were excluded. Included papers were then read carefully as a second-stage filter; this was followed by manual searching of the bibliographies of the selected articles. RESULTS: The search resulted in 5 included papers. One study evaluated masticatory function, while the other 4 evaluated patient satisfaction. Two of them used the Visual Analogue Scale (VAS) as a measurement tool, while the other two used VAS and Categorical Scales (CAT). Stability, ability to chew, ability to clean, ability to speak and esthetics were the main outcomes of the 4 included papers. CONCLUSION: Conflicting results were observed between the fixed and removable restorations. PMID:28028423

  19. Adaptive Framework for Classification and Novel Class Detection over Evolving Data Streams with Limited Labeled Data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haque, Ahsanul; Khan, Latifur; Baron, Michael

    2015-09-01

    Most approaches to classifying evolving data streams either divide the stream of data into fixed-size chunks or use gradual forgetting to address the problems of infinite length and concept drift. Finding the fixed size of the chunks or choosing a forgetting rate without prior knowledge about the time-scale of change is not a trivial task. As a result, these approaches suffer from a trade-off between performance and sensitivity. To address this problem, we present a framework which uses change detection techniques on the classifier performance to determine chunk boundaries dynamically. Though this framework exhibits good performance, it is heavily dependent on the availability of true labels of data instances. However, labeled data instances are scarce in realistic settings and not readily available. Therefore, we present a second framework which is unsupervised in nature, and exploits change detection on classifier confidence values to determine chunk boundaries dynamically. In this way, it avoids the use of labeled data while still addressing the problems of infinite length and concept drift. Moreover, both of our proposed frameworks address the concept evolution problem by detecting outliers having similar values for the attributes. We provide theoretical proof that our change detection method works better than other state-of-the-art approaches in this particular scenario. Results from experiments on various benchmark and synthetic data sets also show the efficiency of our proposed frameworks.
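
    As a rough illustration of the unsupervised framework's core idea, the sketch below runs a two-sided CUSUM-style detector on the stream of classifier confidence values and declares a chunk boundary when the mean confidence drifts. The detector, thresholds, and synthetic stream are assumptions, much simpler than the change detection method the authors analyze.

```python
import numpy as np

def confidence_change_points(conf, warmup=100, k=0.005, h=1.0):
    """Flag chunk boundaries when mean classifier confidence drifts.

    Two-sided CUSUM on deviations from a baseline mean estimated over the
    first `warmup` observations; k is the slack, h the decision limit.
    """
    baseline = np.mean(conf[:warmup])
    pos = neg = 0.0
    boundaries = []
    for t, c in enumerate(conf[warmup:], start=warmup):
        pos = max(0.0, pos + (c - baseline) - k)
        neg = max(0.0, neg - (c - baseline) - k)
        if pos > h or neg > h:
            boundaries.append(t)           # declare a chunk boundary here
            baseline = c                   # restart with a fresh baseline
            pos = neg = 0.0
    return boundaries

# synthetic stream: confidence drops after concept drift at t = 600
rng = np.random.default_rng(0)
conf = np.concatenate([rng.normal(0.9, 0.03, 600), rng.normal(0.7, 0.05, 400)])
print(confidence_change_points(conf))      # boundary shortly after t = 600
```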

  20. Teaching an Old Dog an Old Trick: FREE-FIX and Free-Boundary Axisymmetric MHD Equilibrium

    NASA Astrophysics Data System (ADS)

    Guazzotto, Luca

    2015-11-01

    A common task in plasma physics research is the calculation of an axisymmetric equilibrium for tokamak modeling. The main unknown of the problem is the magnetic poloidal flux ψ. The easiest approach is to assign the shape of the plasma and only solve the equilibrium problem in the plasma / closed-field-lines region (the ``fixed-boundary approach''). Often, one may also need the vacuum fields, i.e. the equilibrium in the open-field-lines region, requiring either coil currents or ψ on some closed curve outside the plasma to be assigned (the ``free-boundary approach''). Going from one approach to the other is a textbook problem, involving the calculation of Green's functions and surface integrals in the plasma. However, no tools are readily available to perform this task. Here we present a code (FREE-FIX) to compute a boundary condition for a free-boundary equilibrium given only the corresponding fixed-boundary equilibrium. An improvement to the standard solution method, allowing for much faster calculations, is presented. Applications are discussed. PPPL fund 245139 and DOE grant G00009102.

  1. Performance analysis of PPP ambiguity resolution with UPD products estimated from different scales of reference station networks

    NASA Astrophysics Data System (ADS)

    Wang, Siyao; Li, Bofeng; Li, Xingxing; Zang, Nan

    2018-01-01

    Integer ambiguity fixing with uncalibrated phase delay (UPD) products can significantly shorten the initialization time and improve the accuracy of precise point positioning (PPP). Since the tracking arcs of satellites and the behavior of atmospheric biases can be very different for reference networks of different scales, the quality of the corresponding UPD products may also vary. The purpose of this paper is to comparatively investigate the influence of different scales of reference station networks on UPD estimation and user ambiguity resolution. Three reference station networks of global, wide-area and local scale are used to compute UPD products and analyze their impact on PPP-AR. The time-to-first-fix, the unfix rate and the incorrect-fix rate of PPP-AR are analyzed. Moreover, in order to further shorten the convergence time for obtaining precise positioning, a modified partial ambiguity resolution (PAR) method and a corresponding validation strategy are presented. In this PAR method, the ambiguity subset is determined by removing ambiguities one by one in order of ascending elevation. In addition, for the static positioning mode, a coordinate validation strategy is employed to enhance the reliability of the fixed coordinates. The experimental results show that UPD products computed from a smaller station network are more accurate and lead to a better coordinate solution; the PAR method used in this paper shortens the convergence time, and the coordinate validation strategy improves the availability of high-precision positioning.
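
    The elevation-ordered PAR strategy can be sketched as follows: attempt to fix the full ambiguity set, and if validation fails, drop the lowest-elevation ambiguity and retry. In this toy version the integer "fixing" is simple rounding and validation is a residual check; a real implementation would use an integer least-squares solver (e.g., LAMBDA) and a ratio test.

```python
import numpy as np

def partial_ambiguity_resolution(float_amb, elevations, validate, min_subset=4):
    """Elevation-ordered PAR: shrink the candidate set from the lowest
    elevation upward until the fixed subset passes validation.

    float_amb  : float ambiguity estimates (cycles)
    elevations : satellite elevations (degrees), same order
    validate   : callable(subset_indices, fixed_values) -> bool
    """
    order = np.argsort(elevations)[::-1]          # highest elevation first
    for n in range(len(order), min_subset - 1, -1):
        subset = order[:n]                        # lowest elevations dropped
        fixed = np.rint(float_amb[subset])        # toy stand-in for ILS fixing
        if validate(subset, fixed):
            return subset, fixed
    return None, None                             # fall back to float solution

rng = np.random.default_rng(2)
elev = rng.uniform(10, 80, 8)
true_amb = rng.integers(-10, 10, 8).astype(float)
float_amb = true_amb + rng.normal(0, 0.05 / np.sin(np.radians(elev)))

# toy validation: every fixed ambiguity must sit near its float estimate
ok = lambda subset, fixed: np.all(np.abs(float_amb[subset] - fixed) < 0.15)
print(partial_ambiguity_resolution(float_amb, elev, ok))
```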

  2. Safe system approach to reducing serious injury risk in motorcyclist collisions with fixed hazards.

    PubMed

    Bambach, M R; Mitchell, R J

    2015-01-01

    Collisions with fixed objects in the roadway environment account for a substantial proportion of motorcyclist fatalities. Many studies have identified individual roadway environment and/or motorcyclist characteristics that are associated with the severity of the injury outcome, including the presence of roadside barriers, helmet use, alcohol use and speeding. However, no studies have reported the cumulative benefit of such characteristics on motorcycling safety. The safe system approach recognises that the system must work as a whole to reduce the net injury risk to road users to an acceptable level, including the four system cornerstone areas of roadways, speeds, vehicles and people. The aim of the present paper is to consider these cornerstone areas concomitantly, and quantitatively assess the serious injury risk of motorcyclists in fixed object collisions using this holistic approach. A total of 1006 Australian and 15,727 (weighted) United States motorcyclist-fixed object collisions were collected retrospectively, and the serious injury risks associated with roadside barriers, helmet use, alcohol use and speeding were assessed both individually and concomitantly. The results indicate that if safety efforts are made in each of the safe system cornerstone areas, the combined effect is to substantially reduce the serious injury risk of fixed hazards to motorcyclists. The holistic approach is shown to reduce the serious injury risk considerably more than each of the safety efforts considered individually. These results promote the use of a safe system approach to motorcycling safety.

  3. Fixed Base Modal Survey of the MPCV Orion European Service Module Structural Test Article

    NASA Technical Reports Server (NTRS)

    Winkel, James P.; Akers, J. C.; Suarez, Vicente J.; Staab, Lucas D.; Napolitano, Kevin L.

    2017-01-01

    Recently, the MPCV Orion European Service Module Structural Test Article (E-STA) underwent sine vibration testing using the multi-axis shaker system at NASA GRC Plum Brook Station Mechanical Vibration Facility (MVF). An innovative approach using measured constraint shapes at the interface of E-STA to the MVF allowed high-quality fixed base modal parameters of the E-STA to be extracted, which have been used to update the E-STA finite element model (FEM), without the need for a traditional fixed base modal survey. This innovative approach provided considerable program cost and test schedule savings. This paper documents this modal survey, which includes the modal pretest analysis sensor selection, the fixed base methodology using measured constraint shapes as virtual references and measured frequency response functions, and post-survey comparison between measured and analysis fixed base modal parameters.

  4. Low speed tests of a fixed geometry inlet for a tilt nacelle V/STOL airplane

    NASA Technical Reports Server (NTRS)

    Syberg, J.; Koncsek, J. L.

    1977-01-01

    Test data were obtained with a 1/4 scale cold flow model of the inlet at freestream velocities from 0 to 77 m/s (150 knots) and angles of attack from 45 deg to 120 deg. A large scale model was tested with a high bypass ratio turbofan in the NASA/ARC wind tunnel. A fixed geometry inlet is a viable concept for a tilt nacelle V/STOL application. Comparison of data obtained with the two models indicates that flow separation at high angles of attack and low airflow rates is strongly sensitive to Reynolds number and that the large scale model has a significantly improved range of separation-free operation.

  5. Dissipative preparation of entanglement in optical cavities.

    PubMed

    Kastoryano, M J; Reiter, F; Sørensen, A S

    2011-03-04

    We propose a novel scheme for the preparation of a maximally entangled state of two atoms in an optical cavity. Starting from an arbitrary initial state, a singlet state is prepared as the unique fixed point of a dissipative quantum dynamical process. In our scheme, cavity decay is no longer undesirable, but plays an integral part in the dynamics. As a result, we get a qualitative improvement in the scaling of the fidelity with the cavity parameters. Our analysis indicates that dissipative state preparation is more than just a new conceptual approach, but can allow for significant improvement as compared to preparation protocols based on coherent unitary dynamics.

  6. Multioverlap Simulations of the 3D Edwards-Anderson Ising Spin Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berg, B.A.; Berg, B.A.; Janke, W.

    1998-05-01

    We introduce a novel method for numerical spin glass investigations: simulations of two replicas at fixed temperature, weighted to achieve a broad distribution of the Parisi overlap parameter q (multioverlap). We demonstrate the feasibility of the approach by studying the 3D Edwards-Anderson Ising (J_ik = ±1) spin glass in the broken phase (β = 1). This makes it possible to obtain reliable results about spin glass tunneling barriers. In addition, our results indicate a nontrivial scaling behavior of the canonical q distributions not only at the freezing point but also deep in the broken phase.

  7. Nuclear modification factor in an anisotropic quark-gluon plasma

    NASA Astrophysics Data System (ADS)

    Mandal, Mahatsab; Bhattacharya, Lusaka; Roy, Pradip

    2011-10-01

    We calculate the nuclear modification factor (R_AA) of light hadrons by taking into account the initial-state momentum anisotropy of the quark-gluon plasma (QGP) expected to be formed in relativistic heavy ion collisions. Such an anisotropy can result from the initial rapid longitudinal expansion of the matter. A phenomenological model for the space-time evolution of the anisotropic QGP is used to obtain the time dependence of the anisotropy parameter ξ and the hard momentum scale p_hard. The result is then compared with the PHENIX experimental data to constrain the isotropization time scale τ_iso for fixed initial conditions (FIC). It is shown that the extracted value of τ_iso lies in the range 0.5 ⩽ τ_iso ⩽ 1.5. However, using a fixed final multiplicity (FFM) condition does not lead to any firm conclusion about the extraction of the isotropization time. The present calculation is also extended to contrast with the recent measurement of the nuclear modification factor by the ALICE collaboration at √s = 2.76 TeV. It is argued that in the present approach the extraction of τ_iso at this energy is uncertain and, therefore, refinement of the model is necessary. The sensitivity of the results to the initial conditions is discussed. We also present the nuclear modification factor at Large Hadron Collider (LHC) energies with √s = 5.5 TeV.

  8. Error Cost Escalation Through the Project Life Cycle

    NASA Technical Reports Server (NTRS)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as a project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3 - 8 units; at the manufacturing/build phase, the cost to fix the error is 7 - 16 units; at the integration and test phase, the cost to fix the error becomes 21 - 78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units.
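
    A short illustration of the escalation factors quoted above, applied to a hypothetical base cost:

```python
# Relative cost-to-fix factors (low, high) quoted in the study, normalized
# to a requirements-phase fix cost of 1 unit; the base cost is hypothetical.
escalation = {
    "requirements":         (1, 1),
    "design":               (3, 8),
    "manufacturing/build":  (7, 16),
    "integration and test": (21, 78),
    "operations":           (29, 1500),
}

base_cost = 10_000  # e.g., USD to fix the error during the requirements phase
for phase, (lo, hi) in escalation.items():
    print(f"{phase:>21}: {lo * base_cost:>12,} - {hi * base_cost:,}")
```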

  9. Robotic platform for parallelized cultivation and monitoring of microbial growth parameters in microwell plates.

    PubMed

    Knepper, Andreas; Heiser, Michael; Glauche, Florian; Neubauer, Peter

    2014-12-01

    The enormous range of variation possibilities in bioprocesses challenges process development to fix a commercial process within acceptable costs and time. Although some cultivation systems and some devices for unit operations combine the latest technology in miniaturization, parallelization, and sensing, the degree of automation in upstream and downstream bioprocess development is still limited to single steps. We aim to face this challenge with an interdisciplinary approach to significantly shorten development times and costs. As a first step, we scaled down analytical assays to the microliter scale and created automated procedures for starting the cultivation and monitoring the optical density (OD), pH, concentrations of glucose and acetate in the culture medium, and product formation in fed-batch cultures in the 96-well format. Then, the separate measurements of pH, OD, and concentrations of acetate and glucose were combined into one method. This method enables automated process monitoring at dedicated intervals (e.g., also during the night). By this approach, we managed to increase the information content of cultivations in 96-microwell plates, thus turning them into a suitable tool for high-throughput bioprocess development. Here, we present the flowcharts as well as cultivation data of our automation approach.

  10. The Neandertal genome and ancient DNA authenticity

    PubMed Central

    Green, Richard E; Briggs, Adrian W; Krause, Johannes; Prüfer, Kay; Burbano, Hernán A; Siebauer, Michael; Lachmann, Michael; Pääbo, Svante

    2009-01-01

    Recent advances in high-throughput DNA sequencing have made genome-scale analyses of genomes of extinct organisms possible. With these new opportunities come new difficulties in assessing the authenticity of the DNA sequences retrieved. We discuss how these difficulties can be addressed, particularly with regard to analyses of the Neandertal genome. We argue that only direct assays of DNA sequence positions in which Neandertals differ from all contemporary humans can serve as a reliable means to estimate human contamination. Indirect measures, such as the extent of DNA fragmentation, nucleotide misincorporations, or comparison of derived allele frequencies in different fragment size classes, are unreliable. Fortunately, interim approaches based on mtDNA differences between Neandertals and current humans, detection of male contamination through Y chromosomal sequences, and repeated sequencing from the same fossil to detect autosomal contamination allow initial large-scale sequencing of Neandertal genomes. This will result in the discovery of fixed differences in the nuclear genome between Neandertals and current humans that can serve as future direct assays for contamination. For analyses of other fossil hominins, which may become possible in the future, we suggest a similar 'bootstrap' approach in which interim approaches are applied until sufficient data for more definitive direct assays are acquired. PMID:19661919

  11. No difference between fixed- and mobile-bearing total knee arthroplasty in activities of daily living and pain: a randomized clinical trial.

    PubMed

    Amaro, Joicemar Tarouco; Arliani, Gustavo Gonçalves; Astur, Diego Costa; Debieux, Pedro; Kaleka, Camila Cohen; Cohen, Moises

    2017-06-01

    To date, there have been no definitive conclusions regarding functional differences in middle- and long-term everyday activities and patient pain following implantation of mobile- and fixed-platform tibial prostheses. The aim of this study was to determine whether there are middle-term differences in knee function and pain in patients undergoing fixed- and mobile-bearing total knee arthroplasty (TKA). Eligible patients were randomized into two groups: the first group received TKA implantation with a fixed tibial platform (group A); the second group received TKA with a mobile tibial platform (group B). Patients were followed up for 2 years, and their symptoms and limitations in daily living activities were evaluated using the Knee Outcome Survey-Activities of Daily Living Scale (ADLS), in addition to pain evaluation using the pain visual analogue scale (VAS). There were no significant differences in function and symptoms on the ADLS and VAS between the study groups. The type of platform used in TKA (fixed vs. mobile) does not change the symptoms, function or pain of patients 2 years post-surgery. Although mobile TKAs may have better short-term results, at medium- and long-term follow-up they do not present important clinical differences compared with fixed-platform TKAs. This information is important so that surgeons can choose the most suitable implant for each patient. Randomized clinical trial, Level I.

  12. [How to Increase the Effectiveness of Antihypertensive Therapy in Clinical Practice: Results of the Russian Observational Program FORSAZH].

    PubMed

    Glezer, M G; Deev On Behalf Of The Participants Of The Program, A D

    2016-01-01

    Aim of the study: to evaluate the possibility of increasing the effectiveness of antihypertensive therapy by simplifying regimens, improving doctors' knowledge and practical skills in the use of modern tactical approaches to treatment, and educating patients in methods of measuring blood pressure (BP) and the principles of a healthy lifestyle, as well as explaining the need to follow the physician's prescriptions. The post-marketing observational program FORSAZH was held in 29 cities of the Russian Federation. A total of 442 physicians (internists and general practitioners) participated in the program, enrolling 1969 patients with prior failure of combination antihypertensive therapy. Patients in 86% of cases took free combinations and in 14% fixed combinations of drugs. Changing treatment to a preparation containing a fixed combination of perindopril/indapamide (10 mg/2.5 mg) led, after 3 months, to a decrease in systolic blood pressure by an average of 39.5 mm Hg and in diastolic blood pressure by 18.7 mm Hg. The frequency of achieving the target BP (<140/90 mm Hg) was 76%. The marked reduction in BP and the frequency of achieving the target BP did not depend on additional training of physicians and patients, or on whether prior therapy used free or fixed combinations, but did depend on the initial degree of BP elevation and the duration of therapy. Predictors of failure to achieve target BP were age, male gender, low initial adherence, good self-rated health, higher baseline BP, elevated cholesterol levels, body weight, heart rate and decreased glomerular filtration rate. Patients' adherence to therapy (Morisky-Green scale) and health assessment on a visual analogue scale increased significantly. This change-of-therapy tactic was not only effective but also safe: adverse events were reported in 28 patients (1.4% of the observed cases) and only 1 case required dose reduction due to clinically manifest hypotension. The decisive factor in enhancing the effectiveness of treatment of patients with hypertension was the simplification of drug therapy through the use of a fixed combination of perindopril A/indapamide.

  13. Extensively Parameterized Mutation-Selection Models Reliably Capture Site-Specific Selective Constraint.

    PubMed

    Spielman, Stephanie J; Wilke, Claus O

    2016-11-01

    The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development.

  14. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287

  15. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    DOE PAGES

    Dytrych, T.; Maris, P.; Launey, K. D.; ...

    2016-06-22

    We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU3-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  16. Representation matters: quantitative behavioral variation in wild worm strains

    NASA Astrophysics Data System (ADS)

    Brown, Andre

    Natural genetic variation in populations is the basis of genome-wide association studies, an approach that has been applied in large studies of humans to study the genetic architecture of complex traits including disease risk. Of course, the traits you choose to measure determine which associated genes you discover (or miss). In large-scale human studies, the measured traits are usually taken as a given during the association step because they are expensive to collect and standardize. Working with the nematode worm C. elegans, we do not have the same constraints. In this talk I will describe how large-scale imaging of worm behavior allows us to develop alternative representations of behavior that vary differently across wild populations. The alternative representations yield novel traits that can be used for genome-wide association studies and may reveal basic properties of the genotype-phenotype map that are obscured if only a small set of fixed traits are used.

  17. Preparation of biomimetic nano-structured films with multi-scale roughness

    NASA Astrophysics Data System (ADS)

    Shelemin, A.; Nikitin, D.; Choukourov, A.; Kylián, O.; Kousal, J.; Khalakhan, I.; Melnichuk, I.; Slavínská, D.; Biederman, H.

    2016-06-01

    Biomimetic nano-structured films are valuable materials in various applications. In this study we introduce a fully vacuum-based approach for the fabrication of such films. The method combines deposition of nanoparticles (NPs) by a gas aggregation source with deposition of an overcoat thin film that fixes the nanoparticles on the surface. This leads to the formation of nanorough surfaces which, depending on the chemical nature of the overcoat, may range from superhydrophilic to superhydrophobic. In addition, it is shown that by proper adjustment of the amount of NPs it is possible to tailor the adhesive force on superhydrophobic surfaces. Finally, the possibility of producing NPs over a wide size range (45-240 nm in this study) makes it possible to produce surfaces not only with single-scale roughness, but also with bi-modal or even multi-modal character. Such surfaces were found to be superhydrophobic with negligible water contact angle hysteresis and hence truly slippery.

  18. Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dytrych, T.; Maris, Pieter; Launey, K. D.

    2016-06-09

    We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for 6Li and 12C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.

  19. A managerial approach to allocating indirect fixed costs in health care organizations.

    PubMed

    Goldschmidt, Y; Gafni, A

    1990-01-01

    To allocate indirect fixed costs to the different units in an organization, fixed costs of a supporting service should be charged to the factor that creates the demand for the service (using the dual-rate-charging method) and overhead costs should be charged to the binding constraint of the organization.
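
    A numeric sketch of the dual-rate idea may help: fixed costs of the supporting service follow the capacity each unit demanded (which creates the need for the service), while variable costs follow actual usage. All figures below are hypothetical.

```python
def dual_rate_allocation(fixed_cost, variable_cost, budgeted_capacity, actual_usage):
    """Dual-rate charging: fixed cost follows demanded capacity,
    variable cost follows actual usage."""
    total_capacity = sum(budgeted_capacity.values())
    total_usage = sum(actual_usage.values())
    return {
        unit: fixed_cost * budgeted_capacity[unit] / total_capacity
              + variable_cost * actual_usage[unit] / total_usage
        for unit in budgeted_capacity
    }

# hypothetical hospital units sharing a lab service
capacity = {"surgery": 500, "cardiology": 300, "oncology": 200}  # reserved tests/month
usage    = {"surgery": 420, "cardiology": 180, "oncology": 200}  # actual tests/month
print(dual_rate_allocation(fixed_cost=80_000, variable_cost=24_000,
                           budgeted_capacity=capacity, actual_usage=usage))
```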

  20. General Series Solutions for Stresses and Displacements in an Inner-fixed Ring

    NASA Astrophysics Data System (ADS)

    Jiao, Yongshu; Liu, Shuo; Qi, Dexuan

    2018-03-01

    A general series solution approach is provided to obtain the stress and displacement fields in an inner-fixed ring. After choosing an Airy stress function in series form, the stresses are expressed through an infinite set of coefficients. Displacements are obtained by integrating the geometric equations. For an inner-fixed ring, the arbitrary loads acting on the outer edge are expanded into two sets of Fourier series. The zero-displacement boundary conditions on the inner surface are utilized, and the stress (and displacement) coefficients are then expressed in terms of the loading coefficients. A numerical example shows the validity of this approach.

  1. The square lattice Ising model on the rectangle II: finite-size scaling limit

    NASA Astrophysics Data System (ADS)

    Hucht, Alfred

    2017-06-01

    Based on the results published recently (Hucht 2017 J. Phys. A: Math. Theor. 50 065201), the universal finite-size contributions to the free energy of the square lattice Ising model on the L × M rectangle, with open boundary conditions in both directions, are calculated exactly in the finite-size scaling limit L, M → ∞, T → T_c, with fixed temperature scaling variable x ∝ (T/T_c − 1)M and fixed aspect ratio ρ ∝ L/M. We derive exponentially fast converging series for the related Casimir potential and Casimir force scaling functions. At the critical point T = T_c we confirm predictions from conformal field theory (Cardy and Peschel 1988 Nucl. Phys. B 300 377, Kleban and Vassileva 1991 J. Phys. A: Math. Gen. 24 3407). The presence of corners and the related corner free energy has a dramatic impact on the Casimir scaling functions and leads to a logarithmic divergence of the Casimir potential scaling function at criticality.

  2. GIS interpolations of witness tree records (1839-1866) for northern Wisconsin at multiple scales

    USGS Publications Warehouse

    He, H.S.; Mladenoff, D.J.; Sickley, T.A.; Guntenspergen, G.R.

    2000-01-01

    To reconstruct forest landscapes of the pre-European settlement period, we developed a GIS interpolation approach to convert witness tree records of the U.S. General Land Office (GLO) survey from point to polygon data, which better describe continuously distributed vegetation. The witness tree records (1839-1866) were processed for a 3-million ha landscape in northern Wisconsin, U.S.A. at different scales, and we discuss the implications of the processing results at each scale. Compared with traditional GLO mapping, which has fixed mapping scales and generalized classifications, our approach allows presettlement forest landscapes to be analysed at the individual species level and reconstructed under various classifications. We calculated vegetation indices including relative density, dominance, and importance value for each species, and quantitatively described the possible outcomes when GLO records are analysed at three different scales (resolutions). The 1 x 1-section resolution preserved spatial information but derived the most conservative estimates of species distributions measured in percentage area, which increased at coarser resolutions. Such increases under the 2 x 2-section resolution were on the order of three to four times for the least common species, two to three times for the medium to most common species, and one to two times for the most common or highly contagious species. We mapped the distributions of hemlock and sugar maple from the pre-European settlement period based on their witness tree locations and reconstructed presettlement forest landscapes based on species importance values derived for all species. The results provide a unique basis for further study of land cover changes occurring after European settlement.
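
    The vegetation indices named above follow standard definitions. A small sketch is given below, with hypothetical witness-tree diameters, taking importance value as the mean of relative density and relative dominance (one common convention).

```python
import numpy as np

def vegetation_indices(species, diameters):
    """Relative density, relative dominance (basal area share), and
    importance value per species for one grid cell of witness-tree records."""
    species = np.asarray(species)
    basal_area = np.pi * (np.asarray(diameters) / 2.0) ** 2
    out = {}
    for sp in np.unique(species):
        mask = species == sp
        rel_density = mask.sum() / species.size
        rel_dominance = basal_area[mask].sum() / basal_area.sum()
        out[sp] = {
            "rel_density_%": 100 * rel_density,
            "rel_dominance_%": 100 * rel_dominance,
            "importance": 50 * (rel_density + rel_dominance),
        }
    return out

trees = ["hemlock", "sugar maple", "hemlock", "yellow birch", "hemlock"]
dbh_cm = [38, 25, 45, 30, 52]   # hypothetical witness-tree diameters
print(vegetation_indices(trees, dbh_cm))
```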

  3. Aeroacoustic Study of a 26%-Scale Semispan Model of a Boeing 777 Wing in the NASA Ames 40- by 80-Foot Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Horne, W. Clifton; Burnside, Nathan J.; Soderman, Paul T.; Jaeger, Stephen M.; Reinero, Bryan R.; James, Kevin D.; Arledge, Thomas K.

    2004-01-01

    An acoustic and aerodynamic study was made of a 26%-scale unpowered Boeing 777 aircraft semispan model in the NASA Ames 40- by 80-Foot Wind Tunnel for the purpose of identifying and attenuating airframe noise sources. Simulated approach and landing configurations were evaluated at Mach numbers between 0.12 and 0.24. Cruise configurations were evaluated at Mach numbers between 0.24 and 0.33. The research team used two Ames phased-microphone arrays, a large fixed array and a small traversing array, mounted under the wing to locate and compare various noise sources in the wing high-lift system and landing gear. Numerous model modifications and noise alleviation devices were evaluated. Simultaneous with acoustic measurements, aerodynamic forces were recorded to document aircraft conditions and any performance changes caused by the geometric modifications. Numerous airframe noise sources were identified that might be important factors in the approach and landing noise of the full-scale aircraft. Several noise-control devices were applied to each noise source. The devices were chosen to manipulate and control, if possible, the flow around the various tips and through the various gaps of the high-lift system so as to minimize the noise generation. Fences, fairings, tip extensions, cove fillers, vortex generators, hole coverings, and boundary-layer trips were tested. In many cases, the noise-control devices eliminated noise from some sources at specific frequencies. When scaled to full-scale third-octave bands, typical noise reductions ranged from 1 to 10 dB without significant aerodynamic performance loss.

  4. Mass scale of vectorlike matter and superpartners from IR fixed point predictions of gauge and top Yukawa couplings

    NASA Astrophysics Data System (ADS)

    Dermíšek, Radovan; McGinnis, Navin

    2018-03-01

    We use the IR fixed point predictions for gauge couplings and the top Yukawa coupling in the minimal supersymmetric model (MSSM) extended with vectorlike families to infer the scale of vectorlike matter and superpartners. We quote results for several extensions of the MSSM and present results in detail for the MSSM extended with one complete vectorlike family. We find that for a unified gauge coupling αG>0.3 vectorlike matter or superpartners are expected within 1.7 TeV (2.5 TeV) based on all three gauge couplings being simultaneously within 1.5% (5%) from observed values. This range extends to about 4 TeV for αG>0.2 . We also find that in the scenario with two additional large Yukawa couplings of vectorlike quarks the IR fixed point value of the top Yukawa coupling independently points to a multi-TeV range for vectorlike matter and superpartners. Assuming a universal value for all large Yukawa couplings at the grand unified theory scale, the measured top quark mass can be obtained from the IR fixed point for tan β ≃4 . The range expands to any tan β >3 for significant departures from the universality assumption. Considering that the Higgs boson mass also points to a multi-TeV range for superpartners in the MSSM, adding a complete vectorlike family at the same scale provides a compelling scenario where the values of gauge couplings and the top quark mass are understood as a consequence of the particle content of the model.

  5. A simplified, data-constrained approach to estimate the permafrost carbon-climate feedback.

    PubMed

    Koven, C D; Schuur, E A G; Schädel, C; Bohn, T J; Burke, E J; Chen, G; Chen, X; Ciais, P; Grosse, G; Harden, J W; Hayes, D J; Hugelius, G; Jafarov, E E; Krinner, G; Kuhry, P; Lawrence, D M; MacDougall, A H; Marchenko, S S; McGuire, A D; Natali, S M; Nicolsky, D J; Olefeldt, D; Peng, S; Romanovsky, V E; Schaefer, K M; Strauss, J; Treat, C C; Turetsky, M

    2015-11-13

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation-Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2-33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9-112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of -14 to -19 Pg C °C(-1) on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10-18%. The simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming.
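
    The core of the PInc-PanTher calculation, first-order decay of several pools with no decomposition while frozen, can be sketched as below. The Q10 temperature response and all parameter values are illustrative assumptions, not the fitted pool parameters of the paper.

```python
import numpy as np

def three_pool_c_loss(c0, base_k, soil_temp, q10=2.5, t_ref=5.0, dt=1.0):
    """Cumulative C loss from three pools decaying with first-order kinetics.

    c0        : initial stocks (fast, slow, passive), kg C
    base_k    : decay rates at t_ref (1/yr), one per pool
    soil_temp : soil temperature per time step (deg C); no decay when frozen
    q10, t_ref: assumed temperature response (illustrative, not fitted)
    """
    c = np.asarray(c0, dtype=float).copy()
    k0 = np.asarray(base_k, dtype=float)
    lost = 0.0
    for T in soil_temp:
        if T <= 0.0:
            continue                        # frozen: stocks do not decompose
        k = k0 * q10 ** ((T - t_ref) / 10.0)
        dc = c * (1.0 - np.exp(-k * dt))    # exact one-step first-order decay
        c -= dc
        lost += dc.sum()
    return lost, c

temps = np.linspace(1.0, 4.0, 90)           # 90 years of thawed-season warming
loss, remaining = three_pool_c_loss([2.0, 10.0, 50.0],
                                    [0.5, 0.05, 0.001], temps)
print(round(loss, 2), remaining.round(3))
```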

  6. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    PubMed Central

    Koven, C. D.; Schuur, E. A. G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J. W.; Hayes, D. J.; Hugelius, G.; Jafarov, E. E.; Krinner, G.; Kuhry, P.; Lawrence, D. M.; MacDougall, A. H.; Marchenko, S. S.; McGuire, A. D.; Natali, S. M.; Nicolsky, D. J.; Olefeldt, D.; Peng, S.; Romanovsky, V. E.; Schaefer, K. M.; Strauss, J.; Treat, C. C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming. PMID:26438276

  7. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    DOE PAGES

    Koven, C. D.; Schuur, E. A. G.; Schadel, C.; ...

    2015-10-05

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of –14 to –19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. In conclusion, the simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming.

  8. A simplified, data-constrained approach to estimate the permafrost carbon–climate feedback

    USGS Publications Warehouse

    Koven, C.D.; Schuur, E.A.G.; Schädel, C.; Bohn, T. J.; Burke, E. J.; Chen, G.; Chen, X.; Ciais, P.; Grosse, G.; Harden, J.W.; Hayes, D.J.; Hugelius, G.; Jafarov, Elchin E.; Krinner, G.; Kuhry, P.; Lawrence, D.M.; MacDougall, A. H.; Marchenko, Sergey S.; McGuire, A. David; Natali, Susan M.; Nicolsky, D.J.; Olefeldt, David; Peng, S.; Romanovsky, V.E.; Schaefer, Kevin M.; Strauss, J.; Treat, C.C.; Turetsky, M.

    2015-01-01

    We present an approach to estimate the feedback from large-scale thawing of permafrost soils using a simplified, data-constrained model that combines three elements: soil carbon (C) maps and profiles to identify the distribution and type of C in permafrost soils; incubation experiments to quantify the rates of C lost after thaw; and models of soil thermal dynamics in response to climate warming. We call the approach the Permafrost Carbon Network Incubation–Panarctic Thermal scaling approach (PInc-PanTher). The approach assumes that C stocks do not decompose at all when frozen, but once thawed follow set decomposition trajectories as a function of soil temperature. The trajectories are determined according to a three-pool decomposition model fitted to incubation data using parameters specific to soil horizon types. We calculate litterfall C inputs required to maintain steady-state C balance for the current climate, and hold those inputs constant. Soil temperatures are taken from the soil thermal modules of ecosystem model simulations forced by a common set of future climate change anomalies under two warming scenarios over the period 2010 to 2100. Under a medium warming scenario (RCP4.5), the approach projects permafrost soil C losses of 12.2–33.4 Pg C; under a high warming scenario (RCP8.5), the approach projects C losses of 27.9–112.6 Pg C. Projected C losses are roughly linearly proportional to global temperature changes across the two scenarios. These results indicate a global sensitivity of frozen soil C to climate change (γ sensitivity) of −14 to −19 Pg C °C−1 on a 100 year time scale. For CH4 emissions, our approach assumes a fixed saturated area and that increases in CH4 emissions are related to increased heterotrophic respiration in anoxic soil, yielding CH4 emission increases of 7% and 35% for the RCP4.5 and RCP8.5 scenarios, respectively, which add an additional greenhouse gas forcing of approximately 10–18%. The simplified approach presented here neglects many important processes that may amplify or mitigate C release from permafrost soils, but serves as a data-constrained estimate on the forced, large-scale permafrost C response to warming.

  9. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…

  10. Fragmentation functions beyond fixed order accuracy

    NASA Astrophysics Data System (ADS)

    Anderle, Daniele P.; Kaufmann, Tom; Stratmann, Marco; Ringer, Felix

    2017-03-01

    We give a detailed account of the phenomenology of all-order resummations of logarithmically enhanced contributions at small momentum fraction of the observed hadron in semi-inclusive electron-positron annihilation and the timelike scale evolution of parton-to-hadron fragmentation functions. The formalism to perform resummations in Mellin moment space is briefly reviewed, and all relevant expressions up to next-to-next-to-leading logarithmic order are derived, including their explicit dependence on the factorization and renormalization scales. We discuss the details pertinent to a proper numerical implementation of the resummed results comprising an iterative solution to the timelike evolution equations, the matching to known fixed-order expressions, and the choice of the contour in the Mellin inverse transformation. First extractions of parton-to-pion fragmentation functions from semi-inclusive annihilation data are performed at different logarithmic orders of the resummations in order to estimate their phenomenological relevance. To this end, we compare our results to corresponding fits up to fixed, next-to-next-to-leading order accuracy and study the residual dependence on the factorization scale in each case.

  11. NITROGEN EXPORT FROM FORESTED WATERSHEDS IN THE OREGON COAST RANGE: THE ROLE OF N2-FIXING RED ALDER

    EPA Science Inventory

    Variations in plant community composition across the landscape can influence nutrient retention and loss at the watershed scale. A striking example of plant species influence is the role of N2-fixing red alder (Alnus rubra) in the biogeochemistry of Pacific Northwest forests. T...

  12. Asymptotic safety of quantum gravity beyond Ricci scalars

    NASA Astrophysics Data System (ADS)

    Falls, Kevin; King, Callum R.; Litim, Daniel F.; Nikolakopoulos, Kostas; Rahmede, Christoph

    2018-04-01

    We investigate the asymptotic safety conjecture for quantum gravity including curvature invariants beyond Ricci scalars. Our strategy is put to work for families of gravitational actions which depend on functions of the Ricci scalar, the Ricci tensor, and products thereof. Combining functional renormalization with high order polynomial approximations and full numerical integration we derive the renormalization group flow for all couplings and analyse their fixed points, scaling exponents, and the fixed point effective action as a function of the background Ricci curvature. The theory is characterized by three relevant couplings. Higher-dimensional couplings show near-Gaussian scaling with increasing canonical mass dimension. We find that Ricci tensor invariants stabilize the UV fixed point and lead to a rapid convergence of polynomial approximations. We apply our results to models for cosmology and establish that the gravitational fixed point admits inflationary solutions. We also compare findings with those from f (R ) -type theories in the same approximation and pin-point the key new effects due to Ricci tensor interactions. Implications for the asymptotic safety conjecture of gravity are indicated.

  13. Evaluation of Scaling Methods for Rotorcraft Icing

    NASA Technical Reports Server (NTRS)

    Tsao, Jen-Ching; Kreeger, Richard E.

    2010-01-01

    This paper reports results of an experimental study in the NASA Glenn Icing Research Tunnel (IRT) to evaluate how well the current recommended scaling methods developed for fixed-wing unprotected-surface icing applications might apply to representative rotor blades at finite angle of attack. Unlike the fixed-wing case, there is no single scaling method that has been systematically developed and evaluated for rotorcraft icing applications. In the present study, scaling was based on the modified Ruff method with scale velocity determined by maintaining constant Weber number. Models were unswept NACA 0012 wing sections. The reference model had a chord of 91.4 cm and the scale model had a chord of 35.6 cm. Reference tests were conducted with velocities of 76 and 100 kt (39 and 52 m/s), droplet MVDs of 150 and 195 μm, and with stagnation-point freezing fractions of 0.3 and 0.5 at angles of attack of 0° and 5°. It was shown that good ice shape scaling was achieved for NACA 0012 airfoils with angle of attack up to 5°.

  14. Outbreak statistics and scaling laws for externally driven epidemics.

    PubMed

    Singh, Sarabjeet; Myers, Christopher R

    2014-04-01

    Power-law scalings are ubiquitous in physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example, where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ∼ n^{-3/2} at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^{1/3}) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1]∖{2/3}, and O((N/ln N)^{2/3}) at the multicritical point.
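    The critical-point statistics can be illustrated with a minimal simulation. The sketch below is not the paper's driven model; it simulates plain critical outbreaks as a Galton-Watson branching process with Poisson(R0 = 1) offspring, whose total-size distribution has the P(n) ∼ n^{-3/2} tail quoted above (so the survival function falls as n^{-1/2}):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def outbreak_size(R0=1.0, max_size=10**6):
        """Total size of a branching-process outbreak with Poisson(R0) offspring."""
        infected, size = 1, 1
        while infected > 0 and size < max_size:
            offspring = rng.poisson(R0, infected).sum()
            size += offspring
            infected = offspring
        return size

    sizes = np.array([outbreak_size() for _ in range(20000)])
    # At criticality P(n) ~ n^{-3/2}, hence P(size > n) ~ n^{-1/2}.
    for n in (1, 10, 100, 1000):
        print(f"P(size > {n:5d}) = {(sizes > n).mean():.4f}  (~ n^-1/2 expected)")
    ```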

  15. The Significance of Temperature Based Approach Over the Energy Based Approaches in the Buildings Thermal Assessment

    NASA Astrophysics Data System (ADS)

    Albatayneh, Aiman; Alterman, Dariusz; Page, Adrian; Moghtaderi, Behdad

    2017-05-01

    The design of low-energy buildings requires accurate thermal simulation software to assess the heating and cooling loads. Such designs should sustain thermal comfort for occupants and promote lower energy usage over the lifetime of the building. One of the house energy rating tools used in Australia is AccuRate, a star-rating tool to assess and compare the thermal performance of various buildings, in which the heating and cooling loads are calculated based on fixed operational temperatures between 20 °C and 25 °C to sustain thermal comfort for the occupants. However, these fixed settings for time and temperature considerably increase the heating and cooling loads. The adaptive thermal comfort model, on the other hand, applies a broader range of weather conditions, interacts with the occupants, and promotes low-energy solutions to maintain thermal comfort. This can be achieved by natural ventilation (opening windows/doors), suitable clothing, shading, and low-energy heating/cooling solutions for the occupied spaces (rooms). These measures save a significant amount of operating energy, which can be taken into account when predicting the energy consumption of a building. Most building thermal assessment tools depend on energy-based approaches to predict the thermal performance of a building, e.g. AccuRate in Australia; this approach encourages the use of energy to maintain thermal comfort. This paper describes the advantages of a temperature-based approach to assessing a building's thermal performance (using an adaptive thermal comfort model) over the energy-based approach (the AccuRate software used in Australia). The temperature-based approach was validated and compared with the energy-based approach using four full-scale housing test modules located in Newcastle, Australia (Cavity Brick (CB), Insulated Cavity Brick (InsCB), Insulated Brick Veneer (InsBV) and Insulated Reverse Brick Veneer (InsRBV)) subjected to a range of seasonal conditions in a moderate climate. The times required for heating and/or cooling under the adaptive thermal comfort approach and under AccuRate predictions were estimated. Significant savings (of about 50%) in energy consumption, by minimising the time required for heating and cooling, were achieved using the adaptive thermal comfort model.
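    A minimal sketch of the idea, assuming the widely used adaptive comfort relation T_comf ≈ 0.31·T_rm + 17.8 (de Dear/Brager, as in ASHRAE 55) with a ±3.5 °C acceptability band; the temperatures are synthetic and the comparison is only schematic, not AccuRate's actual engine:

    ```python
    import numpy as np

    # Assumed adaptive comfort band (ASHRAE-55-style):
    #   T_comf = 0.31 * T_rm + 17.8,  band = T_comf +/- 3.5 C (80% acceptability)
    def adaptive_band(t_rm, half_width=3.5):
        t_comf = 0.31 * np.asarray(t_rm) + 17.8
        return t_comf - half_width, t_comf + half_width

    rng = np.random.default_rng(1)
    t_indoor = rng.normal(24.0, 4.0, 8760)   # hypothetical free-running indoor temps
    t_rm = rng.normal(18.0, 6.0, 8760)       # hypothetical outdoor running means

    lo, hi = adaptive_band(t_rm)
    needs_fixed = np.mean((t_indoor < 20.0) | (t_indoor > 25.0))
    needs_adaptive = np.mean((t_indoor < lo) | (t_indoor > hi))
    print(f"hours needing conditioning, fixed 20-25 C band: {needs_fixed:.0%}")
    print(f"hours needing conditioning, adaptive band:      {needs_adaptive:.0%}")
    ```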

  16. Solution of the effective Hamiltonian of impurity hopping between two sites in a metal

    NASA Astrophysics Data System (ADS)

    Ye, Jinwu

    1997-07-01

    We analyze in detail all the possible fixed points of the effective Hamiltonian of a nonmagnetic impurity hopping between two sites in a metal obtained by Moustakas and Fisher (MF). We find a line of non-Fermi-liquid fixed points which continuously interpolates between the two-channel Kondo (2CK) fixed point and the one-channel, two-impurity Kondo (2IK) fixed point. There is one relevant direction with scaling dimension 1/2 and one leading irrelevant operator with dimension 3/2. There is also one marginal operator in the spin sector moving along this line. The marginal operator, combined with the leading irrelevant operator, will generate the relevant operator. For a general position on this line, the leading low-temperature exponents of the specific heat, the hopping susceptibility, and the electron conductivity, C_imp, χ_himp, and σ(T), are the same as those of the 2CK, but the finite-size spectrum depends on the position on the line. No universal ratios can be formed from the amplitudes of the three quantities except at the 2CK point on this line, where universal ratios can be formed. At the 2IK point on this line, σ(T) ~ 2σ_u(1 + aT^{3/2}), and no universal ratio can be formed either. The additional non-Fermi-liquid fixed point found by MF has the same symmetry as the 2IK; it has two relevant directions with scaling dimension 1/2 and is therefore also unstable. The leading low-temperature behaviors are C_imp ~ T, χ_himp ~ ln T, and σ(T) ~ 2σ_u(1 + aT^{3/2}); no universal ratios can be formed. The system is shown to flow to a line of Fermi-liquid fixed points which continuously interpolates between the noninteracting fixed point and the two-channel spin-flavor Kondo fixed point discussed by the author previously. The effect of particle-hole symmetry breaking is discussed. The effective Hamiltonian in an external magnetic field is analyzed. The scaling functions for the physically measurable quantities are derived in the different regimes, and their predictions for experiments are given. Finally, implications are given for a nonmagnetic impurity hopping around three sites with triangular symmetry, discussed by MF.

  17. Measuring Efficiency of Tunisian Schools in the Presence of Quasi-Fixed Inputs: A Bootstrap Data Envelopment Analysis Approach

    ERIC Educational Resources Information Center

    Essid, Hedi; Ouellette, Pierre; Vigeant, Stephane

    2010-01-01

    The objective of this paper is to measure the efficiency of high schools in Tunisia. We use a statistical data envelopment analysis (DEA)-bootstrap approach with quasi-fixed inputs to estimate the precision of our measure. To do so, we developed a statistical model serving as the foundation of the data generation process (DGP). The DGP is…

  18. Kinetic quantitation of cerebral PET-FDG studies without concurrent blood sampling: statistical recovery of the arterial input function.

    PubMed

    O'Sullivan, F; Kirrane, J; Muzi, M; O'Sullivan, J N; Spence, A M; Mankoff, D A; Krohn, K A

    2010-03-01

    Kinetic quantitation of dynamic positron emission tomography (PET) studies via compartmental modeling usually requires the time-course of the radio-tracer concentration in the arterial blood as an arterial input function (AIF). For human and animal imaging applications, significant practical difficulties are associated with direct arterial sampling and as a result there is substantial interest in alternative methods that require no blood sampling at the time of the study. A fixed population template input function derived from prior experience with directly sampled arterial curves is one possibility. Image-based extraction, including requisite adjustment for spillover and recovery, is another approach. The present work considers a hybrid statistical approach based on a penalty formulation in which the information derived from a priori studies is combined in a Bayesian manner with information contained in the sampled image data in order to obtain an input function estimate. The absolute scaling of the input is achieved by an empirical calibration equation involving the injected dose together with the subject's weight, height and gender. The technique is illustrated in the context of (18)F-fluorodeoxyglucose (FDG) PET studies in humans. A collection of 79 arterially sampled FDG blood curves is used as a basis for a priori characterization of input function variability, including scaling characteristics. Data from a series of 12 dynamic cerebral FDG PET studies in normal subjects are used to evaluate the performance of the penalty-based AIF estimation technique. The focus of evaluations is on quantitation of FDG kinetics over a set of 10 regional brain structures. As well as the new method, a fixed population template AIF and a direct AIF estimate based on segmentation are also considered. Kinetics analyses resulting from these three AIFs are compared with those resulting from arterially sampled AIFs. The proposed penalty-based AIF extraction method is found to achieve significant improvements over the fixed template and the segmentation methods. As well as achieving acceptable kinetic parameter accuracy, the quality of fit of the region of interest (ROI) time-course data based on the extracted AIF matches results based on arterially sampled AIFs. In comparison, significant deviation in the estimation of FDG flux and degradation in ROI data fit are found with the template and segmentation methods. The proposed AIF extraction method is recommended for practical use.
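    The abstract does not spell out the estimator, but a generic penalty formulation of this kind can be sketched as ridge-type least squares pulled toward a population template; every name and number below is hypothetical:

    ```python
    import numpy as np

    def estimate_aif(A, y, f_pop, lam=1.0):
        """Penalized least squares: argmin_f ||A f - y||^2 + lam * ||f - f_pop||^2.
        A maps a discretized input function to predicted image-derived data y;
        f_pop is a population template acting as the prior mean."""
        n = len(f_pop)
        lhs = A.T @ A + lam * np.eye(n)
        rhs = A.T @ y + lam * f_pop
        return np.linalg.solve(lhs, rhs)

    # Toy demonstration with a causal, smoothing (spillover-like) forward operator
    rng = np.random.default_rng(2)
    n = 60
    t = np.linspace(0, 60, n)
    f_true = t * np.exp(-t / 8.0)                        # hypothetical AIF shape
    f_pop = 0.9 * (t * np.exp(-t / 9.0))                 # slightly biased template
    A = np.tril(np.exp(-0.2 * np.subtract.outer(t, t)))  # convolution-like map
    y = A @ f_true + rng.normal(0, 0.05, n)

    f_hat = estimate_aif(A, y, f_pop, lam=0.5)
    print("RMSE template:          ", np.sqrt(np.mean((f_pop - f_true) ** 2)))
    print("RMSE penalized estimate:", np.sqrt(np.mean((f_hat - f_true) ** 2)))
    ```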

  19. Realization of the Temperature Scale in the Range from 234.3 K (Hg Triple Point) to 1084.62°C (Cu Freezing Point) in Croatia

    NASA Astrophysics Data System (ADS)

    Zvizdic, Davor; Veliki, Tomislav; Grgec Bermanec, Lovorka

    2008-06-01

    This article describes the realization of the International Temperature Scale in the range from 234.3 K (mercury triple point) to 1084.62°C (copper freezing point) at the Laboratory for Process Measurement (LPM), Faculty of Mechanical Engineering and Naval Architecture (FSB), University of Zagreb. The system for the realization of the ITS-90 consists of the sealed fixed-point cells (mercury triple point, water triple point and gallium melting point) and the apparatus designed for the optimal realization of open fixed-point cells which include the gallium melting point, tin freezing point, zinc freezing point, aluminum freezing point, and copper freezing point. The maintenance of the open fixed-point cells is described, including the system for filling the cells with pure argon and for maintaining the pressure during the realization.

  20. Two-point function of a d = 2 quantum critical metal in the limit k_F → ∞, N_f → 0 with N_f k_F fixed

    NASA Astrophysics Data System (ADS)

    Säterskog, Petter; Meszena, Balazs; Schalm, Koenraad

    2017-10-01

    We show that the fermionic and bosonic spectrum of d = 2 fermions at finite density coupled to a critical boson can be determined nonperturbatively in the combined limit k_F → ∞, N_f → 0 with N_f k_F fixed. In this double scaling limit, the boson two-point function is corrected, but only at one loop. This double scaling limit therefore incorporates the leading effect of Landau damping. The fermion two-point function is determined analytically in real space and numerically in (Euclidean) momentum space. The resulting spectrum is discontinuously connected to the quenched N_f → 0 result. For ω → 0 with k fixed, the spectrum exhibits the distinct non-Fermi-liquid behavior previously surmised from the RPA approximation. However, the exact answer obtained here shows that the RPA result does not fully capture the IR of the theory.

  1. Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes

    NASA Astrophysics Data System (ADS)

    Lavaux, Guilhem; Jasche, Jens

    2016-01-01

    This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations and matter density fields within a Gaussian statistics approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology, and allows for the reconstruction of both the final density field and the initial conditions at z = 1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian based technique to classify structures.

  2. Next-to-leading order QCD predictions for top-quark pair production with up to three jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Höche, S.; Maierhöfer, P.; Moretti, N.

    2017-03-07

    Here, we present theoretical predictions for the production of top-quark pairs with up to three jets at next-to-leading order in perturbative QCD. The relevant calculations are performed with Sherpa and OpenLoops. In order to address the issue of scale choices and related uncertainties in the presence of multiple scales, we compare results obtained with the standard scale H_T/2 at fixed order and with the MiNLO procedure. By analyzing various cross sections and distributions for tt̄ + 0,1,2,3 jets at the 13 TeV LHC, we find a remarkable overall agreement between fixed-order and MiNLO results. The differences are typically below the respective factor-two scale variations, suggesting that for all considered jet multiplicities missing higher-order effects should not exceed the ten percent level.
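    As an aside on what "factor-two scale variations" means in practice, here is a toy (not Sherpa/OpenLoops) with a one-loop running coupling: at NLO the explicit log compensates the running, so the μ ∈ {Q/2, Q, 2Q} envelope shrinks relative to LO:

    ```python
    import numpy as np

    ALPHAS_MZ, MZ, NF = 0.118, 91.1876, 5
    B0 = (33 - 2 * NF) / (12 * np.pi)

    def alpha_s(mu):
        """One-loop running strong coupling."""
        return ALPHAS_MZ / (1.0 + B0 * ALPHAS_MZ * np.log(mu**2 / MZ**2))

    def sigma(mu, Q=500.0, c1=3.0, nlo=True):
        """Toy observable sigma ~ as^2, plus the NLO term whose explicit log
        compensates the running of the coupling. Q plays the role of the hard
        scale (e.g. H_T/2); c1 is a made-up NLO coefficient."""
        a = alpha_s(mu)
        lo = a**2
        if not nlo:
            return lo
        return lo * (1.0 + a * (c1 + 2.0 * B0 * np.log(mu**2 / Q**2)))

    Q = 500.0
    for order in (False, True):
        vals = [sigma(f * Q, Q=Q, nlo=order) for f in (0.5, 1.0, 2.0)]
        spread = (max(vals) - min(vals)) / vals[1]
        print(f"{'NLO' if order else 'LO '}: central={vals[1]:.3e}  "
              f"factor-two envelope spread={spread:.1%}")
    ```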

  3. Tomlinson-Harashima Precoding for Multiuser MIMO Systems With Quantized CSI Feedback and User Scheduling

    NASA Astrophysics Data System (ADS)

    Sun, Liang; McKay, Matthew R.

    2014-08-01

    This paper studies the sum rate performance of a low-complexity quantized CSI-based Tomlinson-Harashima (TH) precoding scheme for downlink multiuser MIMO transmission, employing greedy user selection. The asymptotic distribution of the output signal-to-interference-plus-noise ratio of each selected user and the asymptotic sum rate as the number of users K grows large are derived using extreme value theory. For fixed finite signal-to-noise ratios and a finite number of transmit antennas n_T, we prove that as K grows large, the proposed approach can achieve the optimal sum rate scaling of the MIMO broadcast channel. We also prove that, if we ignore the precoding loss, the average sum rate of this approach converges to the average sum capacity of the MIMO broadcast channel. Our results provide insights into the effect of multiuser interference caused by quantized CSI on the multiuser diversity gain.

  4. Information Security Analysis Using Game Theory and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlicher, Bob G; Abercrombie, Robert K

    Information security analysis can be performed using game theory implemented in dynamic simulations of Agent Based Models (ABMs). Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. Our approach addresses imperfect information and scalability, which allows us to overcome limitations of current stochastic game models. Such models consider only perfect information, assuming that the defender is always able to detect attacks; they assume that the state transition probabilities are fixed before the game and that the players' actions are always synchronous; and most are not scalable with the size and complexity of the systems under consideration. Our use of ABMs yields results of selected experiments that demonstrate our proposed approach and provides a quantitative measure for realistic information systems and their related security scenarios.

  6. Wave Impact on a Wall: Comparison of Experiments with Similarity Solutions

    NASA Astrophysics Data System (ADS)

    Wang, A.; Duncan, J. H.; Lathrop, D. P.

    2014-11-01

    The impact of a steep water wave on a fixed partially submerged cube is studied with experiments and theory. The temporal evolution of the water surface profile upstream of the front face of the cube in its center plane is measured with a cinematic laser-induced fluorescence technique using frame rates up to 4,500 Hz. For a small range of cube positions, the surface profiles are found to form a nearly circular arc with upward curvature between the front face of the cube and a point just downstream of the wave crest. As the crest approaches the cube, the effective radius of this portion of the profile decreases rapidly. At the same time, the portion of the profile that is upstream of the crest approaches a straight line with a downward slope of about 15°. As the wave impact continues, the circular arc shrinks to zero radius with very high acceleration and a sudden transition to a high-speed vertical jet occurs. This flow singularity is modeled with a power-law scaling in time, which is used to create a time-independent system of equations of motion. The scaled governing equations are solved numerically and the similarly scaled measured free-surface shapes are favorably compared with the solutions. The support of the Office of Naval Research is gratefully acknowledged.
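    The power-law-in-time modeling step can be illustrated generically; the snippet fits R(t) = A(t0 - t)^α to synthetic radius data with scipy (all values made up, not the experiment's):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Power-law collapse of the arc radius near the jet singularity at t = t0.
    def radius(t, A, t0, alpha):
        return A * (t0 - t) ** alpha

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 0.95, 40)   # hypothetical time samples before collapse
    r = radius(t, 2.0, 1.0, 0.6) * (1 + rng.normal(0, 0.02, t.size))

    # Bounds keep t0 above the last sample so the power is well defined.
    popt, _ = curve_fit(radius, t, r, p0=(1.0, 1.05, 0.5),
                        bounds=([0.0, 0.96, 0.0], [10.0, 2.0, 2.0]))
    print("fitted A, t0, alpha:", np.round(popt, 3))
    ```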

  7. Quantifying the impact of time-varying baseline risk adjustment in the self-controlled risk interval design.

    PubMed

    Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei

    2015-12-01

    The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.

  8. A dynamical system approach to Bianchi III cosmology for Hu-Sawicki type f( R) gravity

    NASA Astrophysics Data System (ADS)

    Banik, Sebika Kangsha; Banik, Debika Kangsha; Bhuyan, Kalyan

    2018-02-01

    The cosmological dynamics of spatially homogeneous but anisotropic Bianchi type-III space-time is investigated in the presence of a perfect fluid within the framework of the Hu-Sawicki model. We use the dynamical system approach to perform a detailed analysis of the cosmological behaviour of this model for the model parameters n=1, c_1=1, determining all the fixed points, their stability, and the corresponding cosmological evolution. We have found stable fixed points with de Sitter solutions along with unstable radiation-like fixed points. We have identified a matter-like point which acts like an unstable spiral; when the initial conditions of a trajectory are very close to this point, it stabilizes at a stable accelerating point. Thus, in this model, the universe can naturally approach a phase of accelerated expansion following a radiation- or matter-dominated phase. It is also found that the isotropisation of this model is affected by the spatial curvature and that all the isotropic fixed points are spatially flat.
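    The fixed-point workflow (solve for equilibria, linearize, classify by Jacobian eigenvalues) can be sketched on a toy planar system; the system below is hypothetical, not the Hu-Sawicki equations:

    ```python
    import sympy as sp

    x, y = sp.symbols('x y', real=True)

    # Toy autonomous system standing in for the cosmological equations:
    #   x' = x*(1 - x - y),  y' = y*(x - 1/2)      (hypothetical)
    f = x * (1 - x - y)
    g = y * (x - sp.Rational(1, 2))

    J = sp.Matrix([f, g]).jacobian([x, y])
    fixed_points = sp.solve([f, g], [x, y], dict=True)

    for fp in fixed_points:
        eigs = J.subs(fp).eigenvals()
        stable = all(sp.re(ev) < 0 for ev in eigs)
        print(f"fixed point {fp}: eigenvalues {list(eigs)}  "
              f"-> {'stable' if stable else 'unstable'}")
    ```

    Here the point (1/2, 1/2) comes out as a stable spiral (complex eigenvalues with negative real part), the same kind of classification the abstract reports for its accelerating solutions.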

  9. A fixed false alarm probability figure of merit for gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Wąs, M.; Kalmus, P.; Leong, J. R.; Adams, T.; Leroy, N.; Macleod, D. M.; Pankow, C.; Robinet, F.

    2014-04-01

    Performance of gravitational wave (GW) detectors can be characterized by several figures of merit (FOMs) which are used to guide the detector’s commissioning and operations, and to gauge astrophysical sensitivity. One key FOM is the range in Mpc, averaged over orientation and sky location, at which a GW signal from binary neutron star inspiral and coalescence would have a signal-to-noise ratio (SNR) of 8 in a single detector. This fixed-SNR approach does not accurately reflect the effects of transient noise (glitches), which can severely limit the detectability of transient GW signals expected from a variety of astrophysical sources. We propose a FOM based instead on a fixed false-alarm probability (FAP). This is intended to give a more realistic estimate of the detectable GW transient range including the effect of glitches. Our approach applies equally to individual interferometers or a network of interferometers. We discuss the advantages of the fixed-FAP approach, present examples from a prototype implementation, and discuss the impact it has had on the recent commissioning of the GW detector GEO 600.
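    A schematic of the fixed-FAP idea (not the GEO 600 pipeline): pick the SNR threshold exceeded by a chosen fraction of background triggers, then rescale the range by 8/ρ_threshold, since signal amplitude falls as 1/distance; the background model and numbers below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical background triggers: a Gaussian-noise bulk plus a glitch tail
    background_snr = np.concatenate([
        np.abs(rng.normal(0, 1, 99000)) + 4.0,   # bulk near the analysis threshold
        rng.pareto(2.0, 1000) * 8.0 + 4.0,       # loud glitches (heavy tail)
    ])

    def snr_at_fap(snr_samples, fap):
        """SNR threshold exceeded by a fraction `fap` of background triggers."""
        return np.quantile(snr_samples, 1.0 - fap)

    rho = snr_at_fap(background_snr, fap=0.01)
    range_fixed_snr = 100.0                      # hypothetical Mpc at SNR = 8
    range_fixed_fap = range_fixed_snr * 8.0 / rho
    print(f"SNR threshold at 1% FAP: {rho:.1f}")
    print(f"range: {range_fixed_snr:.0f} Mpc (fixed SNR) -> "
          f"{range_fixed_fap:.0f} Mpc (fixed FAP)")
    ```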

  10. A scanning tunneling microscope capable of imaging specified micron-scale small samples.

    PubMed

    Tao, Wei; Cao, Yufei; Wang, Huafeng; Wang, Kaiyou; Lu, Qingyou

    2012-12-01

    We present a home-built scanning tunneling microscope (STM) which allows us to precisely position the tip on any specified small sample or sample feature of micron scale. The core structure is a stand-alone soft junction mechanical loop (SJML), in which a small piezoelectric tube scanner is mounted on a sliding piece and a "U"-like soft spring strip has one end fixed to the sliding piece and its opposite end holding the tip pointing to the sample on the scanner. Here, the tip can be precisely aligned to a specified small sample of micron scale by adjusting the position of the spring-clamped sample on the scanner in the field of view of an optical microscope. The aligned SJML can be transferred to a piezoelectric inertial motor for coarse approach, during which the U-spring is pushed towards the sample, causing the tip to approach the pre-aligned small sample. We have successfully approached a hand-cut tip made from 0.1 mm thin Pt/Ir wire to an isolated individual 32.5 × 32.5 μm² graphite flake. Good atomic resolution images and high-quality tunneling current spectra for that specified tiny flake are obtained in ambient conditions with high repeatability over one month, showing the high long-term stability of the new STM structure. In addition, frequency spectra of the tunneling current signals do not show an outstanding tip-mount-related (low-frequency) resonance, which further confirms the stability of the STM structure.

  11. Improving the Perception of Intelligence: A Short Intervention for Secondary School Students

    ERIC Educational Resources Information Center

    Medina-Garrido, Elena; León, Jaime

    2017-01-01

    Introduction: Holding a fixed or an incremental mindset influences academic performance; we wonder if an intervention would change students' mindsets. The main goal of this study was to design and analyse the effectiveness of an easy-to-scale intervention to diminish students' belief about intelligence as something innate and fixed, and think that we…

  12. Measuring the Returns to Lifelong Learning. CEE DP 110

    ERIC Educational Resources Information Center

    Blanden, Jo; Buscha, Franz; Sturgis, Patrick; Urwin, Peter

    2010-01-01

    Using the 1991 to 2007 waves of the UK British Household Panel Survey (BHPS), the authors estimate a fixed effects specification that has as outcomes (i) earnings and (ii) an indicator of social position measured using the CAMSIS scale. Adopting a fixed effects specification enables them to isolate the role of lifelong learning on these two…

  13. Qubit Architecture with High Coherence and Fast Tunable Coupling.

    PubMed

    Chen, Yu; Neill, C; Roushan, P; Leung, N; Fang, M; Barends, R; Kelly, J; Campbell, B; Chen, Z; Chiaro, B; Dunsworth, A; Jeffrey, E; Megrant, A; Mutus, J Y; O'Malley, P J J; Quintana, C M; Sank, D; Vainsencher, A; Wenner, J; White, T C; Geller, Michael R; Cleland, A N; Martinis, John M

    2014-11-28

    We introduce a superconducting qubit architecture that combines high-coherence qubits and tunable qubit-qubit coupling. With the ability to set the coupling to zero, we demonstrate that this architecture is protected from the frequency crowding problems that arise from fixed coupling. More importantly, the coupling can be tuned dynamically with nanosecond resolution, making this architecture a versatile platform with applications ranging from quantum logic gates to quantum simulation. We illustrate the advantages of dynamical coupling by implementing a novel adiabatic controlled-z gate, with a speed approaching that of single-qubit gates. Integrating coherence and scalable control, the introduced qubit architecture provides a promising path towards large-scale quantum computation and simulation.

  14. Scalable boson sampling with time-bin encoding using a loop-based architecture.

    PubMed

    Motes, Keith R; Gilchrist, Alexei; Dowling, Jonathan P; Rohde, Peter P

    2014-09-19

    We present an architecture for arbitrarily scalable boson sampling using two nested fiber loops. The architecture has fixed experimental complexity, irrespective of the size of the desired interferometer, whose scale is limited only by fiber and switch loss rates. The architecture employs time-bin encoding, whereby the incident photons form a pulse train, which enters the loops. Dynamically controlled loop coupling ratios allow the construction of the arbitrary linear optics interferometers required for boson sampling. The architecture employs only a single point of interference and may thus be easier to stabilize than other approaches. The scheme has polynomial complexity and could be realized using demonstrated present-day technologies.

  15. High-z objects and cold dark matter cosmogonies - Constraints on the primordial power spectrum on small scales

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1993-01-01

    Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse of scales M between 10^9-10^10 solar masses and (1-5) × 10^14 solar masses. We compute the probabilities and comoving space densities of various-scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow, and an open universe.

  16. Upper Limit of Weights in TAI Computation

    NASA Technical Reports Server (NTRS)

    Thomas, Claudine; Azoubib, Jacques

    1996-01-01

    The international reference time scale International Atomic Time (TAI) computed by the Bureau International des Poids et Mesures (BIPM) relies on a weighted average of data from a large number of atomic clocks, in which the weight attributed to a given clock depends on its long-term stability. In this paper the TAI algorithm is used as the basis for a discussion of how to implement an upper limit of weight for clocks contributing to the ensemble time. This problem is approached through the comparison of two different techniques. In one case, a maximum relative weight is fixed: no individual clock can contribute more than a given fraction to the resulting time scale. The weight of each clock is then adjusted according to the qualities of the whole set of contributing elements. In the other case, a parameter characteristic of frequency stability is chosen: no individual clock can appear more stable than the stated limit. This is equivalent to choosing an absolute limit of weight and attributing it to the most stable clocks independently of the other elements of the ensemble. The first technique is more robust than the second and automatically optimizes the stability of the resulting time scale, but leads to a more complicated computation. The second technique has been used in the TAI algorithm since the very beginning. Careful analysis of tests on real clock data shows that improving the stability of the time scale requires revision from time to time of the fixed value chosen for the upper limit of absolute weight. In particular, we present results which confirm the decision of the CCDS Working Group on TAI to increase the absolute upper limit by a factor of 2.5. We also show that the use of an upper limit on relative contribution further helps to improve the stability and may be a useful step towards better use of the massive ensemble of HP 5071A clocks now contributing to TAI.
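    A minimal sketch of a capped-relative-weight ensemble average (not the actual BIPM/ALGOS code): weights proportional to 1/σ², iteratively clipped so that no clock exceeds a chosen cap, with the remaining mass redistributed over the uncapped clocks:

    ```python
    import numpy as np

    def capped_weights(instabilities, w_max=0.25):
        """Normalized weights ~ 1/sigma^2, with no clock above w_max."""
        w_raw = 1.0 / np.asarray(instabilities, dtype=float) ** 2
        capped = np.zeros(w_raw.size, dtype=bool)
        w = np.empty_like(w_raw)
        for _ in range(w_raw.size):
            free_mass = 1.0 - capped.sum() * w_max   # weight left for free clocks
            w[capped] = w_max
            w[~capped] = free_mass * w_raw[~capped] / w_raw[~capped].sum()
            newly_over = (~capped) & (w > w_max)
            if not newly_over.any():
                break
            capped |= newly_over                     # clip and redistribute again
        return w

    sigmas = np.array([1.0, 1.2, 3.0, 5.0, 0.4])     # hypothetical instabilities
    w = capped_weights(sigmas, w_max=0.25)
    print("weights:", np.round(w, 3), " sum:", round(float(w.sum()), 3))
    ```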

  17. Technique for fixing a temporalis muscle using a titanium plate to the implanted hydroxyapatite ceramics for bone defects.

    PubMed

    Ono, I; Tateshita, T; Sasaki, T; Matsumoto, M; Kodama, N

    2001-05-01

    We devised a technique to fix the temporalis muscle to the transplanted hydroxyapatite implant by using a titanium plate, which is fixed to the hydroxyapatite ceramic implant by screws and achieves good clinical results. The size, shape, and curvature of the hydroxyapatite ceramic implants were determined according to full-scale models fabricated using the laser lithographic modeling method from computed tomography data. A titanium plate was then fixed with screws on the implant before implantation, and then the temporalis muscle was refixed to the holes at both ends of the plate. The application of this technique reduced the hospitalization time and achieved good results esthetically.

  18. Neighborhood-Scale Spatial Models of Diesel Exhaust Concentration Profile Using 1-Nitropyrene and Other Nitroarenes

    PubMed Central

    Schulte, Jill K.; Fox, Julie R.; Oron, Assaf P.; Larson, Timothy V.; Simpson, Christopher D.; Paulsen, Michael; Beaudet, Nancy; Kaufman, Joel D.; Magzamen, Sheryl

    2016-01-01

    With emerging evidence that diesel exhaust exposure poses distinct risks to human health, the need for fine-scale models of diesel exhaust pollutants is growing. We modeled the spatial distribution of several nitrated polycyclic aromatic hydrocarbons (NPAHs) to identify fine-scale gradients in diesel exhaust pollution in two Seattle, WA neighborhoods. Our modeling approach fused land-use regression, meteorological dispersion modeling, and pollutant monitoring from both fixed and mobile platforms. We applied these modeling techniques to concentrations of 1-nitropyrene (1-NP), a highly specific diesel exhaust marker, at the neighborhood scale. We developed models of two additional nitroarenes present in secondary organic aerosol: 2-nitropyrene and 2-nitrofluoranthene. Summer predictors of 1-NP, including distance to railroad, truck emissions, and mobile black carbon measurements, showed a greater specificity to diesel sources than predictors of other NPAHs. Winter sampling results did not yield stable models, likely due to regional mixing of pollutants in turbulent weather conditions. The model of summer 1-NP had an R2 of 0.87 and cross-validated R2 of 0.73. The synthesis of high-density sampling and hybrid modeling was successful in predicting diesel exhaust pollution at a very fine scale and identifying clear gradients in NPAH concentrations within urban neighborhoods. PMID:26501773
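    The land-use-regression ingredient, taken in isolation, looks roughly like the following; predictors are named after those in the abstract but all values are synthetic:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(5)
    n_sites = 30

    # Hypothetical site-level predictors (names follow the abstract, values synthetic)
    dist_to_railroad = rng.uniform(50, 2000, n_sites)      # meters
    truck_emissions = rng.uniform(0, 10, n_sites)          # arbitrary units
    mobile_black_carbon = rng.uniform(0.2, 3.0, n_sites)   # ug/m3

    X = np.column_stack([np.log(dist_to_railroad), truck_emissions,
                         mobile_black_carbon])
    # Synthetic 1-NP concentrations generated from an assumed linear model + noise
    y = 2.0 - 0.4 * X[:, 0] + 0.15 * X[:, 1] + 0.5 * X[:, 2] \
        + rng.normal(0, 0.1, n_sites)

    model = LinearRegression().fit(X, y)
    print("coefficients:", np.round(model.coef_, 3))
    print("R^2 on training sites:", round(model.score(X, y), 3))
    ```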

  19. Predicting the propagation of concentration and saturation fronts in fixed-bed filters.

    PubMed

    Callery, O; Healy, M G

    2017-10-15

    The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents which have the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess novel media for potential use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents which have been found to be capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests was used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments were found to provide sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
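    One common way to realize such a prediction is to fit a logistic (Thomas-type) breakthrough curve to the small-column data and rescale its parameters with adsorbent mass; the abstract does not give the authors' model, so the sketch below is only a stand-in under stated assumptions:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def breakthrough(v, k, v50):
        """Logistic (Thomas-type) breakthrough: C/C0 versus treated volume v.
        v50 is the volume at 50% breakthrough, k a rate-like constant."""
        return 1.0 / (1.0 + np.exp(-k * (v - v50)))

    # Hypothetical small-scale column data: treated volume (L) vs C/C0
    v_small = np.array([1, 2, 4, 6, 8, 10, 12, 14], dtype=float)
    c_small = np.array([0.01, 0.03, 0.10, 0.28, 0.55, 0.78, 0.90, 0.96])

    (k, v50), _ = curve_fit(breakthrough, v_small, c_small, p0=(0.5, 7.0))

    # Scale-up assumption: v50 grows with adsorbent mass, k shrinks inversely
    # (same total capacity, proportionally longer mass-transfer zone).
    mass_ratio = 50.0
    v_large = np.linspace(0, mass_ratio * 20, 5)
    pred = breakthrough(v_large, k / mass_ratio, mass_ratio * v50)
    print("fitted k, v50:", round(k, 3), round(v50, 2))
    print("predicted large-column C/C0:", np.round(pred, 3))
    ```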

  20. Biomarker discovery for colon cancer using a 761 gene RT-PCR assay.

    PubMed

    Clark-Langone, Kim M; Wu, Jenny Y; Sangli, Chithra; Chen, Angela; Snable, James L; Nguyen, Anhthu; Hackett, James R; Baker, Joffre; Yothers, Greg; Kim, Chungyeul; Cronin, Maureen T

    2007-08-15

    Reverse transcription PCR (RT-PCR) is widely recognized to be the gold standard method for quantifying gene expression. Studies using RT-PCR technology as a discovery tool have historically been limited to relatively small gene sets compared to other gene expression platforms such as microarrays. We have recently shown that TaqMan RT-PCR can be scaled up to profile expression for 192 genes in fixed paraffin-embedded (FPE) clinical study tumor specimens. This technology has also been used to develop and commercialize a widely used clinical test for breast cancer prognosis and prediction, the Oncotype DX assay. A similar need exists in colon cancer for a test that provides information on the likelihood of disease recurrence in colon cancer (prognosis) and the likelihood of tumor response to standard chemotherapy regimens (prediction). We have now scaled our RT-PCR assay to efficiently screen 761 biomarkers across hundreds of patient samples and applied this process to biomarker discovery in colon cancer. This screening strategy remains attractive due to the inherent advantages of maintaining platform consistency from discovery through clinical application. RNA was extracted from formalin-fixed, paraffin-embedded (FPE) tissue, as old as 28 years, from 354 patients enrolled in NSABP C-01 and C-02 colon cancer studies. Multiplexed reverse transcription reactions were performed using a gene-specific primer pool containing 761 unique primers. PCR was performed as independent TaqMan reactions for each candidate gene. Hierarchical clustering demonstrates that genes expected to co-express form obvious, distinct and in certain cases very tightly correlated clusters, validating the reliability of this technical approach to biomarker discovery. We have developed a high throughput, quantitatively precise multi-analyte gene expression platform for biomarker discovery that approaches low density DNA arrays in numbers of genes analyzed while maintaining the high specificity, sensitivity and reproducibility that are characteristics of RT-PCR. Biomarkers discovered using this approach can be transferred to a clinical reference laboratory setting without having to re-validate the assay on a second technology platform.
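    The clustering step can be sketched generically: correlation distance between gene expression profiles, then average-linkage hierarchical clustering (synthetic matrix, not the study's data):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(6)

    # Synthetic expression matrix: 12 genes x 40 tumors, built so that genes
    # 0-5 and 6-11 form two co-expressed groups (stand-ins for related markers).
    base_a, base_b = rng.normal(0, 1, 40), rng.normal(0, 1, 40)
    expr = np.vstack([base_a + rng.normal(0, 0.3, 40) for _ in range(6)] +
                     [base_b + rng.normal(0, 0.3, 40) for _ in range(6)])

    # Correlation distance between gene profiles, condensed upper-triangle form
    corr = np.corrcoef(expr)
    dist = 1.0 - corr[np.triu_indices_from(corr, k=1)]
    Z = linkage(dist, method='average')
    print("cluster labels:", fcluster(Z, t=2, criterion='maxclust'))
    ```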

  1. Unification of Intercontinental Height Systems based on the Fixed Geodetic Boundary Value Problem - A Case Study in Spherical Approximation

    NASA Astrophysics Data System (ADS)

    Grombein, T.; Seitz, K.; Heck, B.

    2013-12-01

    In general, national height reference systems are related to individual vertical datums defined by specific tide gauges. The discrepancy of these vertical datums causes height system biases on the order of 1-2 m at a global scale. Continental height systems can be connected by spirit leveling and gravity measurements along the leveling lines, as performed for the definition of the European Vertical Reference Frame. In order to unify intercontinental height systems, an indirect connection is needed. For this purpose, global geopotential models derived from recent satellite missions like GOCE provide an important contribution. However, to achieve a highly-precise solution, a combination with local terrestrial gravity data is indispensable. Such combinations result in the solution of a Geodetic Boundary Value Problem (GBVP). In contrast to previous studies, mostly related to the traditional (scalar) free GBVP, the present paper discusses the use of the fixed GBVP for height system unification, where gravity disturbances instead of gravity anomalies are applied as boundary values. The basic idea of our approach is a conversion of measured gravity anomalies to gravity disturbances, where unknown datum parameters occur that can be associated with height system biases. In this way, the fixed GBVP can be extended by datum parameters for each datum zone. By evaluating the GBVP at GNSS/leveling benchmarks, the unknown datum parameters can be estimated in a least squares adjustment. Besides the developed theory, we present numerical results of a case study based on the spherical fixed GBVP and boundary values simulated by the use of the global geopotential model EGM2008. In a further step, the impact of approximations like linearization as well as topographic and ellipsoidal effects is taken into account by suitable reduction and correction terms.
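    In its simplest reading, the datum-parameter estimation reduces to least squares with one indicator column per datum zone, evaluated at the GNSS/leveling benchmarks; the numbers below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical benchmark residuals l = h_GNSS - H_leveled - N_model (meters),
    # each benchmark belonging to one of three vertical datum zones.
    zones = rng.integers(0, 3, 60)
    true_bias = np.array([0.45, -0.80, 1.20])       # assumed datum offsets
    l = true_bias[zones] + rng.normal(0, 0.03, 60)  # 3 cm observation noise

    # Design matrix: one indicator column per datum zone
    A = np.zeros((60, 3))
    A[np.arange(60), zones] = 1.0

    bias_hat, *_ = np.linalg.lstsq(A, l, rcond=None)
    print("estimated datum biases [m]:", np.round(bias_hat, 3))
    ```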

  2. Paleomagnetic Tests of Global Plate Reconstructions with Fixed and Moving Hotspots

    NASA Astrophysics Data System (ADS)

    Andrews, D. L.; Gordon, R. G.; Horner-Johnson, B. C.

    2004-12-01

    Three distinct approaches have been used in prior work to estimate the motion of the Pacific basin plates relative to the surrounding continents. The first approach is to use the global plate motion circuit through Antarctica (e.g., the Pacific plate to the Antarctic plate to the African plate to the North American plate). An update to this approach is to incorporate the modest mid-Tertiary motion between East and West Antarctica estimated by Cande et al. (2000). A recently proposed second approach is to take an alternative circuit for the early Tertiary of the Pacific plate to the Australian plate to the East Antarctic plate to the African plate to the North American plate (Steinberger et al. 2004). The third approach is to assume that the hotspots in the Pacific Ocean are fixed relative to those in the Atlantic and Indian Oceans (e.g., Engebretson et al., 1986), which we recently showed indicates motion between East and West Antarctica of 800 ± 500 km near the Ross Sea Embayment. The first approach (global plate motion circuit through Antarctica) indicates very rapid motion between Pacific and Indo-Atlantic hotspots during the early Tertiary (e.g., Raymond et al. 2000). The second approach (global plate motion circuit through Australia) indicates slower, but still substantial, motion between Pacific and Indo-Atlantic hotspots (Steinberger et al. 2004). Because each of the three approaches predicts distinctly different motion between the Pacific plate and the continental plates, they can be tested with paleomagnetic data. The results of such tests indicate that the first approach leads to systematic and significant misfits between Pacific and non-Pacific early Tertiary and Late Cretaceous paleomagnetic poles. The second approach leads to slightly smaller misfits. In contrast, the circuit based on fixed hotspots brings the Pacific and non-Pacific paleomagnetic poles into consistency. Thus the paleomagnetic data decisively favor fixed hotspots over the alternative approaches and suggest that motion between hotspots is substantially less than inferred by Steinberger et al. (2004).

  3. Natural Monopoly in Principles Textbooks: A Pedagogical Note.

    ERIC Educational Resources Information Center

    Ulbrich, Holley H.

    1991-01-01

    Argues that the textbook presentation of the concept of natural monopolies has changed little since the early 1960s. Suggests that most economics textbooks have ignored the issue of economies of scale versus fixed costs. Notes that educators often discuss economies of scale without explaining why certain industries enjoy greater scale economies…

  4. Compatibility of separatrix density scaling for divertor detachment with H-mode pedestal operation in DIII-D

    NASA Astrophysics Data System (ADS)

    Leonard, A. W.; McLean, A. G.; Makowski, M. A.; Stangeby, P. C.

    2017-08-01

    The midplane separatrix density is characterized in response to variations in upstream parallel heat flux density and central density through deuterium gas injection. The midplane density is determined from a high spatial resolution Thomson scattering diagnostic at the midplane with power balance analysis to determine the separatrix location. The heat flux density is varied by scans of three parameters, auxiliary heating, toroidal field with fixed plasma current, and plasma current with fixed safety factor, q 95. The separatrix density just before divertor detachment onset is found to scale consistent with the two-point model when radiative dissipation is taken into account. The ratio of separatrix to pedestal density, n e,sep/n e,ped varies from  ⩽30% to  ⩾60% over the dataset, helping to resolve the conflicting scaling of core plasma density limit and divertor detachment onset. The scaling of the separatrix density at detachment onset is combined with H-mode power threshold scaling to obtain a scaling ratio of minimum n e,sep/n e,ped expected in future devices.

  5. A Mobile Sensor Network to Map CO2 in Urban Environments

    NASA Astrophysics Data System (ADS)

    Lee, J.; Christen, A.; Nesic, Z.; Ketler, R.

    2014-12-01

    Globally, an estimated 80% of all fuel-based CO2 emissions into the atmosphere are attributable to cities, but there is still a lack of tools to map, visualize and monitor emissions at the scales at which emission reduction strategies can be implemented - the local and urban scale. Mobile CO2 sensors, such as those attached to taxis and other existing mobile platforms, may be a promising way to observe and map CO2 mixing ratios across heterogeneous urban environments with a limited number of sensors. Emerging modular open-source technologies and inexpensive compact sensor components not only enable rapid prototyping and replication, but also allow for the miniaturization and mobilization of traditionally fixed sensor networks. We aim to optimize the methods and technologies for monitoring CO2 in cities using a network of CO2 sensors deployable on vehicles and bikes. Our sensor package is contained in a compact weather-proof case (35.8 cm × 27.8 cm × 11.8 cm), powered independently by battery or by car, and includes an LI-COR LI-820 infrared gas analyzer (LI-COR Inc., Lincoln, NE, USA), an Arduino Mega microcontroller (Arduino CC, Italy), an Adafruit GPS (Adafruit Technologies, NY, USA), and a digital air temperature thermometer, which measure CO2 mixing ratios (ppm), geolocation and speed, pressure, and temperature, respectively, at 1-second intervals. With the deployment of our sensor technology, we will determine whether such a semi-autonomous mobile approach to monitoring CO2 in cities can determine excess urban CO2 mixing ratios (i.e. the 'urban CO2 dome') when compared to values measured at a fixed, remote background site. We present results from a pilot study in Vancouver, BC, where a network of our new sensors was deployed both in a fixed network and in a mobile campaign, and examine the spatial biases of the two methods.

  6. Advancing UAS methods for monitoring coastal environments

    NASA Astrophysics Data System (ADS)

    Ridge, J.; Seymour, A.; Rodriguez, A. B.; Dale, J.; Newton, E.; Johnston, D. W.

    2017-12-01

    Utilizing fixed-wing Unmanned Aircraft Systems (UAS), we are working to improve coastal monitoring by increasing the accuracy, precision, temporal resolution, and spatial coverage of habitat distribution maps. Generally, multirotor aircraft are preferred for precision imaging, but recent advances in fixed-wing technology have greatly increased their capabilities and application for fine-scale (decimeter-centimeter) measurements. Present mapping methods employed by North Carolina coastal managers involve expensive, time-consuming and localized observation of coastal environments, which often lack the necessary frequency to make timely management decisions. For example, it has taken several decades to fully map oyster reefs along the NC coast, making it nearly impossible to track trends in oyster reef populations responding to harvesting pressure and water quality degradation. It is difficult for the state to employ manned flights for collecting aerial imagery to monitor intertidal oyster reefs, because flights are usually conducted after seasonal increases in turbidity. In addition, post-storm monitoring of coastal erosion from manned platforms is often conducted days after the event and collects oblique aerial photographs which are difficult to use for accurately measuring change. Here, we describe how fixed-wing UAS and standard RGB sensors can be used to rapidly quantify and assess critical coastal habitats (e.g., barrier islands, oyster reefs, etc.), providing for increased temporal frequency to isolate long-term and event-driven (storms, harvesting) impacts. Furthermore, drone-based approaches can accurately image intertidal habitats as well as resolve information such as vegetation density and bathymetry from shallow submerged areas. We obtain UAS imagery of a barrier island and oyster reefs under ideal conditions (low tide, turbidity, and sun angle) to create high resolution (cm scale) maps and digital elevation models to assess habitat condition. Concurrently, we test the accuracy of UAS platforms and image analysis tools against traditional high-resolution mapping equipment (GPS and terrestrial lidar) and in situ sampling (density quadrats) to conduct error analysis of UAS orthoimagery and data processing.

  7. Spatiotemporal Determinants of Urban Leptospirosis Transmission: Four-Year Prospective Cohort Study of Slum Residents in Brazil.

    PubMed

    Hagan, José E; Moraga, Paula; Costa, Federico; Capian, Nicolas; Ribeiro, Guilherme S; Wunder, Elsio A; Felzemburgh, Ridalva D M; Reis, Renato B; Nery, Nivison; Santana, Francisco S; Fraga, Deborah; Dos Santos, Balbino L; Santos, Andréia C; Queiroz, Adriano; Tassinari, Wagner; Carvalho, Marilia S; Reis, Mitermayer G; Diggle, Peter J; Ko, Albert I

    2016-01-01

    Rat-borne leptospirosis is an emerging zoonotic disease in urban slum settlements for which there are no adequate control measures. The challenge in elucidating risk factors and informing approaches for prevention is the complex and heterogeneous environment within slums, which vary at fine spatial scales and influence transmission of the bacterial agent. We performed a prospective study of 2,003 slum residents in the city of Salvador, Brazil during a four-year period (2003-2007) and used a spatiotemporal modelling approach to delineate the dynamics of leptospiral transmission. Household interviews and Geographical Information System surveys were performed annually to evaluate risk exposures and environmental transmission sources. We completed annual serosurveys to ascertain leptospiral infection based on serological evidence. Among the 1,730 (86%) individuals who completed at least one year of follow-up, the infection rate was 35.4 (95% CI, 30.7-40.6) per 1,000 annual follow-up events. Male gender, illiteracy, and age were independently associated with infection risk. Environmental risk factors included rat infestation (OR 1.46, 95% CI, 1.00-2.16), contact with mud (OR 1.57, 95% CI 1.17-2.17) and lower household elevation (OR 0.92 per 10m increase in elevation, 95% CI 0.82-1.04). The spatial distribution of infection risk was highly heterogeneous and varied across small scales. Fixed effects in the spatiotemporal model accounted for the majority of the spatial variation in risk, but there was a significant residual component that was best explained by the spatial random effect. Although infection risk varied between years, the spatial distribution of risk associated with fixed and random effects did not vary temporally. Specific "hot-spots" consistently had higher transmission risk during study years. The risk for leptospiral infection in urban slums is determined in large part by structural features, both social and environmental. Our findings indicate that topographic factors such as household elevation and inadequate drainage increase risk by promoting contact with mud and suggest that the soil-water interface serves as the environmental reservoir for spillover transmission. The use of a spatiotemporal approach allowed the identification of geographic outliers with unexplained risk patterns. This approach, in addition to guiding targeted community-based interventions and identifying new hypotheses, may have general applicability towards addressing environmentally-transmitted diseases that have emerged in complex urban slum settings.

  8. Fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks.

    PubMed

    Li, Hongfei; Li, Chuandong; Huang, Tingwen; Zhang, Wanli

    2018-02-01

    This article is concerned with the fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks via two different controllers. By using a novel constructive approach based on some comparison techniques for differential inequalities, an improved theorem of fixed-time stability for impulsive dynamical systems is established. In addition, based on the fixed-time stability theorem for impulsive dynamical systems, two different control protocols are designed to ensure the fixed-time stabilization of impulsive Cohen-Grossberg BAM neural networks, which include and extend the earlier works. Finally, two simulation examples are provided to illustrate the validity of the proposed theoretical results. Copyright © 2017 Elsevier Ltd. All rights reserved.
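    For intuition on "fixed-time" (as opposed to merely finite-time) stability, here is a standard scalar Polyakov-type example, not the paper's impulsive network: the settling time is bounded uniformly in the initial condition, with the scalar bound T ≤ 1/(a(1-p)) + 1/(b(q-1)):

    ```python
    import numpy as np

    # Scalar fixed-time-stable dynamic  x' = -a*|x|^p*sgn(x) - b*|x|^q*sgn(x),
    # with 0 < p < 1 < q (Polyakov-type construction, used here for illustration).
    def settle_time(x0, a=2.0, b=2.0, p=0.5, q=3.0, tol=1e-6):
        x, t = float(x0), 0.0
        while abs(x) > tol:
            speed = a * abs(x) ** p + b * abs(x) ** q
            dt = min(1e-4, 0.05 * abs(x) / speed)  # shrink step to keep Euler stable
            x -= dt * speed * np.sign(x)
            t += dt
        return t

    # Settling-time bound independent of x0 (scalar case):
    t_bound = 1 / (2.0 * (1 - 0.5)) + 1 / (2.0 * (3.0 - 1.0))
    print(f"theoretical bound: {t_bound:.2f} s")
    for x0 in (0.1, 10.0, 1e4):
        print(f"x0 = {x0:>8}: settles in {settle_time(x0):.3f} s")
    ```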

  9. Metabolic profiling of two maize (Zea mays L.) inbred lines inoculated with the nitrogen fixing plant-interacting bacteria Herbaspirillum seropedicae and Azospirillum brasilense

    PubMed Central

    Brusamarello-Santos, Liziane Cristina; Gilard, Françoise; Brulé, Lenaïg; Quilleré, Isabelle; Gourion, Benjamin; Ratet, Pascal; Maltempi de Souza, Emanuel; Lea, Peter J.; Hirel, Bertrand

    2017-01-01

    Maize roots can be colonized by free-living atmospheric nitrogen (N2)-fixing bacteria (diazotrophs). However, the agronomic potential of non-symbiotic N2-fixation in such an economically important species as maize has still not been fully exploited. A preliminary approach to improve our understanding of the mechanisms controlling the establishment of such N2-fixing associations has been developed, using two maize inbred lines exhibiting different physiological characteristics. The bacterial-plant interaction has been characterized by means of a metabolomic approach. Two established model strains of Nif+ diazotrophic bacteria, Herbaspirillum seropedicae and Azospirillum brasilense, and their Nif- counterparts deficient in nitrogenase activity, were used to evaluate the impact of the bacterial inoculation and of N2 fixation on the root and leaf metabolic profiles. The two N2-fixing bacteria have been used to inoculate two genetically distant maize lines (FV252 and FV2), already characterized for their contrasting physiological properties. Using a well-controlled gnotobiotic experimental system that allows inoculation of maize plants with the two diazotrophs in a N-free medium, we demonstrated that both maize lines were efficiently colonized by the two bacterial species. We also showed that in the early stages of plant development, both bacterial strains were able to reduce acetylene, suggesting that they contain functional nitrogenase activity and are able to efficiently fix atmospheric N2 (Fix+). The metabolomic approach allowed the identification of metabolites in the two maize lines that were representative of the N2-fixing plant-bacterial interaction; these included mannitol and, to a lesser extent, trehalose and isocitrate, whilst other metabolites such as asparagine, although exhibiting only a small increase in maize roots following bacterial infection, were specific for the two Fix+ bacterial strains, in comparison to their Fix- counterparts. Moreover, a number of metabolites exhibited a maize-genotype-specific pattern of accumulation, suggesting that the highly diverse maize genetic resources could be further exploited in terms of beneficial plant-bacterial interactions for optimizing maize growth, with reduced N fertilization inputs. PMID:28362815

  10. Metabolic profiling of two maize (Zea mays L.) inbred lines inoculated with the nitrogen fixing plant-interacting bacteria Herbaspirillum seropedicae and Azospirillum brasilense.

    PubMed

    Brusamarello-Santos, Liziane Cristina; Gilard, Françoise; Brulé, Lenaïg; Quilleré, Isabelle; Gourion, Benjamin; Ratet, Pascal; Maltempi de Souza, Emanuel; Lea, Peter J; Hirel, Bertrand

    2017-01-01

    Maize roots can be colonized by free-living atmospheric nitrogen (N2)-fixing bacteria (diazotrophs). However, the agronomic potential of non-symbiotic N2 fixation in such an economically important species as maize has still not been fully exploited. A preliminary approach to improve our understanding of the mechanisms controlling the establishment of such N2-fixing associations has been developed, using two maize inbred lines exhibiting different physiological characteristics. The bacterial-plant interaction has been characterized by means of a metabolomic approach. Two established model strains of Nif+ diazotrophic bacteria, Herbaspirillum seropedicae and Azospirillum brasilense, and their Nif- counterparts deficient in nitrogenase activity, were used to evaluate the impact of bacterial inoculation and of N2 fixation on the root and leaf metabolic profiles. The two N2-fixing bacteria were used to inoculate two genetically distant maize lines (FV252 and FV2), already characterized for their contrasting physiological properties. Using a well-controlled gnotobiotic experimental system that allows inoculation of maize plants with the two diazotrophs in an N-free medium, we demonstrated that both maize lines were efficiently colonized by the two bacterial species. We also showed that in the early stages of plant development, both bacterial strains were able to reduce acetylene, suggesting that they contain functional nitrogenase activity and are able to efficiently fix atmospheric N2 (Fix+). The metabolomic approach allowed the identification of metabolites in the two maize lines that were representative of the N2-fixing plant-bacterial interaction; these included mannitol and, to a lesser extent, trehalose and isocitrate. Other metabolites, such as asparagine, although exhibiting only a small increase in maize roots following bacterial infection, were specific to the two Fix+ bacterial strains in comparison with their Fix- counterparts. Moreover, a number of metabolites exhibited a maize-genotype-specific pattern of accumulation, suggesting that the highly diverse maize genetic resources could be further exploited in terms of beneficial plant-bacterial interactions for optimizing maize growth with reduced N fertilization inputs.

  11. Using stochastic dynamic programming to support catchment-scale water resources management in China

    NASA Astrophysics Data System (ADS)

    Davidsen, Claus; Pereira-Cardenal, Silvio Javier; Liu, Suxia; Mo, Xingguo; Rosbjerg, Dan; Bauer-Gottwein, Peter

    2013-04-01

    A hydro-economic modelling approach is used to optimize reservoir management at the river basin level. We demonstrate the potential of this integrated approach on the Ziya River basin, a complex basin on the North China Plain south-east of Beijing. The area is subject to severe water scarcity due to low and extremely seasonal precipitation, and the intense agricultural production is highly dependent on irrigation. Large reservoirs provide water storage for dry months, while groundwater and the external South-to-North Water Transfer Project are alternative sources of water. An optimization model based on stochastic dynamic programming has been developed. The objective function is to minimize the total cost of supplying water to the users, while satisfying minimum ecosystem flow constraints. Each user group (agriculture, domestic and industry) is characterized by fixed demands, fixed water allocation costs for the different water sources (surface water, groundwater and external water) and fixed costs of water supply curtailment. The multiple reservoirs in the basin are aggregated into a single reservoir to reduce the dimensionality of the decision space. Water availability is estimated using a hydrological model based on the Budyko framework and forced with 51 years of observed daily rainfall and temperature data. Twenty-three years of observed discharge from an in-situ station located downstream of a remote mountainous catchment are used for model calibration. Runoff serial correlation is described by a Markov chain that is used to generate monthly runoff scenarios for the reservoir. The optimal costs at a given reservoir state and stage are calculated as the minimum sum of immediate and future costs. Based on the total costs for all states and stages, water value tables are generated which contain the marginal value of stored water as a function of the month, the inflow state and the reservoir state. The water value tables are used to guide allocation decisions in simulation mode. The performance of the operation rules based on water value tables was evaluated, and the approach was used successfully to assess the performance of alternative development scenarios and infrastructure projects in the case study region.
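
    To make the recursion concrete, the following is a minimal sketch of a backward stochastic-dynamic-programming pass of the kind described above, with a discretized storage grid, a two-state (dry/wet) Markov inflow chain, and hypothetical demands and costs; none of the numbers come from the study.

    ```python
    import numpy as np

    # Toy backward recursion producing monthly water value tables.
    n_s, n_q = 11, 2
    storage = np.linspace(0.0, 100.0, n_s)         # storage grid [volume units]
    inflow = np.array([5.0, 20.0])                 # inflow in dry/wet state
    P = np.array([[0.7, 0.3], [0.4, 0.6]])         # inflow-state transitions
    demand, curt_cost = 15.0, 2.0                  # demand, curtailment cost
    releases = np.linspace(0.0, 30.0, 31)          # candidate decisions

    J = np.zeros((n_s, n_q))                       # cost-to-go table
    water_value = np.zeros((12, n_s, n_q))         # monthly water value tables
    for stage in range(120):                       # 10 years, backwards
        month = (119 - stage) % 12
        J_new = np.empty_like(J)
        for i, s in enumerate(storage):
            for q in range(n_q):
                best = np.inf
                for r in releases:
                    s2 = np.clip(s + inflow[q] - r, 0.0, storage[-1])
                    immediate = curt_cost * max(demand - r, 0.0)
                    future = sum(P[q, q2] * np.interp(s2, storage, J[:, q2])
                                 for q2 in range(n_q))
                    best = min(best, immediate + future)
                J_new[i, q] = best
        J = J_new
        # marginal value of stored water: negative slope of the cost-to-go
        water_value[month] = -np.gradient(J, storage, axis=0)
    ```

    In simulation mode, the release minimizing immediate cost plus the water-value-weighted change in storage would then be selected at each step.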

  12. A unified monolithic approach for multi-fluid flows and fluid-structure interaction using the Particle Finite Element Method with fixed mesh

    NASA Astrophysics Data System (ADS)

    Becker, P.; Idelsohn, S. R.; Oñate, E.

    2015-06-01

    This paper describes a strategy to solve multi-fluid and fluid-structure interaction (FSI) problems using Lagrangian particles combined with a fixed finite element (FE) mesh. Our approach is an extension of the fluid-only PFEM-2 (Idelsohn et al., Eng Comput 30(2):2-2, 2013; Idelsohn et al., J Numer Methods Fluids, 2014) which uses explicit integration over the streamlines to improve accuracy. As a result, the convective term does not appear in the set of equations solved on the fixed mesh. Enrichments in the pressure field are used to improve the description of the interface between phases.

  13. Adding localization information in a fingerprint binary feature vector representation

    NASA Astrophysics Data System (ADS)

    Bringer, Julien; Despiegel, Vincent; Favre, Mélanie

    2011-06-01

    At BTAS'10, a new framework was described for transforming a fingerprint minutiae template into a binary feature vector of fixed length. A fingerprint is characterized by its similarity to a fixed-size set of representative local minutiae vicinities. This representative-based construction leads to a fixed-length binary representation, and, because the approach is local, it can cope with local distortions that may occur between two acquisitions. We extend this construction to incorporate additional information in the binary vector, in particular on the localization of the vicinities. We explore the use of position and orientation information. The performance improvement is promising for use in fast identification algorithms or in privacy-protection algorithms.
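
    As a rough illustration of the extended representation, the sketch below binarizes a template against a fixed set of representatives and appends coarse localization bits; the similarity measure, descriptor sizes, and thresholds are hypothetical stand-ins, not the BTAS'10 construction itself.

    ```python
    import numpy as np

    def vicinity_similarity(v, rep):
        """Similarity between a local minutiae vicinity and a representative;
        here simply the negative Euclidean distance between descriptors."""
        return -np.linalg.norm(v - rep)

    def to_binary_vector(vicinities, representatives, threshold=-1.0,
                         n_pos_bins=4):
        """One bit per representative, set if some vicinity matches it, plus
        coarse localization bits (quantized x-position of the best match)."""
        bits = []
        for rep in representatives:
            sims = [vicinity_similarity(v["desc"], rep) for v in vicinities]
            best = int(np.argmax(sims))
            matched = sims[best] >= threshold
            bits.append(int(matched))
            pos_bin = int(vicinities[best]["x"] * n_pos_bins) if matched else 0
            bits.extend(int(b) for b in np.binary_repr(pos_bin, width=2))
        return np.array(bits, dtype=np.uint8)

    # usage with random stand-in data: fixed output length of 64 * 3 bits
    rng = np.random.default_rng(0)
    vic = [{"desc": rng.normal(size=8), "x": rng.random()} for _ in range(30)]
    reps = rng.normal(size=(64, 8))
    vec = to_binary_vector(vic, reps)
    ```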

  14. How Big is Too Big for Hubs: Marginal Profitability in Hub-and-Spoke Networks

    NASA Technical Reports Server (NTRS)

    Ross, Leola B.; Schmidt, Stephen J.

    1997-01-01

    Increasing the scale of hub operations at major airports has led to concerns about congestion at excessively large hubs. In this paper, we estimate the marginal cost of adding spokes to an existing hub network. We observe entry/non-entry decisions on potential spokes from existing hubs, and estimate both a variable profit function for providing service in markets using each spoke and the fixed costs of providing service to the spoke. We let the fixed costs depend upon the scale of operations at the hub, and find the hub size at which spoke service costs are minimized.

  15. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs.

    PubMed

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-13

    The establishment of the Aircraft Dynamic Model (ADM) constitutes a prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the soundness of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters.
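
    As a rough illustration of the residual-estimation step, the sketch below runs an EKF on a one-degree-of-freedom pitch-rate model in which a semi-empirically computed damping derivative carries an unknown residual, modelled as a random-walk state; the model, names, and all numbers are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # q_dot = (Mq0 + dMq) * q + Mde * de, with Mq0 and Mde from the
    # semi-empirical calculation and dMq the residual to be estimated.
    dt, Mq0, Mde, dMq_true = 0.02, -2.0, 5.0, -0.6
    f = lambda x, u: np.array([x[0] + dt * ((Mq0 + x[1]) * x[0] + Mde * u),
                               x[1]])              # dMq modelled as random walk
    h = lambda x: x[0]                             # measure pitch rate only

    x = np.array([0.0, 0.0])                       # state estimate [q, dMq]
    Pcov = np.diag([1.0, 1.0])
    Q = np.diag([1e-4, 1e-6])                      # process noise
    R = 1e-3                                       # measurement noise
    rng = np.random.default_rng(1)

    q_true = 0.0
    for k in range(2000):
        u = np.sin(0.5 * dt * k)                   # excitation maneuver
        q_true += dt * ((Mq0 + dMq_true) * q_true + Mde * u)
        z = q_true + rng.normal(scale=np.sqrt(R))
        # predict, with Jacobian of f evaluated at the prior estimate
        F = np.array([[1 + dt * (Mq0 + x[1]), dt * x[0]],
                      [0.0, 1.0]])
        x = f(x, u)
        Pcov = F @ Pcov @ F.T + Q
        # update, with H = dh/dx = [1, 0]
        H = np.array([1.0, 0.0])
        S = H @ Pcov @ H + R
        K = Pcov @ H / S
        x = x + K * (z - h(x))
        Pcov = (np.eye(2) - np.outer(K, H)) @ Pcov

    print("estimated residual dMq:", x[1])         # should approach -0.6
    ```

    The persistent excitation input plays the role of the "multiple maneuvers" that strengthen observability of the residual.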

  16. Calculation and Identification of the Aerodynamic Parameters for Small-Scaled Fixed-Wing UAVs

    PubMed Central

    Shen, Jieliang; Su, Yan; Liang, Qing; Zhu, Xinhua

    2018-01-01

    The establishment of the Aircraft Dynamic Model (ADM) constitutes a prerequisite for the design of the navigation and control system, but the aerodynamic parameters in the model cannot be readily obtained, especially for small-scaled fixed-wing UAVs. In this paper, a procedure for computing the aerodynamic parameters is developed. All the longitudinal and lateral aerodynamic derivatives are first calculated through a semi-empirical method based on aerodynamics, rather than wind tunnel tests or fluid dynamics software analysis. Second, the residuals of each derivative are identified or estimated further via an Extended Kalman Filter (EKF), with observations of the attitude and velocity from the airborne integrated navigation system. Meanwhile, the observability of the targeted parameters is analyzed and strengthened through multiple maneuvers. For a small-scaled fixed-wing aircraft driven by a propeller, the airborne sensors are chosen and models of the actuators are constructed. Then, real flight tests are implemented to verify the calculation and identification process. Test results confirm the soundness of the semi-empirical method and show the improved accuracy of the ADM after compensation of the parameters. PMID:29342856

  17. Ecosystem-level consequences of symbiont partnerships in an N-fixing shrub from interior Alaskan floodplains

    Treesearch

    R.W. Ruess; M.D. Anderson; J.W. McFarland; K. Kielland; K. Olson; D.L. Taylor

    2013-01-01

    In long-lived N-fixing plants, environmental conditions affecting plant growth and N demand vary at multiple temporal and spatial scales, and symbiont assemblages on a given host and patterns of allocation to nodule activities have been shown to vary according to environmental factors, suggesting that hosts may alter partner choice and manipulate symbiont assemblages...

  18. Functional Single-Cell Approach to Probing Nitrogen-Fixing Bacteria in Soil Communities by Resonance Raman Spectroscopy with 15N2 Labeling.

    PubMed

    Cui, Li; Yang, Kai; Li, Hong-Zhe; Zhang, Han; Su, Jian-Qiang; Paraskevaidi, Maria; Martin, Francis L; Ren, Bin; Zhu, Yong-Guan

    2018-04-17

    Nitrogen (N) fixation is the conversion of inert nitrogen gas (N2) to bioavailable N essential for all forms of life. N2-fixing microorganisms (diazotrophs), which play a key role in global N cycling, remain largely obscure because a large majority are uncultured. Direct probing of active diazotrophs in the environment is still a major challenge. Herein, a novel culture-independent single-cell approach combining resonance Raman (RR) spectroscopy with 15N2 stable isotope probing (SIP) was developed to discern N2-fixing bacteria in a complex soil community. Strong RR signals of cytochrome c (Cyt c, frequently present in diverse N2-fixing bacteria), along with a marked 15N2-induced Cyt c band shift, generated a highly distinguishable biomarker for N2 fixation. The 15N2-induced shift was consistent with the 15N abundance in cells determined by isotope ratio mass spectroscopy. By applying this biomarker and Raman imaging, N2-fixing bacteria in both artificial and complex soil communities were discerned and imaged at the single-cell level. The linear band shift of Cyt c versus 15N2 percentage allowed quantification of the extent of N2 fixation in diverse soil bacteria. This single-cell approach will advance the exploration of hitherto uncultured diazotrophs in diverse ecosystems.

  19. Route Optimization for Offloading Congested Meter Fixes

    NASA Technical Reports Server (NTRS)

    Xue, Min; Zelinski, Shannon

    2016-01-01

    The Optimized Route Capability (ORC) concept proposed by the FAA helps traffic managers identify and resolve arrival flight delays caused by bottlenecks formed at arrival meter fixes when there is an imbalance between arrival fixes and runways. ORC makes use of the prediction capability of existing automation tools, monitors traffic delays based on these predictions, and searches for the best reroutes upstream of the meter fixes based on the predictions and estimated arrival schedules when delays exceed a predefined threshold. The initial implementation and evaluation of the ORC concept considered only reroutes available at the time arrival congestion was first predicted. This work extends previous work by introducing an additional dimension in the reroute options, so that ORC can find the best time to reroute and overcome the 'first-come-first-reroute' phenomenon. To deal with the enlarged reroute solution space, a genetic algorithm was developed to solve this problem. Experiments were conducted using the same traffic scenario used in previous work, in which an arrival rush was created for one of the four arrival meter fixes at George Bush Intercontinental Houston Airport. Results showed the new approach further improved delay savings. The suggested route changes from the new approach were on average 30 minutes later than those from other approaches, and fewer reroutes were required. Fewer reroutes reduce operational complexity, and later reroutes help decision makers deal with uncertain situations.
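
    A toy genetic algorithm over this enlarged solution space might look like the sketch below, where each gene pairs a reroute option with a reroute time and the delay model is a hypothetical stand-in for the ORC delay predictions; nothing here reproduces the paper's actual encoding or fitness function.

    ```python
    import random

    OPTIONS, TIMES = range(4), range(0, 60, 5)       # 4 reroutes, times in min

    def predicted_delay(option, t):
        """Stand-in for the predicted meter-fix delay if a flight takes
        reroute `option` at time `t` (minutes); purely illustrative."""
        return abs(option - 2) * 10 + abs(t - 30) * 0.5

    def fitness(ch):
        return -sum(predicted_delay(o, t) for o, t in ch)

    def evolve(pop_size=30, n_flights=5, gens=100):
        pop = [[(random.choice(OPTIONS), random.choice(TIMES))
                for _ in range(n_flights)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_flights)  # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:             # mutation
                    i = random.randrange(n_flights)
                    child[i] = (random.choice(OPTIONS), random.choice(TIMES))
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    random.seed(1)
    best = evolve()   # one (option, time) pair per flight
    ```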

  20. A systematic approach to the control of esthetic form.

    PubMed

    Preston, J D

    1976-04-01

    A systematic, orderly approach to the problem of establishing harmonious phonetics, esthetics, and function in fixed restorations has been described. The system requires an initial investment of time in performing an adequate diagnostic waxing, but recoups that time in many clinical and laboratory procedures. The method has proved a valuable asset in fixed prosthodontic care. The technique can be expanded and combined with other techniques with a little imagination and artistic bent.

  1. Isolating the atmospheric circulation response to Arctic sea-ice loss in the coupled climate system

    NASA Astrophysics Data System (ADS)

    Kushner, P. J.; Blackport, R.

    2016-12-01

    In the coupled climate system, projected global warming drives extensive sea-ice loss, but sea-ice loss drives warming that amplifies and can be confounded with the global warming process. This makes it challenging to cleanly attribute the atmospheric circulation response to sea-ice loss within coupled earth-system model (ESM) simulations of greenhouse warming. In this study, many centuries of output from coupled ocean/atmosphere/land/sea-ice ESM simulations driven separately by sea-ice albedo reduction and by projected greenhouse-dominated radiative forcing are combined to cleanly isolate the hemispheric-scale response of the circulation to sea-ice loss. To isolate the sea-ice loss signal, a pattern scaling approach is proposed in which the local multidecadal-mean atmospheric response is assumed to be separately proportional to the total sea-ice loss and to the total low-latitude ocean surface warming. The proposed approach estimates the response to Arctic sea-ice loss with low-latitude ocean temperatures fixed and vice versa. The sea-ice response includes a high-northern-latitude easterly zonal wind response, an equatorward shift of the eddy-driven jet, a weakening of the stratospheric polar vortex, an anticyclonic sea level pressure anomaly over coastal Eurasia, a cyclonic sea level pressure anomaly over the North Pacific, and increased wintertime precipitation over the west coast of North America. Many of these responses are opposed by the response to low-latitude surface warming with sea ice fixed. However, both sea-ice loss and low-latitude surface warming act in concert to reduce storm track strength throughout the mid and high latitudes. The responses are similar in two related versions of the National Center for Atmospheric Research earth system models, apart from the stratospheric polar vortex response. Evidence is presented that internal variability can easily contaminate the estimates if not enough independent climate states are used to construct them.
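
    A minimal numerical sketch of the two-predictor pattern scaling, with hypothetical arrays in place of model output: each experiment's response field is modelled as a linear combination of its total sea-ice loss and its total low-latitude warming, and with two experiments the per-grid-point 2x2 system is solved for the two sensitivity patterns.

    ```python
    import numpy as np

    # R_exp = A * dSeaIce_exp + B * dLowLatSST_exp at every grid point;
    # A is the response to sea-ice loss with low-latitude SST fixed,
    # B the response to low-latitude warming with sea ice fixed.
    n_lat, n_lon = 36, 72
    rng = np.random.default_rng(0)
    R = rng.normal(size=(2, n_lat, n_lon))     # responses: [albedo, GHG] runs
    dI = np.array([-4.0, -6.0])                # total sea-ice loss per run
    dT = np.array([0.2, 1.5])                  # low-latitude warming per run

    M = np.column_stack([dI, dT])              # same 2x2 system everywhere
    coef = np.linalg.solve(M, R.reshape(2, -1))
    A = coef[0].reshape(n_lat, n_lon)          # pattern per unit sea-ice loss
    B = coef[1].reshape(n_lat, n_lon)          # pattern per unit warming
    ```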

  2. High resolution modeling of reservoir storage and extent dynamics at the continental scale

    NASA Astrophysics Data System (ADS)

    Shin, S.; Pokhrel, Y. N.

    2017-12-01

    Over the past decade, significant progress has been made in developing reservoir schemes in large-scale hydrological models to better simulate hydrological fluxes and storages in highly managed river basins. These schemes have been used successfully to study the impact of reservoir operation on global river basins. However, improvements to the existing schemes are needed for hydrological fluxes and storages, especially at the spatial resolutions to be used in hyper-resolution hydrological modeling. In this study, we developed a reservoir routing scheme with explicit representation of reservoir storage and extent at a grid scale of 5 km or less. Instead of setting the reservoir area to a fixed value or diagnosing it from the area-storage equation, which is the commonly used approach in existing reservoir schemes, we explicitly simulate the inundated storage and area for all grid cells that are within the reservoir extent. This approach enables a better simulation of river-floodplain-reservoir storage by considering both natural flood storage and man-made reservoir storage. Results for the seasonal dynamics of reservoir storage, river discharge downstream of dams, and reservoir inundation extent are evaluated against various datasets from ground observations and satellite measurements. The new model captures the dynamics of these variables with good accuracy for most of the large reservoirs in the western United States. It is expected that incorporation of the newly developed reservoir scheme in large-scale land surface models (LSMs) will lead to improved simulation of river flow and terrestrial water storage in highly managed river basins.

  3. Effect of Impurities on the Freezing Point of Zinc

    NASA Astrophysics Data System (ADS)

    Sun, Jianping; Rudtsch, Steffen; Niu, Yalu; Zhang, Lin; Wang, Wei; Den, Xiaolong

    2017-03-01

    Knowledge of the liquidus slopes of impurities in the fixed-point metals defined by the International Temperature Scale of 1990 is important for estimating uncertainties and correcting fixed points with the sum-of-individual-estimates method. Great attention is paid at the National Institute of Metrology to the effect of ultra-trace impurities on the freezing point of zinc. In the present work, the liquidus slopes of Ga-Zn and Ge-Zn were measured through doping experiments with the slim fixed-point cell developed for this purpose, and the temperature characteristics of the Fe-Zn phase diagram were further investigated. A quasi-adiabatic Zn fixed-point cell was developed, with the thermometer well surrounded by a crucible containing the pure metal; temperature uniformity of better than 20 mK was obtained in the region where the metal is located. The previous Pb-Zn doping experiment with the slim fixed-point cell was checked with the quasi-adiabatic Zn fixed-point cell, and the result supports the liquidus slope previously measured with the traditional fixed-point realization.

  4. Power Laws, Scale Invariance and the Generalized Frobenius Series:

    NASA Astrophysics Data System (ADS)

    Visser, Matt; Yunes, Nicolas

    We present a self-contained formalism for calculating the background solution, the linearized solutions and a class of generalized Frobenius-like solutions to a system of scale-invariant differential equations. We first cast the scale-invariant model into its equidimensional and autonomous forms, find its fixed points, and then obtain power-law background solutions. After linearizing about these fixed points, we find a second linearized solution, which provides a distinct collection of power laws characterizing the deviations from the fixed point. We prove that generically there will be a region surrounding the fixed point in which the complete general solution can be represented as a generalized Frobenius-like power series with exponents that are integer multiples of the exponents arising in the linearized problem. While discussions of the linearized system are common, and one can often find a discussion of power series with integer exponents, power series with irrational (indeed complex) exponents are much rarer in the extant literature. The Frobenius-like series we encounter can be viewed as a variant of the rarely-discussed Liapunov expansion theorem (not to be confused with the more commonly encountered Liapunov functions and Liapunov exponents). As specific examples we apply these ideas to Newtonian and relativistic isothermal stars and construct two separate power series with overlapping radii of convergence. The second of these power series solutions represents an expansion around "spatial infinity," and in realistic models it is this second power series that gives information about the stellar core, and the damped oscillations in core mass and core radius as the central pressure goes to infinity. The power-series solutions we obtain extend classical results, as exemplified for instance by the work of Lane, Emden, and Chandrasekhar in the Newtonian case, and that of Harrison, Thorne, Wakano, and Wheeler in the relativistic case. We also indicate how to extend these ideas to situations where fixed points may not exist, either due to "monotone" flow or due to the presence of limit cycles. Monotone flow generically leads to logarithmic deviations from scaling, while limit cycles generally lead to discrete self-similar solutions.
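
    Schematically, the pipeline the abstract describes can be summarized as follows; this is a hedged sketch for a generic scale-invariant equation, with alpha and the lambda_i as illustrative symbols rather than values from any particular stellar model.

    ```latex
    % Hedged schematic of the formalism above for a generic
    % scale-invariant equation with two linearized exponents.
    \begin{align*}
      & r \to \lambda r, \qquad y \to \lambda^{\alpha} y
        && \text{(scale invariance)} \\
      & r = e^{x}, \qquad y(r) = r^{\alpha} u(x)
        && \text{(equidimensional / autonomous form)} \\
      & u(x) \equiv u_{*} \;\Longrightarrow\; y = u_{*}\, r^{\alpha}
        && \text{(fixed point = power-law background)} \\
      & u(x) \approx u_{*} + \textstyle\sum_{i} c_{i} e^{\lambda_{i} x}
             = u_{*} + \textstyle\sum_{i} c_{i}\, r^{\lambda_{i}}
        && \text{(linearized deviations)} \\
      & u(x) = u_{*} + \textstyle\sum_{n_{1}, n_{2} \ge 0}
               a_{n_{1} n_{2}}\, r^{\, n_{1} \lambda_{1} + n_{2} \lambda_{2}}
        && \text{(generalized Frobenius-like series)}
    \end{align*}
    ```

    Complex lambda_i then yield the damped oscillations in core mass and radius that the abstract mentions, since the real part damps and the imaginary part oscillates in ln r.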

  5. Fix and forget or fix and report: a qualitative study of tensions at the front line of incident reporting.

    PubMed

    Hewitt, Tanya Anne; Chreim, Samia

    2015-05-01

    Practitioners frequently encounter safety problems that they themselves can resolve on the spot. We ask: when faced with such a problem, do practitioners fix it in the moment and forget about it, or do they fix it in the moment and report it? We consider factors underlying these two approaches. We used a qualitative case study design employing in-depth interviews with 40 healthcare practitioners in a tertiary care hospital in Ontario, Canada. We conducted a thematic analysis, and compared the findings with the literature. 'Fixing and forgetting' was the main choice that most practitioners made in situations where they faced problems that they themselves could resolve. These situations included (A) handling near misses, which were seen as unworthy of reporting since they did not result in actual harm to the patient, (B) prioritising solving individual patients' safety problems, which were viewed as unique or one-time events and (C) encountering re-occurring safety problems, which were framed as inevitable, routine events. In only a few instances was 'fixing and reporting' mentioned as a way that the providers dealt with problems that they could resolve. We found that generally healthcare providers do not prioritise reporting if a safety problem is fixed. We argue that fixing and forgetting patient safety problems encountered may not serve patient safety as well as fixing and reporting. The latter approach aligns with recent calls for patient safety to be more preventive. We consider implications for practice. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  6. MERCURIC CHLORIDE CAPTURE BY ALKALINE SORBENTS

    EPA Science Inventory

    The paper gives results of bench-scale mechanistic studies of mercury/sorbent reactions showing that mercuric chloride (HgCl2) is readily adsorbed by alkaline sorbents, which may offer a less expensive alternative to the use of activated carbons. A laboratory-scale, fixed-b...

  7. Aragonite→calcite transformation studied by EPR of Mn 2+ ions

    NASA Astrophysics Data System (ADS)

    Lech, J.; Śl|zak, A.

    1989-05-01

    The irreversible aragonite→calcite transformation has been studied both at different fixed heating rates (5, 10, 15 and 20 K/min) and at different fixed temperatures. Apparent progression of the transformation was observed above 685 K. At 730 K the transformation became sudden and violent. The time development of the transformation at fixed temperatures is discussed in terms of the Avrami-Lichti approach to transitions involving nucleation processes.

  8. Efficacy of Dentaq® Oral and ENT Health Probiotic Complex on Clinical Parameters of Gingivitis in Patients Undergoing Fixed Orthodontic Treatment: A Pilot Study.

    PubMed

    Kolip, Duygu; Yılmaz, Nuray; Gökkaya, Berna; Kulan, Pinar; Kargul, Betul; MacDonald, Kyle W; Cadieux, Peter A; Burton, Jeremy P; James, Kris M

    2016-09-01

    Probiotics act as a unique approach to maintaining oral health by supplementing the endogenous oral bacteria with additional naturally occurring beneficial microbes to provide defense against pathogens harmful to teeth and gingiva. The aim of this pilot study was to clinically evaluate the effects of probiotics on plaque accumulation and gingival inflammation in subjects with fixed orthodontics. The pilot study comprised 15 healthy patients, aged 11 to 18 years, undergoing fixed orthodontic treatment. Patients used an all-natural, dissolving lozenge containing six proprietary probiotic strains (Dentaq® Oral and ENT Health Probiotic Complex) for 28 days. The Gingival Index (GI) according to Löe-Silness and the Plaque Index (PI) according to Quigley-Hein were measured for all teeth at baseline (Day Zero) and at the end of the probiotic regimen (Day 28). The mean baseline GI and PI scores within each patient decreased by 28.4% and 35.8%, respectively, by Day 28. Patients reported decreased tooth and gingival pain, decreased oral bleeding, and increased motivation to maintain proper oral hygiene over the course of the study. This pilot study provided preliminary support for the use of Dentaq Oral and ENT Health Probiotic Complex as a safe and effective natural health product for the reduction of plaque accumulation and gingival inflammation. The results demonstrate its potential therapeutic value and open the door for larger-scale placebo-controlled clinical studies to verify these findings.

  9. Gasification Product Improvement Facility (GPIF). Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-09-01

    The gasifier selected for development under this contract is an innovative and patented hybrid technology which combines the best features of both fixed-bed and fluidized-bed types. PyGas™, meaning Pyrolysis Gasification, is well suited for integration into advanced power cycles such as IGCC. It is also well matched to hot gas clean-up technologies currently in development. Unlike other gasification technologies, PyGas can be designed into both large and small scale systems. It is expected that partial repowering with PyGas could be done at a cost of electricity of only 2.78 cents/kWh, more economical than natural gas repowering. It is extremely unfortunate that Government funding for such a noble cause is becoming reduced to the point where current contracts must be canceled. The Gasification Product Improvement Facility (GPIF) project was initiated to provide a test facility to support early commercialization of advanced fixed-bed coal gasification technology at a cost approaching $1,000 per kilowatt for electric power generation applications. The project was to include an innovative, advanced, air-blown, pressurized, fixed-bed, dry-bottom gasifier and a follow-on hot metal oxide gas desulfurization sub-system. To help defray the cost of testing materials, the facility was to be located at a nearby utility coal fired generating site. The patented PyGas™ technology was selected via a competitive bidding process as the candidate which best fit overall DOE objectives. The paper describes the accomplishments to date.

  10. Outcomes Assessment of Treating Completely Edentulous Patients with a Fixed Implant-Supported Profile Prosthesis Utilizing a Graftless Approach. Part 1: Clinically Related Outcomes.

    PubMed

    Alzoubi, Fawaz; Bedrossian, Edmond; Wong, Allen; Farrell, Douglas; Park, Chan; Indresano, Thomas

    To assess outcomes of treating completely edentulous patients with a fixed implant-supported profile prosthesis utilizing a graftless approach for the maxilla and for the mandible, with emphasis on clinically related outcomes, specifically implant and prosthesis survival. This was a retrospective study with the following inclusion criteria: completely edentulous patients rehabilitated with a fixed implant-supported profile denture utilizing a graftless approach. Patients fulfilling the inclusion criteria were asked to participate in the study during their follow-up visits, and hence a consecutive sampling strategy was used. Data regarding implant and prosthesis cumulative survival rates (CSRs) were gathered and calculated. Thirty-four patients were identified with a total of 220 implants placed. An overall CSR of 98.2% was recorded with an observation of up to 10 years. For tilted, axial, and zygomatic implants, CSRs of 96.9%, 98.0%, and 100%, respectively, were observed for up to 10 years. For provisional prostheses, CSRs of 92.3% at 1 year, and 84.6% at 2 years were observed. For final prostheses, a CSR of 93.8% was observed at 10 years. The results suggest that treating completely edentulous patients with a fixed profile prosthesis utilizing a graftless approach in the maxilla and the mandible can be a reliable treatment option.

  11. Evaluating the safety risk of roadside features for rural two-lane roads using reliability analysis.

    PubMed

    Jalayer, Mohammad; Zhou, Huaguo

    2016-08-01

    The severity of roadway departure crashes mainly depends on roadside features, including the sideslope, fixed-object density, offset from fixed objects, and shoulder width. Common engineering countermeasures to improve roadside safety include cross-section improvements, hazard removal or modification, and delineation. It is not always feasible to maintain an object-free and smooth roadside clear zone as recommended in design guidelines. Currently, clear zone width and sideslope are used to determine roadside hazard ratings (RHRs) that quantify the roadside safety of rural two-lane roadways on a seven-point pictorial scale. Since these two variables are continuous and can be treated as random, probabilistic analysis can be applied as an alternative method to address the existing uncertainties. Specifically, using reliability analysis, it is possible to quantify roadside safety levels by treating the clear zone width and sideslope as two continuous, rather than discrete, variables. The objective of this manuscript is to present a new approach for defining a reliability index for measuring roadside safety on rural two-lane roads. To evaluate the proposed approach, we gathered five years (2009-2013) of Illinois run-off-road (ROR) crash data and identified the roadside features (i.e., clear zone widths and sideslopes) of 4,500 300-ft roadway segments. Based on the results, we confirm that reliability indices can serve as indicators of safety levels, such that the greater the reliability index value, the lower the ROR crash rate. Copyright © 2016 Elsevier Ltd. All rights reserved.
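
    To illustrate the idea, here is a hedged Monte Carlo sketch of such a reliability index; the limit-state function, distributions, and design targets are illustrative assumptions, not the paper's calibrated values.

    ```python
    import numpy as np
    from scipy.stats import norm

    # Treat clear zone width and sideslope as continuous random variables;
    # a segment "fails" when either misses its (assumed) design target.
    rng = np.random.default_rng(0)
    n = 100_000
    clear_zone = np.clip(rng.normal(20.0, 6.0, n), 0.0, None)    # ft
    sideslope = np.clip(rng.normal(0.25, 1 / 12, n), 0.0, None)  # rise/run

    def limit_state(cz, ss, cz_target=15.0, ss_max=1 / 3):
        """Positive when both the clear zone and the sideslope meet the
        assumed targets; negative otherwise."""
        return np.minimum(cz / cz_target - 1.0, 1.0 - ss / ss_max)

    pf = np.mean(limit_state(clear_zone, sideslope) < 0.0)   # failure prob.
    beta = -norm.ppf(pf)                                     # reliability index
    print(f"P_f = {pf:.3f}, beta = {beta:.2f}")
    ```

    A first-order reliability (FORM) approximation could replace the Monte Carlo estimate when the limit state is smooth.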

  12. China Report, Science and Technology, White Paper, No. 1

    DTIC Science & Technology

    1987-04-02

    traditional biotechnology to produce liquor, soy sauce, vinegar and other fermented food products. In the late fifties, China established an antibiotic... to transform the traditional fermentation industry, including the use of fixed fungi or fixed cells to make alcohol, beer, soy sauce, vinegar, and... use. We should also improve the techniques and equipment of fermentation, develop the technologies of central heating and small-scale methane

  13. Efficient collective influence maximization in cascading processes with first-order transitions

    PubMed Central

    Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.

    2017-01-01

    In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading processes. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches. PMID:28349988
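
    The path-counting idea can be sketched in a few lines; the version below assumes a linear threshold model and counts short simple paths through threshold-1 ("subcritical") nodes, which is only a toy stand-in for the paper's collective-influence algorithm.

    ```python
    import random
    import networkx as nx

    def subcritical_path_count(G, theta, source, L=3):
        """Number of simple paths of length <= L leaving `source` that
        travel only through nodes a single active neighbor can activate."""
        sub = {v for v in G if theta[v] == 1 and v != source}
        stack = [(source, 0, {source})]
        count = 0
        while stack:
            node, depth, seen = stack.pop()
            if depth == L:
                continue
            for nb in G.neighbors(node):
                if nb in sub and nb not in seen:
                    count += 1                    # each extension is one path
                    stack.append((nb, depth + 1, seen | {nb}))
        return count

    # rank nodes on a random graph with random thresholds and pick seeds
    random.seed(1)
    G = nx.erdos_renyi_graph(500, 0.01, seed=1)
    theta = {v: random.choice([1, 1, 2]) for v in G}
    scores = {v: subcritical_path_count(G, theta, v) for v in G}
    seeds = sorted(scores, key=scores.get, reverse=True)[:10]
    ```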

  14. Reconcile Planck-scale discreteness and the Lorentz-Fitzgerald contraction

    NASA Astrophysics Data System (ADS)

    Rovelli, Carlo; Speziale, Simone

    2003-03-01

    A Planck-scale minimal observable length appears in many approaches to quantum gravity. It is sometimes argued that this minimal length might conflict with Lorentz invariance, because a boosted observer can see the minimal length further Lorentz contracted. We show that this is not the case within loop quantum gravity. In loop quantum gravity the minimal length (more precisely, minimal area) does not appear as a fixed property of geometry, but rather as the minimal (nonzero) eigenvalue of a quantum observable. The boosted observer can see the same observable spectrum, with the same minimal area. What changes continuously in the boost transformation is not the value of the minimal length: it is the probability distribution of seeing one or the other of the discrete eigenvalues of the area. We discuss several difficulties associated with boosts and area measurement in quantum gravity. We compute the transformation of the area operator under a local boost, propose an explicit expression for the generator of local boosts, and give the conditions under which its action is unitary.

  15. Automatic script identification from images using cluster-based templates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hochberg, J.; Kerns, L.; Kelly, P.

    We have developed a technique for automatically identifying the script used to generate a document that is stored electronically in bit image form. Our approach differs from previous work in that the distinctions among scripts are discovered by an automatic learning procedure, without any hands-on analysis. We first develop a set of representative symbols (templates) for each script in our database (Cyrillic, Roman, etc.). We do this by identifying all textual symbols in a set of training documents, scaling each symbol to a fixed size, clustering similar symbols, pruning minor clusters, and finding each cluster's centroid. To identify a new document's script, we identify and scale a subset of symbols from the document and compare them to the templates for each script. We choose the script whose templates provide the best match. Our current system distinguishes among the Armenian, Burmese, Chinese, Cyrillic, Ethiopic, Greek, Hebrew, Japanese, Korean, Roman, and Thai scripts with over 90% accuracy.
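
    A compact sketch of that pipeline, with random arrays standing in for scaled symbol images; kmeans2 and the distance-based matching are illustrative choices, not necessarily those of the original system.

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2

    SIZE = 16    # symbols assumed already extracted and scaled to SIZE x SIZE

    def make_templates(symbols, n_clusters=40, min_cluster=5):
        """Cluster one script's scaled symbols; keep the centroids of the
        non-minor clusters as that script's templates."""
        X = symbols.reshape(len(symbols), -1).astype(float)
        centroids, labels = kmeans2(X, n_clusters, minit="++", seed=0)
        keep = [c for c in range(n_clusters)
                if np.sum(labels == c) >= min_cluster]
        return centroids[keep]

    def best_script(doc_symbols, templates_by_script):
        """Match each document symbol to its nearest template per script;
        choose the script with the smallest mean match distance."""
        X = doc_symbols.reshape(len(doc_symbols), -1).astype(float)
        scores = {}
        for script, T in templates_by_script.items():
            d = np.linalg.norm(X[:, None, :] - T[None, :, :], axis=2)
            scores[script] = d.min(axis=1).mean()
        return min(scores, key=scores.get)

    # usage with stand-in data for two "scripts"
    rng = np.random.default_rng(0)
    train = {s: rng.normal(loc=i, size=(300, SIZE, SIZE))
             for i, s in enumerate(["Cyrillic", "Roman"])}
    templates = {s: make_templates(imgs) for s, imgs in train.items()}
    doc = rng.normal(loc=1, size=(50, SIZE, SIZE))
    print(best_script(doc, templates))        # expected: "Roman"
    ```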

  16. Efficient collective influence maximization in cascading processes with first-order transitions

    NASA Astrophysics Data System (ADS)

    Pei, Sen; Teng, Xian; Shaman, Jeffrey; Morone, Flaviano; Makse, Hernán A.

    2017-03-01

    In many social and biological networks, the collective dynamics of the entire system can be shaped by a small set of influential units through a global cascading process, manifested by an abrupt first-order transition in dynamical behaviors. Despite its importance in applications, efficient identification of multiple influential spreaders in cascading processes still remains a challenging task for large-scale networks. Here we address this issue by exploring the collective influence in general threshold models of cascading processes. Our analysis reveals that the importance of spreaders is fixed by the subcritical paths along which cascades propagate: the number of subcritical paths attached to each spreader determines its contribution to global cascades. The concept of subcritical path allows us to introduce a scalable algorithm for massively large-scale networks. Results in both synthetic random graphs and real networks show that the proposed method can achieve larger collective influence given the same number of seeds compared with other scalable heuristic approaches.

  17. Parallelism and Epistasis in Skeletal Evolution Identified through Use of Phylogenomic Mapping Strategies

    PubMed Central

    Daane, Jacob M.; Rohner, Nicolas; Konstantinidis, Peter; Djuranovic, Sergej; Harris, Matthew P.

    2016-01-01

    The identification of genetic mechanisms underlying evolutionary change is critical to our understanding of natural diversity, but is presently limited by the lack of genetic and genomic resources for most species. Here, we present a new comparative genomic approach that can be applied to a broad taxonomic sampling of nonmodel species to investigate the genetic basis of evolutionary change. Using our analysis pipeline, we show that duplication and divergence of fgfr1a is correlated with the reduction of scales within fishes of the genus Phoxinellus. As a parallel genetic mechanism is observed in scale-reduction within independent lineages of cypriniforms, our finding exposes significant developmental constraint guiding morphological evolution. In addition, we identified fixed variation in fgf20a within Phoxinellus and demonstrated that combinatorial loss-of-function of fgfr1a and fgf20a within zebrafish phenocopies the evolved scalation pattern. Together, these findings reveal epistatic interactions between fgfr1a and fgf20a as a developmental mechanism regulating skeletal variation among fishes. PMID:26452532

  18. Exploration of a Preflight Acuity Scale for Fixed Wing Air Ambulance Transport.

    PubMed

    Phipps, Marcy; Conley, Virginia; Constantine, William H

    Despite the prevalence of fixed wing medical flights for specialized care and repatriation, few acuity rating scales exist that aim to predict adverse in-flight medical events. An acuity scoring system can provide information to flight crews, allowing for staffing enhancements, protocol modifications, and flight planning, with the aim of improving patient care and outcomes and preventing losses to providers from costly diversions. Our medical crew developed an acuity scale, which was applied retrospectively to 296 patients transported between January 2016 and March 2017. Patients received scores based on conditions identified in the preflight medical report, the initial patient assessment, demographics, and flight factors. Five patients were identified as high-risk transports based on our scale. Three patients suffered adverse events according to our defined criteria, 2 of which occurred before transport and 1 during transport. The 3 patients suffering adverse events did not receive scores that indicated adverse events in flight. Our scale was not predictive of adverse events in flight; however, it did illuminate factors whose consideration might have prevented the adverse events. Published by Elsevier Inc.

  19. Is there scale-dependent bias in single-field inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Putter, Roland; Doré, Olivier; Green, Daniel, E-mail: rdputter@caltech.edu, E-mail: Olivier.P.Dore@jpl.nasa.gov, E-mail: drgreen@cita.utoronto.ca

    2015-10-01

    Scale-dependent halo bias due to local primordial non-Gaussianity provides a strong test of single-field inflation. While it is universally understood that single-field inflation predicts negligible scale-dependent bias compared to current observational uncertainties, there is still disagreement on the exact level of scale-dependent bias at a level that could strongly impact inferences made from future surveys. In this paper, we clarify this confusion and derive in various ways that there is exactly zero scale-dependent bias in single-field inflation. Much of the current confusion follows from the fact that single-field inflation does predict a mode coupling of matter perturbations at the level of f_NL^local ≈ −5/3, which naively would lead to scale-dependent bias. However, we show explicitly that this mode coupling cancels out when perturbations are evaluated at a fixed physical scale rather than a fixed coordinate scale. Furthermore, we show how the absence of scale-dependent bias can be derived easily in any gauge. This result can then be incorporated into a complete description of the observed galaxy clustering, including the previously studied general relativistic terms, which are important at the same level as scale-dependent bias of order f_NL^local ∼ 1. This description will allow us to draw unbiased conclusions about inflation from future galaxy clustering data.

  20. Model Comparison of Nonlinear Structural Equation Models with Fixed Covariates.

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Song, Xin-Yuan

    2003-01-01

    Proposed a new nonlinear structural equation model with fixed covariates to deal with complicated substantive theory and developed a Bayesian path sampling procedure for model comparison. Illustrated the approach with an example using data from an international study. (SLD)

  1. Tests of a 1/7-Scale Semispan Model of the XB-35 Airplane in the Langley 19-Foot Pressure Tunnel

    NASA Technical Reports Server (NTRS)

    Teplitz, Jerome; Kayten, Gerald G.; Cancro, Patrick A.

    1946-01-01

    A 1/7-scale semispan model of the XB-35 airplane was tested in the Langley 19-foot pressure tunnel, primarily to investigate the effectiveness of a leading-edge slot for alleviating stick-fixed longitudinal instability at high angles of attack caused by early tip stalling, and of a device for relieving stick-free instability caused by elevon up-floating tendencies at high angles of attack. Results indicated that the slot was not adequate to provide the desired improvement in stick-fixed stability. The tab-flipper device provided improvement in stick-free stability, and two of the linkage combinations tested gave satisfactory variations of control force with airspeed for all conditions except that in which the wing-tip "pitch-control" flap was fully deflected. However, the improvement in control force characteristics was accompanied by a detrimental effect on stick-fixed stability because of the pitching moments produced by the elevon tab deflection.

  2. ATLAS: An advanced PCR-method for routine visualization of telomere length in Saccharomyces cerevisiae.

    PubMed

    Zubko, Elena I; Shackleton, Jennifer L; Zubko, Mikhajlo K

    2016-12-01

    Measuring telomere length is essential in telomere biology. Southern blot hybridization is the predominant method for measuring telomere length in the genetic model Saccharomyces cerevisiae. We have further developed and refined a telomere PCR approach, previously used only rarely (mainly in specific telomeric projects), into a robust method allowing direct visualisation of telomere length differences in routine experiments with S. cerevisiae, and showing a strong correlation of results with data obtained by Southern blot hybridization. In this expanded method, denoted ATLAS (A-dvanced T-elomere L-ength A-nalysis in S. cerevisiae), we have introduced: 1) a set of new primers annealing with high specificity to telomeric regions on five different chromosomes; 2) a new approach for designing reverse telomere primers based on the ligation of an adaptor of a fixed size to telomeric ends. ATLAS can be used at the scale of individual assays and in high-throughput approaches. This simple, time/cost-effective and reproducible methodology will complement Southern blot hybridization and facilitate further progress in telomere research. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. A System Approach to Navy Medical Education and Training. Appendix 45. Competency Curricula for Dental Prosthetic Assistant and Dental Prosthetic Technician.

    DTIC Science & Technology

    1974-08-31

    Removable Partial Dentures ... 34. XI. Fixed Partial Denture Construction ... 35. 1. Construct Master Cast with Removable Dies ... 36. 2. Construct Patterns for Fixed Partial Dentures ... 37. 3. Spruing and Investing ... 38. 4. Wax Elimination and Casting ... 42. 8. Resin Jacket Crowns ... 43. 9. Temporary Crowns and Fixed Partial Dentures ... 44. 10. Post and Core Techniques ...

  4. Differences in night-time and daytime ambulatory blood pressure when diurnal periods are defined by self-report, fixed-times, and actigraphy: Improving the Detection of Hypertension study.

    PubMed

    Booth, John N; Muntner, Paul; Abdalla, Marwah; Diaz, Keith M; Viera, Anthony J; Reynolds, Kristi; Schwartz, Joseph E; Shimbo, Daichi

    2016-02-01

    To determine whether defining diurnal periods by self-report, fixed-time, or actigraphy produces different estimates of night-time and daytime ambulatory blood pressure (ABP). Over a median of 28 days, 330 participants completed two 24-h ABP and actigraphy monitoring periods with sleep diaries. Fixed night-time and daytime periods were defined as 0000-0600 h and 1000-2000 h, respectively. Using the first ABP period, within-individual differences for mean night-time and daytime ABP and kappa statistics for night-time and daytime hypertension (systolic/diastolic ABP ≥ 120/70 mmHg and ≥ 135/85 mmHg, respectively) were estimated when comparing self-report, fixed-time, and actigraphy definitions of the diurnal periods. Reproducibility of ABP was also estimated. Within-individual mean differences in night-time systolic ABP were small, suggesting little bias, when comparing the three approaches used to define diurnal periods. The distribution of differences, represented by 95% confidence intervals (CI), in night-time systolic and diastolic ABP and daytime systolic and diastolic ABP was narrowest for self-report versus actigraphy. For example, the mean difference (95% CI) in night-time systolic ABP for self-report versus fixed-time was -0.53 (-6.61, +5.56) mmHg, for self-report versus actigraphy 0.91 (-3.61, +5.43) mmHg, and for fixed-time versus actigraphy 1.43 (-5.59, +8.46) mmHg. Agreement for night-time and daytime hypertension was highest for self-report versus actigraphy: kappa statistic (95% CI) = 0.91 (0.86, 0.96) and 1.00 (0.98, 1.00), respectively. The reproducibility of mean ABP and hypertension categories was similar using each approach. Given the high agreement with actigraphy, these data support using self-report to define diurnal periods in ABP monitoring. Further, the use of fixed-time periods may be a reasonable alternative approach.

  5. Evaluation of a pilot workload metric for simulated VTOL landing tasks

    NASA Technical Reports Server (NTRS)

    North, R. A.; Graffunder, K.

    1979-01-01

    A methodological approach to measuring workload was investigated for the evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight-performance discriminant showed maximum differentiation between crosswind conditions. The visual-response discriminant maximized differences between fixed- versus motion-base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represented higher workload levels.

  6. HEATHER - HElium Ion Accelerator for RadioTHERapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Jordan; Edgecock, Thomas; Green, Stuart

    2017-05-01

    A non-scaling fixed field alternating gradient (nsFFAG) accelerator is being designed for helium ion therapy. This facility will consist of 2 superconducting rings, treating with helium ions (He²⁺) and imaging with hydrogen ions (H₂⁺). Currently only carbon ions are used to treat cancer, yet there is increasing interest in the use of lighter ions for therapy. Lighter ions have a reduced dose tail beyond the tumour compared to carbon, caused by low-Z secondary particles produced via inelastic nuclear reactions. An FFAG approach to helium therapy has never previously been considered. Having demonstrated isochronous acceleration from 0.5 MeV to 900 MeV, we now demonstrate the survival of a realistic beam across both stages.

  7. Accounting for nitrogen fixation in simple models of lake nitrogen loading/export.

    PubMed

    Ruan, Xiaodan; Schellenger, Frank; Hellweger, Ferdi L

    2014-05-20

    Coastal eutrophication, an important global environmental problem, is primarily caused by excess nitrogen and management efforts consequently focus on lowering watershed N export (e.g., by reducing fertilizer use). Simple quantitative models are needed to evaluate alternative scenarios at the watershed scale. Existing models generally assume that, for a specific lake/reservoir, a constant fraction of N loading is exported downstream. However, N fixation by cyanobacteria may increase when the N loading is reduced, which may change the (effective) fraction of N exported. Here we present a model that incorporates this process. The model (Fixation and Export of Nitrogen from Lakes, FENL) is based on a steady-state mass balance with loading, output, loss/retention, and N fixation, where the amount fixed is a function of the N/P ratio of the loading (i.e., when N/P is less than a threshold value, N is fixed). Three approaches are used to parametrize and evaluate the model, including microcosm lab experiments, lake field observations/budgets and lake ecosystem model applications. Our results suggest that N export will not be reduced proportionally with N loading, which needs to be considered when evaluating management scenarios.
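
    The threshold behaviour is easy to capture in a small function; below is a minimal FENL-style sketch under stated assumptions (fixation makes up a fraction of the N shortfall relative to a threshold N/P ratio, and a constant fraction of in-lake N is retained), with all parameter values purely illustrative.

    ```python
    def n_export(n_load, p_load, retention=0.4, r_star=16.0, efficiency=0.5):
        """Steady-state N export: loading + fixation - in-lake retention.
        Fixation activates when the loading N/P ratio is below r_star."""
        deficit = max(r_star * p_load - n_load, 0.0)   # N short of threshold
        n_fixed = efficiency * deficit
        return (1.0 - retention) * (n_load + n_fixed)

    # halving the N loading does not halve the export once fixation kicks in
    base = n_export(1000.0, 50.0)   # N/P = 20 > r_star: no fixation, 600 out
    cut = n_export(500.0, 50.0)     # N/P = 10 < r_star: fixation offsets, 390 out
    print(base, cut, cut / base)    # export falls to 65%, not 50%
    ```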

  8. Effects of training strategies implemented in a complex videogame on functional connectivity of attentional networks.

    PubMed

    Voss, Michelle W; Prakash, Ruchika Shaurya; Erickson, Kirk I; Boot, Walter R; Basak, Chandramallika; Neider, Mark B; Simons, Daniel J; Fabiani, Monica; Gratton, Gabriele; Kramer, Arthur F

    2012-01-02

    We used the Space Fortress videogame, originally developed by cognitive psychologists to study skill acquisition, as a platform to examine learning-induced plasticity of interacting brain networks. Novice videogame players learned Space Fortress using one of two training strategies: (a) focus on all aspects of the game during learning (fixed priority), or (b) focus on improving separate game components in the context of the whole game (variable priority). Participants were scanned during game play using functional magnetic resonance imaging (fMRI), both before and after 20 h of training. As expected, variable priority training enhanced learning, particularly for individuals who initially performed poorly. Functional connectivity analysis revealed changes in brain network interaction reflective of more flexible skill learning and retrieval with variable priority training, compared to procedural learning and skill implementation with fixed priority training. These results provide the first evidence for differences in the interaction of large-scale brain networks when learning with different training strategies. Our approach and findings also provide a foundation for exploring the brain plasticity involved in transfer of trained abilities to novel real-world tasks such as driving, sport, or neurorehabilitation. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Methods to Prescribe Particle Motion to Minimize Quadrature Error in Meshfree Methods

    NASA Astrophysics Data System (ADS)

    Templeton, Jeremy; Erickson, Lindsay; Morris, Karla; Poliakoff, David

    2015-11-01

    Meshfree methods are an attractive approach for simulating material systems undergoing large-scale deformation, such as spray break-up, free-surface flows, and droplets. Particles, which can be easily moved, are used as nodes and/or quadrature points rather than relying on a fixed mesh. Most methods move particles according to the local fluid velocity, which allows the convection terms in the Navier-Stokes equations to be accounted for easily. However, this is a trade-off against numerical accuracy, as the flow can often move particles into configurations with high quadrature error, and artificial compressibility is often required to prevent particles from forming undesirable regions of high and low concentration. In this work, we consider the other side of the trade-off: moving particles based on reducing numerical error. Methods derived from molecular dynamics show that particles can be moved to minimize a surrogate for the solution error, resulting in substantially more accurate simulations at a fixed cost. Sandia National Laboratories is a multiprogram laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
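
    A toy one-dimensional rendering of the idea: descend a molecular-dynamics-style repulsive potential, used here as the error surrogate, so that clumped quadrature points spread out and a crude quadrature rule improves. The potential, step sizes, and counts are all illustrative, not taken from the cited work.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.random(50))                  # clumped quadrature points
    f = lambda t: np.sin(2 * np.pi * t)          # exact integral on [0,1] is 0

    def quad_error(pts):
        w = np.gradient(pts)                     # crude per-particle weights
        return abs(np.sum(w * f(pts)))

    def surrogate_grad(pts, eps=1e-2):
        """Gradient of sum_ij 1/(|xi - xj| + eps); pushes neighbors apart."""
        d = pts[:, None] - pts[None, :]
        g = -np.sign(d) / (np.abs(d) + eps) ** 2
        np.fill_diagonal(g, 0.0)
        return g.sum(axis=1)

    err_before = quad_error(x)
    for _ in range(500):                         # descent on the surrogate
        x = np.clip(x - 1e-7 * surrogate_grad(x), 0.0, 1.0)
    print(err_before, "->", quad_error(x))
    ```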

  10. Implant/tooth-connected restorations utilizing screw-fixed attachments: a survey of 3,096 sites in function for 3 to 14 years.

    PubMed

    Fugazzotto, P A; Kirsch, A; Ackermann, K L; Neuendorff, G

    1999-01-01

    Numerous problems have been reported following various therapies used to attach natural teeth to implants beneath a fixed prosthesis. This study documents the results of 843 consecutive patients treated with 1,206 natural tooth/implant-supported prostheses utilizing 3,096 screw-fixed attachments. After 3 to 14 years in function, only 9 intrusion problems were noted. All problems were associated with fractured or lost screws. This report demonstrates the efficacy of such a treatment approach when a natural tooth/implant-supported fixed prosthesis is contemplated.

  11. On Schrödinger's bridge problem

    NASA Astrophysics Data System (ADS)

    Friedland, S.

    2017-11-01

    In the first part of this paper we generalize Georgiou-Pavon's result that a positive square matrix can be scaled uniquely to a column stochastic matrix which maps a given positive probability vector to another given positive probability vector. In the second part we prove that a positive quantum channel can be scaled to another positive quantum channel which maps a given positive definite density matrix to another given positive definite density matrix using Brouwer's fixed point theorem. This result proves the Georgiou-Pavon conjecture for two positive definite density matrices, made in their recent paper. We show that the fixed points are unique for certain pairs of positive definite density matrices. Bibliography: 15 titles.
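
    To make the matrix half of this result concrete, here is a minimal numerical sketch assuming a Sinkhorn-style alternating iteration converges for a strictly positive matrix (the paper's fixed-point argument is not reproduced, and the data are illustrative):

        import numpy as np

        def bridge_scale(A, p, q, iters=500):
            # find diagonal scalings so B = diag(r) @ A @ diag(c) is column
            # stochastic and maps p to q (requires sum(p) == sum(q) == 1)
            r, c = np.ones(A.shape[0]), np.ones(A.shape[1])
            for _ in range(iters):
                c = 1.0 / (r @ A)        # make every column of B sum to 1
                r = q / (A @ (c * p))    # make B @ p equal q
            return r[:, None] * A * c[None, :]

        A = np.random.default_rng(1).random((4, 4)) + 0.1  # positive matrix
        p = np.full(4, 0.25)
        q = np.array([0.1, 0.2, 0.3, 0.4])

        B = bridge_scale(A, p, q)
        print(B.sum(axis=0))  # ~[1 1 1 1], column stochastic
        print(B @ p)          # ~q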

  12. Non-scaling fixed field alternating gradient permanent magnet cancer therapy accelerator

    DOEpatents

    Trbojevic, Dejan

    2017-05-23

    A non-scaling fixed field alternating gradient accelerator includes a racetrack shape including a first straight section connected to a first arc section, the first arc section connected to a second straight section, the second straight section connected to a second arc section, and the second arc section connected to the first straight section; and matching cells configured to match particle orbits between the first straight section, the first arc section, the second straight section, and the second arc section. The accelerator includes the matching cells and an associated matching procedure enabling matched particle orbits at varying energies between an arc section and a straight section in the racetrack shape.

  13. Planning alternative organizational frameworks for a large scale educational telecommunications system served by fixed/broadcast satellites

    NASA Technical Reports Server (NTRS)

    Walkmeyer, J.

    1973-01-01

    This memorandum explores a host of considerations meriting attention from those concerned with designing organizational structures for the development and control of a large scale educational telecommunications system using satellites. Part of a broader investigation at Washington University into the potential uses of fixed/broadcast satellites in U.S. education, this study lays the groundwork for a later effort to spell out a small number of hypothetical organizational blueprints for such a system and to assess potential short- and long-term impacts. The memorandum consists of two main parts. Part A deals with subjects of system-wide concern, while Part B deals with matters related to specific system components.

  14. Nitrogen-fixing trees inhibit growth of regenerating Costa Rican rainforests.

    PubMed

    Taylor, Benton N; Chazdon, Robin L; Bachelot, Benedicte; Menge, Duncan N L

    2017-08-15

    More than half of the world's tropical forests are currently recovering from human land use, and this regenerating biomass now represents the largest carbon (C)-capturing potential on Earth. How quickly these forests regenerate is now a central concern for both conservation and global climate-modeling efforts. Symbiotic nitrogen-fixing trees are thought to provide much of the nitrogen (N) required to fuel tropical secondary regrowth and therefore to drive the rate of forest regeneration, yet we have a poor understanding of how these N fixers influence the trees around them. Do they promote forest growth, as expected if the new N they fix facilitates neighboring trees? Or do they suppress growth, as expected if competitive inhibition of their neighbors is strong? Using 17 consecutive years of data from tropical rainforest plots in Costa Rica that range from 10 y since abandonment to old-growth forest, we assessed how N fixers influenced the growth of forest stands and the demographic rates of neighboring trees. Surprisingly, we found no evidence that N fixers facilitate biomass regeneration in these forests. At the hectare scale, plots with more N-fixing trees grew slower. At the individual scale, N fixers inhibited their neighbors even more strongly than did nonfixing trees. These results provide strong evidence that N-fixing trees do not always serve the facilitative role to neighboring trees during tropical forest regeneration that is expected given their N inputs into these systems.

  15. H2, fixed architecture, control design for large scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1990-01-01

    The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.

  16. Two decades of progress in understanding and control of laser plasma instabilities in indirect drive inertial fusion

    DOE PAGES

    Montgomery, David S.

    2016-04-14

    Our understanding of laser-plasma instability (LPI) physics has improved dramatically over the past two decades through advancements in experimental techniques, diagnostics, and theoretical and modeling approaches. We have progressed from single-beam experiments—ns pulses with ~kJ energy incident on hundred-micron-scale target plasmas with ~keV electron temperatures—to ones involving nearly 2 MJ energy in 192 beams onto multi-mm-scale plasmas with temperatures ~4 keV. At the same time, we have also been able to use smaller-scale laser facilities to substantially improve our understanding of LPI physics and evaluate novel approaches to their control. These efforts have led to a change in paradigm for LPI research, ushering in an era of engineering LPI to accomplish specific objectives, from tuning capsule implosion symmetry to fixing nonlinear saturation of LPI processes at acceptable levels to enable the exploration of high energy density physics in novel plasma regimes. A tutorial is provided that reviews the progress in the field from the vantage of the foundational LPI experimental results. The pedagogical framework of the simplest models of LPI will be employed, but attention will also be paid to settings where more sophisticated models are needed to understand the observations. Prospects for the application of our improved understanding for inertial fusion (both indirect- and direct-drive) and other applications will also be discussed.

  17. Nanoimprint-Assisted Shear Exfoliation (NASE) for Producing Multilayer MoS2 Structures as Field-Effect Transistor Channel Arrays.

    PubMed

    Chen, Mikai; Nam, Hongsuk; Rokni, Hossein; Wi, Sungjin; Yoon, Jeong Seop; Chen, Pengyu; Kurabayashi, Katsuo; Lu, Wei; Liang, Xiaogan

    2015-09-22

    MoS2 and other semiconducting transition metal dichalcogenides (TMDCs) are of great interest due to their excellent physical properties and versatile chemistry. Although many recent research efforts have been directed toward exploring attractive properties associated with MoS2 monolayers, multilayer/few-layer MoS2 structures are indeed demanded by many practical scale-up device applications, because multilayer structures can provide sizable electronic/photonic state densities for driving upscalable electrical/optical signals. Currently there is a lack of processes capable of producing ordered, pristine multilayer structures of MoS2 (or other relevant TMDCs) with manufacturing-grade uniformity of thickness and electronic/photonic properties. In this article, we present a nanoimprint-based approach toward addressing this challenge. In this approach, termed nanoimprint-assisted shear exfoliation (NASE), a prepatterned bulk MoS2 stamp is pressed into a polymeric fixing layer, and the imprinted MoS2 features are exfoliated along a shear direction. This shear exfoliation significantly enhances the exfoliation efficiency and thickness uniformity of the exfoliated flakes in comparison with previously reported exfoliation processes. Furthermore, we have preliminarily demonstrated the fabrication of multiple transistors and biosensors exhibiting excellent device-to-device performance consistency. Finally, we present a molecular dynamics modeling analysis of the scaling behavior of NASE. This work holds significant potential to leverage the superior properties of MoS2 and other emerging TMDCs for practical scale-up device applications.

  18. Early clinical effects of the Dynesys system plus transfacet decompression through the Wiltse approach for the treatment of lumbar degenerative diseases

    PubMed Central

    Liu, Chao; Wang, Lei; Tian, Ji-wei

    2014-01-01

    Background: This study investigated the early clinical effects of the Dynesys system plus transfacet decompression through the Wiltse approach in treating lumbar degenerative diseases. Material/Methods: 37 patients with lumbar degenerative disease were treated with the Dynesys system plus transfacet decompression through the Wiltse approach. Results: All patients healed from surgery without severe complications. The average follow-up time was 20 months (9–36 months). Visual Analogue Scale and Oswestry Disability Index scores decreased significantly after surgery and at the final follow-up. There was a significant difference in the height of the intervertebral space and intervertebral range of motion (ROM) at the stabilized segment, but no significant changes were seen at the adjacent segments. X-ray scans showed no instability, internal fixation loosening, breakage, or distortion at follow-up. Conclusions: The Dynesys system plus transfacet decompression through the Wiltse approach is a therapeutic option for mild lumbar degenerative disease. This method can retain the structure of the lumbar posterior complex and the motion of the fixed segment, reduce the incidence of low back pain, and decompress the nerve root. PMID:24859831

  20. Interactions between moist heating and dynamics in atmospheric predictability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straus, D.M.; Huntley, M.A.

    1994-02-01

    The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.

  1. A Spaceborne Synthetic Aperture Radar Partial Fixed-Point Imaging System Using a Field-Programmable Gate Array-Application-Specific Integrated Circuit Hybrid Heterogeneous Parallel Acceleration Technique.

    PubMed

    Yang, Chen; Li, Bingyi; Chen, Liang; Wei, Chunpeng; Xie, Yizhuang; Chen, He; Yu, Wenyue

    2017-06-24

    With the development of satellite load technology and very large scale integrated (VLSI) circuit technology, onboard real-time synthetic aperture radar (SAR) imaging systems have become a solution for allowing rapid response to disasters. A key goal of the onboard SAR imaging system design is to achieve high real-time processing performance with severe size, weight, and power consumption constraints. In this paper, we analyse the computational burden of the commonly used chirp scaling (CS) SAR imaging algorithm. To reduce the system hardware cost, we propose a partial fixed-point processing scheme. The fast Fourier transform (FFT), which is the most computation-sensitive operation in the CS algorithm, is processed with fixed-point, while other operations are processed with single precision floating-point. With the proposed fixed-point processing error propagation model, the fixed-point processing word length is determined. The fidelity and accuracy relative to conventional ground-based software processors is verified by evaluating both the point target imaging quality and the actual scene imaging quality. As a proof of concept, a field-programmable gate array-application-specific integrated circuit (FPGA-ASIC) hybrid heterogeneous parallel accelerating architecture is designed and realized. The customized fixed-point FFT is implemented using the 130 nm complementary metal oxide semiconductor (CMOS) technology as a co-processor of the Xilinx xc6vlx760t FPGA. A single processing board requires 12 s and consumes 21 W to focus a 50-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384.
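
    A minimal way to see how such a word length might be chosen, assuming for simplicity that only the FFT input is quantized (the paper's error propagation model also covers twiddle factors and intermediate stages), is to sweep the word length and measure the output SNR against a floating-point reference:

        import numpy as np

        def quantize(x, word_len, frac_bits):
            # round to signed fixed point with saturation
            scale = 2.0 ** frac_bits
            lo, hi = -2 ** (word_len - 1), 2 ** (word_len - 1) - 1
            return np.clip(np.round(x * scale), lo, hi) / scale

        rng = np.random.default_rng(0)
        x = rng.standard_normal(16384) * 0.1  # stand-in for one range line
        ref = np.fft.fft(x)

        for w in (10, 12, 14, 16):
            xq = quantize(x, w, frac_bits=w - 2)  # input quantized only;
            err = np.fft.fft(xq) - ref            # FFT left in float here
            snr = 10 * np.log10((abs(ref) ** 2).sum() / (abs(err) ** 2).sum())
            print(f"{w}-bit input: SNR = {snr:5.1f} dB")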

  2. 32 CFR 37.560 - Must I be able to estimate project expenditures precisely in order to justify use of a fixed...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Fixed-Support Or Expenditure-Based Approach § 37.560 Must...

  3. Horizontal Residual Mean Circulation: Evaluation of Spatial Correlations in Coarse Resolution Ocean Models

    NASA Astrophysics Data System (ADS)

    Li, Y.; McDougall, T. J.

    2016-02-01

    Coarse resolution ocean models lack knowledge of spatial correlations between variables on scales smaller than the grid scale. Some researchers have shown that these spatial correlations play a role in the poleward heat flux. In order to evaluate the poleward transport induced by the spatial correlations at a fixed horizontal position, an equation is obtained to calculate the approximate transport from velocity gradients. The equation involves two terms that can be added to the quasi-Stokes streamfunction (based on temporal correlations) to incorporate the contribution of spatial correlations. Moreover, these new terms do not need to be parameterized and are ready to be evaluated using model data directly. In this study, data from a high resolution ocean model have been used to estimate the accuracy of this HRM approach for improving the horizontal property fluxes in coarse-resolution ocean models. A coarse grid is formed by sub-sampling and box-car averaging the fine grid. The transport calculated on the coarse grid is then compared to the transport on the original high resolution grid accumulated over a corresponding number of grid boxes. The preliminary results show that the estimates on coarse resolution grids roughly match the corresponding transports on high resolution grids.
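
    The effect being captured can be shown in one dimension: box-car averaging v and T separately before multiplying discards their correlated sub-grid fluctuations, so the coarse-grid product misses the eddy contribution mean(v'T'). Everything below is an invented illustration, not model data.

        import numpy as np

        rng = np.random.default_rng(1)
        n, box = 4096, 64
        x = np.linspace(0, 50 * 2 * np.pi, n)
        v = np.sin(x) + 0.1 * rng.standard_normal(n)   # velocity
        T = np.sin(x) + 0.1 * rng.standard_normal(n)   # correlated tracer

        coarse = lambda f: f.reshape(-1, box).mean(1)  # box-car average
        flux_fine = (v * T).mean()                     # "true" transport
        flux_coarse = (coarse(v) * coarse(T)).mean()   # what a coarse grid sees
        print(flux_fine, flux_coarse, flux_fine - flux_coarse)  # gap = eddy flux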

  5. Combining eddy-covariance and chamber measurements to determine the methane budget from a small, heterogeneous urban floodplain wetland park

    DOE PAGES

    Morin, T. H.; Bohrer, G.; Stefanik, K. C.; ...

    2017-02-17

    Methane (CH₄) emissions and carbon uptake in temperate freshwater wetlands act in opposing directions in the context of global radiative forcing. Large uncertainties exist in the rates of CH₄ emission, making it difficult to determine the extent to which CH₄ emissions counteract the carbon sequestration of wetlands. Urban temperate wetlands are typically small and feature highly heterogeneous land cover, posing an additional challenge to determining their CH₄ budget. The data analysis approach we introduce here combines two different CH₄ flux measurement techniques to overcome scale and heterogeneity problems and determine the overall CH₄ budget of a small, heterogeneous, urban wetland landscape. Temporally intermittent point measurements from non-steady-state chambers provided information about patch-level heterogeneity of fluxes, while continuous, high temporal resolution flux measurements using the eddy-covariance (EC) technique provided information about the temporal dynamics of the fluxes. A patch-level scaling parameterization was developed from the chamber data to scale the eddy covariance data to a ‘fixed-frame’, which corrects for variability in the spatial coverage of the eddy covariance observation footprint at any single point in time. Finally, by combining two measurement techniques at different scales, we addressed shortcomings of both techniques with respect to heterogeneous wetland sites.

  6. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    NASA Astrophysics Data System (ADS)

    Mantry, Sonny; Petriello, Frank

    2010-05-01

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h ≫ p_T ≫ Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.
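
    Schematically, and in our own notation rather than a transcription of the paper's equations, the factorized distribution takes the form

        \frac{d\sigma}{dp_T^2\, dY} \;\sim\; H(m_h,\mu)\,
            \big[\, B_n \otimes B_{\bar n} \otimes \mathcal{S}^{-1} \,\big](p_T, Y; \mu),
        \qquad m_h \gg p_T \gg \Lambda_{\mathrm{QCD}},

    where H is the hard function, the two B factors are the impact-parameter beam functions, and \mathcal{S}^{-1} is the inverse soft function; running each factor to its natural scale with its renormalization group equation is what resums the large logarithms.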

  7. Tracing the evolutionary path to nitrogen-fixing crops.

    PubMed

    Delaux, Pierre-Marc; Radhakrishnan, Guru; Oldroyd, Giles

    2015-08-01

    Nitrogen-fixing symbioses between plants and bacteria are restricted to a few plant lineages. The plant partner benefits from these associations by gaining access to the pool of atmospheric nitrogen. By contrast, other plant species, including all cereals, rely only on the scarce nitrogen present in the soil and what they can glean from associative bacteria. Global cereal yields from conventional agriculture are dependent on the application of massive levels of chemical fertilisers. Engineering nitrogen-fixing symbioses into cereal crops could in part mitigate the economic and ecological impacts caused by the overuse of fertilisers and provide better global parity in crop yields. Comparative phylogenetics and phylogenomics are powerful tools to identify genetic and genomic innovations behind key plant traits. In this review we highlight recent discoveries made using such approaches and we discuss how these approaches could be used to help direct the engineering of nitrogen-fixing symbioses into cereals. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Measured noise of a scale model high speed propeller at simulated takeoff/approach conditions

    NASA Technical Reports Server (NTRS)

    Woodward, Richard P.

    1987-01-01

    A model high-speed advanced propeller, SR-7A, was tested in the NASA Lewis 9x15 foot anechoic wind tunnel at simulated takeoff/approach conditions of 0.2 Mach number. These tests were in support of the full-scale Propfan Test Assessment (PTA) flight program. Acoustic measurements were taken with fixed microphone arrays and with an axially translating microphone probe. Limited aerodynamic measurements were also taken to establish the propeller operating conditions. Tests were conducted with the propeller alone and with three downstream wing configurations. The propeller was run over a range of blade setting angles from 32.0 deg. to 43.6 deg., tip speeds from 183 to 290 m/sec (600 to 950 ft/sec), and angles of attack from -10 deg. to +15 deg. The propeller-alone BPF tone noise was found to increase 10 dB in the flyover plane at 15 deg. propeller axis angle of attack. Installation of the straight wing at a minimum spacing of 0.54 wing chord increased the tone noise 5 dB under the wing at 10 deg. propeller axis angle of attack, while a similarly spaced inboard upswept wing increased the tone noise only 2 dB.

  9. High-Resolution Air Pollution Mapping with Google Street View Cars: Exploiting Big Data.

    PubMed

    Apte, Joshua S; Messier, Kyle P; Gani, Shahzad; Brauer, Michael; Kirchstetter, Thomas W; Lunden, Melissa M; Marshall, Julian D; Portier, Christopher J; Vermeulen, Roel C H; Hamburg, Steven P

    2017-06-20

    Air pollution affects billions of people worldwide, yet ambient pollution measurements are limited for much of the world. Urban air pollution concentrations vary sharply over short distances (≪1 km) owing to unevenly distributed emission sources, dilution, and physicochemical transformations. Accordingly, even where present, conventional fixed-site pollution monitoring methods lack the spatial resolution needed to characterize heterogeneous human exposures and localized pollution hotspots. Here, we demonstrate a measurement approach to reveal urban air pollution patterns at 4-5 orders of magnitude greater spatial precision than possible with current central-site ambient monitoring. We equipped Google Street View vehicles with a fast-response pollution measurement platform and repeatedly sampled every street in a 30-km² area of Oakland, CA, developing the largest urban air quality data set of its type. Resulting maps of annual daytime NO, NO₂, and black carbon at 30-m scale reveal stable, persistent pollution patterns with surprisingly sharp small-scale variability attributable to local sources, up to 5-8× within individual city blocks. Since local variation in air quality profoundly impacts public health and environmental equity, our results have important implications for how air pollution is measured and managed. If validated elsewhere, this readily scalable measurement approach could address major air quality data gaps worldwide.

  10. Memory matters: influence from a cognitive map on animal space use.

    PubMed

    Gautestad, Arild O

    2011-10-21

    A vertebrate individual's cognitive map provides a capacity for site fidelity and long-distance returns to favorable patches. Fractal-geometrical analysis of individual space use based on collection of telemetry fixes makes it possible to verify the influence of a cognitive map on the spatial scatter of habitat use and also to what extent space use has been of a scale-specific versus a scale-free kind. This approach rests on a statistical mechanical level of system abstraction, where micro-scale details of behavioral interactions are coarse-grained to macro-scale observables like the fractal dimension of space use. In this manner, the magnitude of the fractal dimension becomes a proxy variable for distinguishing between main classes of habitat exploration and site fidelity, like memory-less (Markovian) Brownian motion and Levy walk and memory-enhanced space use like Multi-scaled Random Walk (MRW). In this paper previous analyses are extended by exploring MRW simulations under three scenarios: (1) central place foraging, (2) behavioral adaptation to resource depletion (avoidance of latest visited locations) and (3) transition from MRW towards Levy walk by narrowing memory capacity to a trailing time window. A generalized statistical-mechanical theory with the power to model cognitive map influence on individual space use will be important for statistical analyses of animal habitat preferences and the mechanics behind site fidelity and home ranges. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Ionospheric effects in uncalibrated phase delay estimation and ambiguity-fixed PPP based on raw observable model

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Shi, Chuang; Lou, Yidong; Liu, Jingnan

    2015-05-01

    Zero-difference (ZD) ambiguity resolution (AR) reveals the potential to further improve the performance of precise point positioning (PPP). Traditionally, PPP AR is achieved with the Melbourne-Wübbena and ionosphere-free combinations, in which the ionosphere effect is removed. To exploit the ionosphere characteristics, PPP AR with L1 and L2 raw observables has also been developed recently. In this study, we apply this new approach in uncalibrated phase delay (UPD) generation and ZD AR and compare it with the traditional model. The raw observable processing strategy treats each ionosphere delay as an unknown parameter. In this manner, both an a priori ionosphere correction model and its spatio-temporal correlation can be employed as constraints to improve the ambiguity resolution. However, theoretical analysis indicates that for the wide-lane (WL) UPD retrieved from L1/L2 ambiguities to benefit from this raw observable approach, a high precision ionosphere correction of better than 0.7 total electron content units (TECU) is essential. This conclusion is then confirmed with over 1 year of data collected at about 360 stations. First, both global and regional ionosphere models were generated and evaluated, and the results demonstrated that, for large-scale ionosphere modeling, only an accuracy of 3.9 TECU can be achieved on average for the vertical delays; this accuracy improves to about 0.64 TECU when a dense network is involved. Based on these ionosphere products, WL/narrow-lane (NL) UPDs are then extracted with the raw observable model. The NL ambiguity reveals better stability and consistency compared to the traditional approach. Nonetheless, the WL ambiguity can hardly be improved even when constrained with the high spatio-temporal resolution ionospheric corrections. Applying both approaches in PPP-RTK, it is interesting to find that the traditional model is more efficient in AR, as evidenced by the shorter time to first fix, while the three-dimensional positioning accuracy of the RAW model outperforms the combination model by about . This reveals that, with the current ionosphere models, there is actually no optimal strategy for dual-frequency ZD ambiguity resolution; the combination approach and the raw approach each have merits and demerits.
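
    For orientation, the raw (undifferenced, uncombined) dual-frequency observable model underlying this strategy can be written generically, in our notation rather than the paper's, as

        P_{r,f}^{s} = \rho_r^{s} + c\,(dt_r - dt^{s}) + T_r^{s} + \mu_f I_r^{s} + \varepsilon_P,
        L_{r,f}^{s} = \rho_r^{s} + c\,(dt_r - dt^{s}) + T_r^{s} - \mu_f I_r^{s}
                      + \lambda_f\,(N_{r,f}^{s} + b_{r,f} - b_f^{s}) + \varepsilon_L,
        \qquad \mu_f = f_1^2 / f_f^2,

    where each slant ionospheric delay I_r^s is carried as an explicit unknown, so an a priori ionosphere model and its spatio-temporal correlation can constrain it directly, and the fractional biases b are what the UPD products calibrate so that the ambiguities N can be fixed to integers.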

  12. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
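
    The abstract does not reproduce the model itself, so as a stand-in the sketch below writes a textbook fixed-cost (uncapacitated) location ILP of the same flavor with PuLP; the sizes and costs are invented:

        import pulp

        demands, cells = range(6), range(4)
        fixed = [10, 14, 12, 11]                           # cell opening costs
        serve = [[(i - j) ** 2 + 1 for j in cells] for i in demands]

        prob = pulp.LpProblem("gblp_toy", pulp.LpMinimize)
        y = pulp.LpVariable.dicts("open", cells, cat="Binary")
        x = pulp.LpVariable.dicts("assign", (demands, cells), cat="Binary")

        prob += (pulp.lpSum(fixed[j] * y[j] for j in cells)
                 + pulp.lpSum(serve[i][j] * x[i][j] for i in demands for j in cells))
        for i in demands:
            prob += pulp.lpSum(x[i][j] for j in cells) == 1  # serve every demand
            for j in cells:
                prob += x[i][j] <= y[j]                      # only open cells serve

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print("open cells:", [j for j in cells if y[j].value() > 0.5])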

  13. Observing a light dark matter beam with neutrino experiments

    NASA Astrophysics Data System (ADS)

    Deniverville, Patrick; Pospelov, Maxim; Ritz, Adam

    2011-10-01

    We consider the sensitivity of fixed-target neutrino experiments at the luminosity frontier to light stable states, such as those present in models of MeV-scale dark matter. To ensure the correct thermal relic abundance, such states must annihilate via light mediators, which in turn provide an access portal for direct production in colliders or fixed targets. Indeed, this framework endows the neutrino beams produced at fixed-target facilities with a companion “dark matter beam,” which may be detected via an excess of elastic scattering events off electrons or nuclei in the (near-)detector. We study the high-luminosity proton fixed-target experiments at LSND and MiniBooNE, and determine that the ensuing sensitivity to light dark matter generally surpasses that of other direct probes. For scenarios with a kinetically mixed U(1)' vector mediator of mass m_V, we find that a large volume of parameter space is excluded for m_DM ~ 1-5 MeV, covering vector masses 2m_DM ≲ m_V ≲ m_η and a range of kinetic mixing parameters reaching as low as κ ~ 10^-5.

  14. Inconsistent Responding in a Criminal Forensic Setting: An Evaluation of the VRIN-r and TRIN-r Scales of the MMPI-2-RF.

    PubMed

    Gu, Wen; Reddy, Hima B; Green, Debbie; Belfi, Brian; Einzig, Shanah

    2017-01-01

    Criminal forensic evaluations are complicated by the risk that examinees will respond in an unreliable manner. Unreliable responding could occur due to lack of personal investment in the evaluation, severe mental illness, or low cognitive abilities. In this study, 31% of Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) profiles were invalid due to random or fixed responding (T score ≥ 80 on the VRIN-r or TRIN-r scales) in a sample of pretrial criminal defendants evaluated in the context of treatment for competency restoration. Hierarchical regression models showed that symptom exaggeration variables, as measured by inconsistently reported psychiatric symptoms, contributed over and above education and intellectual functioning to the prediction of both random responding and fixed responding. Psychopathology variables, as measured by mood disturbance, better predicted fixed responding after controlling for estimates of cognitive abilities, but did not improve the prediction of random responding. These findings suggest that random and fixed responding are affected not only by education and intellectual functioning, but also by intentional exaggeration and aspects of psychopathology. Measures of intellectual functioning, effort, and response style should be considered for administration in conjunction with self-report personality measures to rule out rival hypotheses for invalid profiles.
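
    A minimal sketch of the hierarchical-regression logic, on simulated stand-in data rather than the study's: enter education and intellectual functioning first, then test whether a symptom-exaggeration variable adds incremental variance in predicting an inconsistency score.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 200
        edu = rng.normal(12, 2, n)
        iq = rng.normal(100, 15, n)
        exag = rng.normal(0, 1, n)                 # symptom exaggeration index
        vrin = 60 - 0.5 * edu - 0.1 * iq + 5 * exag + rng.normal(0, 5, n)

        step1 = sm.OLS(vrin, sm.add_constant(np.column_stack([edu, iq]))).fit()
        step2 = sm.OLS(vrin, sm.add_constant(np.column_stack([edu, iq, exag]))).fit()
        print(f"R2 step 1 = {step1.rsquared:.3f}, step 2 = {step2.rsquared:.3f}")
        print(f"delta R2  = {step2.rsquared - step1.rsquared:.3f}")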

  15. Fat fractal scaling of drainage networks from a random spatial network model

    USGS Publications Warehouse

    Karlinger, Michael R.; Troutman, Brent M.

    1992-01-01

    An alternative quantification of the scaling properties of river channel networks is explored using a spatial network model. Whereas scaling descriptions of drainage networks have previously been presented using a fractal analysis primarily of the channel lengths, we illustrate the scaling of the surface area of the channels defining the network pattern with an exponent which is independent of the fractal dimension but not of the fractal nature of the network. The methodology presented is a fat fractal analysis in which the drainage basin minus the channel area is considered the fat fractal. Random channel networks within a fixed basin area are generated on grids of different scales. The sample channel networks generated by the model have a common outlet of fixed width and a rule of upstream channel narrowing specified by a diameter branching exponent using hydraulic and geomorphologic principles. Scaling exponents are computed for each sample network on a given grid size and are regressed against network magnitude. Results indicate that the size of the exponents is related to the magnitude of the networks and generally decreases as network magnitude increases. Cases showing differences in scaling exponents at like magnitudes suggest a direction for future work regarding other topologic basin characteristics as potential explanatory variables.
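
    One common fat-fractal recipe, not necessarily the authors' exact procedure, estimates the exponent from how the area of the ε-fattened pattern approaches its fine-scale limit, A(ε) − A(0) ~ ε^β; the sketch below applies it to a crude stand-in "channel" rasterized on a grid.

        import numpy as np
        from scipy.ndimage import binary_dilation

        rng = np.random.default_rng(3)
        img = np.zeros((256, 256), bool)
        r, c = 128, 0
        while c < 255:                 # crude stand-in for a channel course
            img[r, c] = True
            r = int(np.clip(r + rng.integers(-1, 2), 0, 255))
            c += 1

        eps = np.array([1, 2, 4, 8, 16])
        area = np.array([binary_dilation(img, iterations=int(e)).mean()
                         for e in eps])
        a0 = img.mean()                # proxy for the epsilon -> 0 area
        beta = np.polyfit(np.log(eps), np.log(area - a0), 1)[0]
        print("fat-fractal exponent estimate:", beta)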

  16. Impact of metal and ceramic fixed orthodontic appliances on judgments of beauty and other face-related attributes.

    PubMed

    Fonseca, Lílian Martins; Araújo, Telma Martins de; Santos, Aline Rôde; Faber, Jorge

    2014-02-01

    Physical attributes, behavior, and personal ornaments exert a direct influence on how a person's beauty and personality are judged. The aim of this study was to investigate how people who wear a fixed orthodontic appliance see themselves and are seen by others in social settings. A total of 60 adults evaluated their own smiling faces in 3 different scenarios: without a fixed orthodontic appliance, wearing a metal fixed orthodontic appliance, and wearing an esthetic fixed orthodontic appliance. Furthermore, 15 adult raters randomly assessed the same faces in standardized front-view facial photographs. Both the subjects and the raters answered a questionnaire in which they evaluated criteria on a numbered scale ranging from 0 to 10. The models judged their own beauty, and the raters assigned scores to beauty, age, intelligence, ridiculousness, extroversion, and success. The self-evaluations showed decreased beauty scores (P <0.0001) when a fixed orthodontic appliance, especially a metal one, was being worn. The raters' scores, by contrast, showed no statistically significant difference between the 3 situations in any of the 6 criteria analyzed: a fixed orthodontic appliance did not affect how others assessed personal attributes. However, fixed orthodontic appliances apparently changed the subjects' self-perceptions when they looked in the mirror. Copyright © 2014 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.

  17. Is nitrogen the next carbon?

    NASA Astrophysics Data System (ADS)

    Battye, William; Aneja, Viney P.; Schlesinger, William H.

    2017-09-01

    Just as carbon fueled the Industrial Revolution, nitrogen has fueled an Agricultural Revolution. The use of synthetic nitrogen fertilizers and the cultivation of nitrogen-fixing crops both expanded exponentially during the last century, with most of the increase occurring after 1960. As a result, the current flux of reactive, or fixed, nitrogen compounds to the biosphere due to human activities is roughly equivalent to the total flux of fixed nitrogen from all natural sources, both on land masses and in the world's oceans. Natural fluxes of fixed nitrogen are subject to very large uncertainties, but anthropogenic production of reactive nitrogen has increased almost fivefold in the last 60 years, and this rapid increase in anthropogenic fixed nitrogen has removed any uncertainty on the relative importance of anthropogenic fluxes to the natural budget. The increased use of nitrogen has been critical for increased crop yields and protein production needed to keep pace with the growing world population. However, similar to carbon, the release of fixed nitrogen into the natural environment is linked to adverse consequences at local, regional, and global scales. Anthropogenic contributions of fixed nitrogen continue to grow relative to the natural budget, with uncertain consequences.

  18. Scaling future tropical cyclone damage with global mean temperature

    NASA Astrophysics Data System (ADS)

    Geiger, T.; Bresch, D.; Frieler, K.

    2017-12-01

    Tropical cyclones (TC) are one of the most damaging natural hazards and severely affect many countries around the globe each year. Their nominal impact is projected to increase substantially as the exposed coastal population grows, per capita income increases, and anthropogenic climate change manifests. The magnitude of this increase, however, varies across regions and is obscured by the stochastic behaviour of TCs, so far impeding a rigorous quantification of trends in TC damage with global mean temperature (GMT) rise. Here, we build on the large sample of spatially explicit TC simulations generated within ISIMIP(2b) for 1) pre-industrial conditions, 2) the historical period, and 3) future projections under RCP2.6 and RCP6.0 to estimate future TC damage assuming fixed present-day socio-economic conditions or SSP-based future projections of population patterns and income. Damage estimates will be based on region-specific empirical damage models derived from reported damages and accounting for regional characteristics of vulnerability. Different combinations of 1) socio-economic drivers with pre-industrial climate or 2) changing climate with fixed socio-economic conditions will be used to derive functional relationships between regionally aggregated changes in damages on one hand and global mean temperature and socio-economic predictors on the other hand. The obtained region-specific scaling of future TC damage with GMT provides valuable input for IPCC's special report on the impacts of global warming of 1.5°C by quantifying the incremental changes in impact with global warming. The approach allows for an update of damage functions used in integrated assessment models, and contributes to assessing the adequacy of climate mitigation and adaptation strategies.

  19. Holography as a highly efficient renormalization group flow. I. Rephrasing gravity

    NASA Astrophysics Data System (ADS)

    Behr, Nicolas; Kuperstein, Stanislav; Mukhopadhyay, Ayan

    2016-07-01

    We investigate how the holographic correspondence can be reformulated as a generalization of Wilsonian renormalization group (RG) flow in a strongly interacting large-N quantum field theory. We first define a highly efficient RG flow as one in which the Ward identities related to local conservation of energy, momentum and charges preserve the same form at each scale. To achieve this, it is necessary to redefine the background metric and external sources at each scale as functionals of the effective single-trace operators. These redefinitions also absorb the contributions of the multitrace operators to these effective Ward identities. Thus, the background metric and external sources become effectively dynamical, reproducing the dual classical gravity equations in one higher dimension. Here, we focus on reconstructing the pure gravity sector as a highly efficient RG flow of the energy-momentum tensor operator, leaving the explicit constructive field theory approach for generating such RG flows to the second part of the work. We show that special symmetries of the highly efficient RG flows carry information through which we can decode the gauge fixing of bulk diffeomorphisms in the corresponding gravity equations. We also show that the highly efficient RG flow which reproduces a given classical gravity theory in a given gauge is unique provided the endpoint can be transformed to a nonrelativistic fixed point with a finite number of parameters under a universal rescaling. The results obtained here are used in the second part of this work, where we do an explicit field-theoretic construction of the RG flow and obtain the dual classical gravity theory.

  20. The effects of hillslope-scale variability in burn severity on post-fire sediment delivery

    NASA Astrophysics Data System (ADS)

    Quinn, Dylan; Brooks, Erin; Dobre, Mariana; Lew, Roger; Robichaud, Peter; Elliot, William

    2017-04-01

    With the increasing frequency of wildfire and the costs associated with managing burned landscapes, there is an increasing need for decision support tools that can be used to assess the effectiveness of targeted post-fire management strategies. The susceptibility of landscapes to post-fire soil erosion and runoff has been closely linked with the severity of the wildfire. Wildfire severity maps are often spatially complex and largely dependent upon total vegetative biomass, fuel moisture patterns, direction of burn, wind patterns, and other factors. The decision to apply targeted treatment to a specific landscape, and the amount of resources dedicated to treating it, should ideally be based on the potential for excessive sediment delivery from a particular hillslope. Recent work has suggested that the delivery of sediment from a hillslope to a downstream water body is highly influenced by the distribution of wildfire severity across the hillslope, and that models that do not capture this hillslope-scale variability will not provide reliable sediment and runoff predictions. In this project we compare detailed (10 m) grid-based model predictions to lumped and semi-lumped hillslope approaches where hydrologic parameters are fixed based on hillslope-scale averaging techniques. We use the watershed-scale version of the process-based Water Erosion Prediction Project (WEPP) model and its GIS interface, GeoWEPP, to simulate fire impacts on runoff and sediment delivery using burn severity maps at the watershed scale. The flowpath option in WEPP allows the most detailed representation of wildfire severity patterns (10 m), but depending upon the size of the watershed, simulations are time consuming and computationally demanding. The hillslope version is a simpler approach which assigns wildfire severity based on the severity level assigned to the majority of the hillslope area. In the third approach we divided hillslopes into overland flow elements (OFEs) and assigned representative input values on a finer scale within single hillslopes. Each of these approaches was compared for several large wildfires in the mountainous ranges of central Idaho, USA. Simulations indicated that predictions based on lumped hillslope modeling over-predict sediment transport by as much as 4.8x in areas of high to moderate burn severity. Annual sediment yield within the simulated watersheds ranged from 1.7 tonnes/ha to 6.8 tonnes/ha. The disparity between the simulated sediment yields of these approaches was attributed to the hydrologic connectivity of the burn patterns within the hillslope: high infiltration rates between high severity sites can greatly reduce the delivery of sediment. This research underlines the importance of accurately representing soil burn severity along individual hillslopes in hydrologic models and the need for modeling approaches that capture this variability to reliably simulate soil erosion.

  1. 32 CFR 37.565 - May I use a hybrid instrument that provides fixed support for only a portion of a project?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Defense OFFICE OF THE SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business Evaluation Fixed-Support Or Expenditure-Based Approach § 37.565 May I use a...

  2. Implementing system simulation of C3 systems using autonomous objects

    NASA Technical Reports Server (NTRS)

    Rogers, Ralph V.

    1987-01-01

    The basis of all conflict recognition in simulation is a common frame of reference. Synchronous discrete-event simulation relies on fixed points in time as the basic frame of reference. Asynchronous discrete-event simulation relies on fixed points in the model space as the basic frame of reference. Neither approach provides sufficient support for autonomous objects. The use of a spatial template as a frame of reference is proposed to address these insufficiencies. The concept of a spatial template is defined and an implementation approach offered. The use of this approach to analyze the integration of sensor data associated with Command, Control, and Communication (C3) systems is discussed.

  3. Optimal crop selection and water allocation under limited water supply in irrigation

    NASA Astrophysics Data System (ADS)

    Stange, Peter; Grießbach, Ulrike; Schütze, Niels

    2015-04-01

    Due to climate change, extreme weather conditions such as droughts may have an increasing impact on irrigated agriculture. To cope with limited water resources in irrigation systems, a new decision support framework is being developed which focuses on integrated management of both irrigation water supply and demand at the same time. For modeling the regional water demand, local (and site-specific) water demand functions are used, derived from optimized agronomic responses at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPFs). These functions take into account different soil types, crops, and stochastically generated climate scenarios. The SCWPFs are used to compute the water demand under different conditions, e.g., variable and fixed costs. This generic approach enables the consideration both of multiple crops at farm scale and of the aggregated response to water pricing at regional scale, for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance IRrigation) project a prototype of a decision support system is being developed which helps to evaluate combined water supply and demand management policies.

  4. Quantum criticality of the two-channel pseudogap Anderson model: universal scaling in linear and non-linear conductance.

    PubMed

    Wu, Tsan-Pei; Wang, Xiao-Qun; Guo, Guang-Yu; Anders, Frithjof; Chung, Chung-Hou

    2016-05-05

    The quantum criticality of the two-lead two-channel pseudogap Anderson impurity model is studied. Based on the non-crossing approximation (NCA) and numerical renormalization group (NRG) approaches, we calculate both the linear and nonlinear conductance of the model at finite temperatures with a voltage bias and a power-law vanishing conduction electron density of states, ρ_c(ω) ∝ |ω − μ_F|^r (0 < r < 1), near the Fermi energy μ_F. At a fixed lead-impurity hybridization, a quantum phase transition from the two-channel Kondo (2CK) to the local moment (LM) phase is observed with increasing r from r = 0 to r = r_c < 1. Surprisingly, in the 2CK phase, power-law scalings different from the well-known [Formula: see text] or [Formula: see text] form are found. Moreover, novel power-law scalings in the conductances at the 2CK-LM quantum critical point are identified. Clear distinctions are found in the critical exponents between the linear and non-linear conductance at criticality. The implications of these two distinct quantum critical properties for non-equilibrium quantum criticality in general are discussed.

  5. Testing the paradigms of the glass transition in colloids

    NASA Astrophysics Data System (ADS)

    Zia, Roseanna; Wang, Jialun; Peng, Xiaoguang; Li, Qi; McKenna, Gregory

    2017-11-01

    Many molecular liquids form a glass when cooled quickly enough. This glass state is path dependent and out of equilibrium, as measured by the Kovacs signature experiments, i.e., intrinsic isotherms, asymmetry of approach, and the memory effect. The reasons for this path and time dependence are not fully understood, owing to fast molecular relaxations. Colloids provide a natural way to model such behavior, owing to the disparity between colloidal and solvent time scales that can slow the dynamics. To shed light on the ambiguity of the glass transition, we study, via large-scale dynamic simulation, hard-sphere colloidal glasses after volume-fraction jumps, where particle size increases at fixed system volume, followed by protocols of the McKenna-Kovacs signature experiments. During and following each jump, the positions, velocities, and particle-phase stress are tracked and utilized to characterize relaxation time scales. The impact of both quench depth and quench rate on arrested dynamics and "state" variables is explored. In addition, we expand our view to various structural signatures, and a rearrangement mechanism is proposed. The results provide insight not only into the existence of an "ideal" glass transition, but also into the role of structure in such a dense amorphous system.

  6. BENCH-SCALE EVALUATION OF CALCIUM SORBENTS FOR ACID GAS EMISSION CONTROL

    EPA Science Inventory

    Calcium sorbents for acid gas emission control were evaluated for effectiveness in removing SO2/HCl and SO2/NO from simulated incinerator and boiler flue gases. All tests were conducted in a bench-scale reactor (fixed-bed) simulating fabric filter conditions in an acid gas remova...

  7. Linking Item Parameters to a Base Scale

    ERIC Educational Resources Information Center

    Kang, Taehoon; Petersen, Nancy S.

    2012-01-01

    This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord in "Appl Psychol Measure"…
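
    As a reminder of what the truncated reference points to, Stocking-Lord linking finds the slope A and intercept B that place new-form item parameters onto the base scale (a* = a/A, b* = A·b + B) by matching test characteristic curves; the sketch below recovers a known transformation from made-up 2PL parameters.

        import numpy as np
        from scipy.optimize import minimize

        a_new = np.array([1.2, 0.8, 1.5, 1.0])
        b_new = np.array([-0.5, 0.3, 1.1, -1.2])
        a_base = a_new / 1.1            # pretend base-scale versions created
        b_base = 1.1 * b_new + 0.2      # with A = 1.1, B = 0.2

        theta = np.linspace(-4, 4, 41)

        def tcc(a, b):  # 2PL test characteristic curve on a theta grid
            return (1 / (1 + np.exp(-1.7 * a[:, None] * (theta - b[:, None])))).sum(0)

        def loss(v):
            A, B = v
            return ((tcc(a_base, b_base) - tcc(a_new / A, A * b_new + B)) ** 2).sum()

        A, B = minimize(loss, x0=[1.0, 0.0]).x
        print(A, B)  # should recover roughly 1.1 and 0.2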

  8. Numerical and flight simulator test of the flight deterioration concept

    NASA Technical Reports Server (NTRS)

    Mccarthy, J.; Norviel, V.

    1982-01-01

    Manned flight simulator response to theoretical wind shear profiles was studied in an effort to calibrate fixed-stick and pilot-in-the-loop numerical models of jet transport aircraft on approach to landing. Results of the study indicate that both fixed-stick and pilot-in-the-loop models overpredict the deleterious effects of aircraft approaches when compared to pilot performance in the manned simulator. Although the pilot-in-the-loop model does a better job than does the fixed-stick model, the study suggests that the pilot-in-the-loop model is suitable for use in meteorological predictions of adverse low-level wind shear along approach and departure courses to identify situations in which pilots may find difficulty. The model should not be used to predict the success or failure of a specific aircraft. It is suggested that the pilot model be used as part of a ground-based Doppler radar low-level wind shear detection and warning system.

  9. Thermal Inspection of a Composite Fuselage Section Using a Fixed Eigenvector Principal Component Analysis Method

    NASA Technical Reports Server (NTRS)

    Zalameda, Joseph N.; Bolduc, Sean; Harman, Rebecca

    2017-01-01

    A composite fuselage aircraft forward section was inspected with flash thermography. The fuselage section is 24 feet long and approximately 8 feet in diameter. The structure is primarily configured as a composite sandwich structure of carbon fiber face sheets with a Nomex(Trademark) honeycomb core. The outer surface area was inspected. The thermal data consisted of 477 data sets totaling over 227 gigabytes in size. Principal component analysis (PCA) was used to process the data sets for substructure and defect detection. A fixed eigenvector approach using a global covariance matrix was used and compared to a varying eigenvector approach. The fixed eigenvector approach was demonstrated to be a practical analysis method for the detection and interpretation of various defects such as paint thickness variation, possible water intrusion damage, and delamination damage. In addition, inspection considerations are discussed, including the coordinate system layout, manipulation of the fuselage section, and the manual scanning technique used for full coverage.
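
    The core of the fixed-eigenvector idea can be sketched in a few lines: pool time traces from every sequence into one global covariance, fix its leading temporal eigenvectors, and project each sequence onto that same basis so the component images are directly comparable across data sets. Sizes and data below are stand-ins, not the flight hardware's.

        import numpy as np

        rng = np.random.default_rng(0)
        # five stand-in flash sequences: 100 time samples x (50x40) pixels
        sequences = [rng.standard_normal((100, 50 * 40)) for _ in range(5)]

        traces = np.hstack(sequences)             # 100 x (5*2000) time traces
        traces = traces - traces.mean(1, keepdims=True)
        cov = traces @ traces.T / (traces.shape[1] - 1)   # global covariance
        w, v = np.linalg.eigh(cov)
        fixed_vecs = v[:, ::-1][:, :2]            # two dominant eigenvectors

        for s in sequences:                       # SAME basis for every set
            comp = fixed_vecs.T @ (s - s.mean(0))
            images = comp.reshape(2, 50, 40)      # component images
            print(images.shape)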

  10. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal-area distributions at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds approximately to a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
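
    The computational advantage noted above follows because fixing the dipole positions makes the inversion linear: the modeled field is then linear in the dipole moments, and a single least-squares solve replaces the poorly convergent nonlinear search over positions. A minimal sketch (observation points, dipole sites, and field data are placeholders, not Magsat values):

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4*pi) in SI units

def dipole_field_matrix(obs, src):
    """Matrix G with B.ravel() = G @ m.ravel() for fixed dipole positions.
    obs: (M, 3) observation points; src: (N, 3) dipole positions."""
    G = np.zeros((3 * len(obs), 3 * len(src)))
    for i, ro in enumerate(obs):
        for j, rs in enumerate(src):
            d = ro - rs
            r = np.linalg.norm(d)
            rhat = d / r
            # B = (mu0/4pi) * [3 (m.rhat) rhat - m] / r^3, linear in m
            G[3*i:3*i+3, 3*j:3*j+3] = MU0_4PI * (3 * np.outer(rhat, rhat) - np.eye(3)) / r**3
    return G

# With field observations B_obs of shape (M, 3), the moments follow from
# m, *_ = np.linalg.lstsq(dipole_field_matrix(obs, src), B_obs.ravel(), rcond=None)
```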

  11. The succinonitrile triple-point standard: a fixed point to improve the accuracy of temperature measurements in the clinical laboratory.

    PubMed

    Mangum, B W

    1983-07-01

    In an investigation of the melting and freezing behavior of succinonitrile, the triple-point temperature was determined to be 58.0805 degrees C, with an estimated uncertainty of +/- 0.0015 degrees C relative to the International Practical Temperature Scale of 1968 (IPTS-68). The triple-point temperature of this material is evaluated as a temperature fixed point, and some clinical laboratory applications of this fixed point are proposed. In conjunction with the gallium and ice points, the availability of succinonitrile permits thermistor thermometers to be calibrated accurately and easily on the IPTS-68.
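
    As a hedged illustration of the calibration practice described here, the sketch below fits the standard Steinhart-Hart thermistor equation through three fixed points (ice, gallium, and the succinonitrile triple point). The resistance readings are invented for the example; only the fixed-point temperatures follow the abstract and accepted scale values.

```python
import numpy as np

points = {                       # T (K) : R (ohm); resistances are illustrative
    273.15 + 0.0000:  9796.0,    # ice point
    273.15 + 29.7646: 3264.0,    # gallium melting point
    273.15 + 58.0805: 1254.0,    # succinonitrile triple point (this work)
}

# Steinhart-Hart: 1/T = A + B ln R + C (ln R)^3; linear in A, B, C,
# so three fixed points determine the coefficients exactly.
T = np.array(list(points.keys()))
lnR = np.log(np.array(list(points.values())))
A, B, C = np.linalg.solve(np.column_stack([np.ones(3), lnR, lnR**3]), 1.0 / T)

def temperature(R_ohm):
    """Temperature in kelvin from a thermistor resistance reading."""
    x = np.log(R_ohm)
    return 1.0 / (A + B * x + C * x**3)
```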

  12. Fixed-Cell Imaging of Schizosaccharomyces pombe.

    PubMed

    Hagan, Iain M; Bagley, Steven

    2016-07-01

    The acknowledged genetic malleability of fission yeast has been matched by impressive cytology to drive major advances in our understanding of basic molecular cell biological processes. In many of the more recent studies, traditional approaches of fixation followed by processing to accommodate classical staining procedures have been superseded by live-cell imaging approaches that monitor the distribution of fusion proteins between a molecule of interest and a fluorescent protein. Although such live-cell imaging is uniquely informative for many questions, fixed-cell imaging remains the better option for others and is an important, sometimes critical, complement to the analysis of fluorescent fusion proteins by live-cell imaging. Here, we discuss the merits of fixed- and live-cell imaging as well as specific issues for fluorescence microscopy imaging of fission yeast. © 2016 Cold Spring Harbor Laboratory Press.

  13. A high-resolution peak fractionation approach for streamlined screening of nuclear-factor-E2-related factor-2 activators in Salvia miltiorrhiza.

    PubMed

    Zhang, Hui; Luo, Li-Ping; Song, Hui-Peng; Hao, Hai-Ping; Zhou, Ping; Qi, Lian-Wen; Li, Ping; Chen, Jun

    2014-01-24

    Generation of a high-purity fraction library for efficiently screening active compounds from natural products is challenging because of their chemical diversity and complex matrices. In this work, a strategy combining high-resolution peak fractionation (HRPF) with a cell-based assay was proposed for target screening of bioactive constituents from natural products. In this approach, peak fractionation was conducted under chromatographic conditions optimized for high-resolution separation of the natural product extract. The HRPF approach was automatically performed according to the predefinition of certain peaks based on their retention times from a reference chromatographic profile. The corresponding HRPF database was collected with a parallel mass spectrometer to ensure purity and characterize the structures of compounds in the various fractions. Using this approach, a set of 75 peak fractions on the microgram scale was generated from 4 mg of the extract of Salvia miltiorrhiza. After screening by an ARE-luciferase reporter gene assay, 20 diterpene quinones were selected and identified, and 16 of these compounds were reported to possess novel Nrf2 activation activity. Compared with conventional fixed-time interval fractionation, the HRPF approach could significantly improve the efficiency of bioactive compound discovery and facilitate the uncovering of minor active components. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. A combined application of thermal desorber and gas chromatography to the analysis of gaseous carbonyls with the aid of two internal standards.

    PubMed

    Kim, Ki-Hyun; Anthwal, A; Pandey, Sudhir Kumar; Kabir, Ehsanul; Sohn, Jong Ryeul

    2010-11-01

    In this study, a series of GC calibration experiments were conducted to examine the feasibility of the thermal desorption approach for the quantification of five carbonyl compounds (acetaldehyde, propionaldehyde, butyraldehyde, isovaleraldehyde, and valeraldehyde) in conjunction with two internal standard compounds. The gaseous working standards of carbonyls were calibrated with the aid of thermal desorption as a function of standard concentration and of loading volume. The detection properties were then compared against two types of external calibration data sets derived by the fixed standard volume and fixed standard concentration approaches. According to this comparison, the fixed standard volume-based calibration of carbonyls should be more sensitive and reliable than its fixed standard concentration counterpart. Moreover, the use of an internal standard can improve the analytical reliability of aromatics and some carbonyls to a considerable extent. Our preliminary test on real samples, however, indicates that the performance of internal calibration, when tested using samples of varying dilution ranges, can differ moderately from that derived from standard gases. These results suggest that the reliability of calibration approaches should be examined carefully, with consideration of the interactions between compound-specific properties and the operating conditions of the instrumental setup.
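
    A minimal sketch of what internal-standard calibration computes (all peak areas and concentrations below are hypothetical): the analyte response is normalized by the internal-standard response, so first-order drifts in injection volume or detector sensitivity cancel.

```python
import numpy as np

def internal_std_calibration(conc_analyte, area_analyte, conc_is, area_is):
    """Fit peak-area ratio vs concentration ratio; returns (slope, intercept)."""
    x = np.asarray(conc_analyte, float) / conc_is
    y = np.asarray(area_analyte, float) / np.asarray(area_is, float)
    return np.polyfit(x, y, 1)

def quantify(area_analyte, area_is, conc_is, slope, intercept):
    """Unknown concentration from measured areas via the calibration line."""
    return conc_is * ((area_analyte / area_is) - intercept) / slope

# Example: five calibration levels of an aldehyde against one IS level
slope, icpt = internal_std_calibration(
    [1, 2, 5, 10, 20], [0.9, 2.1, 5.2, 9.8, 20.5],
    conc_is=5.0, area_is=[5.0, 5.1, 4.9, 5.0, 5.2])
```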

  15. Development and in-line validation of a Process Analytical Technology to facilitate the scale up of coating processes.

    PubMed

    Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P

    2013-05-05

    Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a bench-top laboratory analytical method and is usually not implemented in the production process. Concerning application in the production process, many scientific approaches stop at the feasibility-study stage and never make the step to production-scale process applications. The present work puts the scale up of an active coating process into focus, which is a step of highest importance during pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS), a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method, and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing, could be shown. Finally, this method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
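
    The modeling step described here follows a standard PLS pattern; the toy sketch below (scikit-learn, with entirely synthetic spectra standing in for the in-line Raman data) shows the shape of such a model: spectra are regressed onto the coated amount of API, with cross-validation guiding the number of latent variables before the model is transferred to new data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                        # stand-in spectra (samples x wavenumbers)
y = 2.0 * X[:, 100] + rng.normal(scale=0.1, size=60)  # toy coated-amount response

pls = PLSRegression(n_components=3)
print(cross_val_score(pls, X, y, cv=5, scoring="r2"))  # pick n_components by CV
pls.fit(X, y)
predicted = pls.predict(X[:5])                         # endpoint check on new spectra
```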

  16. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
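
    One ingredient of the large-scale layer, the binary rain/no-rain map obtained by thresholding a correlated Gaussian field at a target occupation rate, can be sketched as follows (grid size, correlation lengths, and the occupation rate are illustrative, not the ARAMIS-derived values):

```python
import numpy as np

def gaussian_field(n, lx, ly, dx=1.0, seed=0):
    """Stationary Gaussian field with anisotropic covariance via FFT filtering."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    S = np.exp(-0.5 * ((KX * lx) ** 2 + (KY * ly) ** 2))   # anisotropic spectrum
    g = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * np.sqrt(S)))
    return (g - g.mean()) / g.std()

def rain_mask(field, occupation_rate):
    """Binary field: it rains where the Gaussian field exceeds the quantile
    chosen so the raining fraction matches the occupation rate."""
    return field > np.quantile(field, 1.0 - occupation_rate)

mask = rain_mask(gaussian_field(512, lx=40.0, ly=15.0), occupation_rate=0.12)
```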

  17. Scaling behavior of ground-state energy cluster expansion for linear polyenes

    NASA Astrophysics Data System (ADS)

    Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.

    Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Pariser-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
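
    To make the convergence comparison concrete, a small sketch (SciPy; the energies are synthetic stand-ins, not the paper's values) fits both an inverse-power and an exponential approach to the high-polymer limit; the form with the smaller residual indicates how safely finite-chain energies extrapolate.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, p):       # inverse-power approach to the polymer limit
    return a + b / n**p

def exponential(n, a, b, c):     # exponentially fast approach
    return a + b * np.exp(-c * n)

n = np.arange(2.0, 13.0)                     # chain lengths (illustrative)
e = -1.0 - 0.3 * np.exp(-0.8 * n)            # hypothetical per-unit energies

for model in (power_law, exponential):
    pars, _ = curve_fit(model, n, e, p0=(-1.0, 1.0, 1.0), maxfev=20000)
    sse = np.sum((model(n, *pars) - e) ** 2)
    print(f"{model.__name__}: asymptote={pars[0]:.6f}, SSE={sse:.2e}")
```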

  18. An interactive governance and fish chain approach to fisheries rebuilding: a case study of the Northern Gulf cod in eastern Canada.

    PubMed

    Khan, Ahmed; Chuenpagdee, Ratana

    2014-09-01

    Rebuilding collapsed fisheries is a multifaceted problem, requiring a holistic governance approach rather than technical management fixes. Using the Northern Gulf cod case study in eastern Canada, we illustrate how a "fish chain" framework, drawn from the interactive governance perspective, is particularly helpful in analyzing rebuilding challenges. The analysis demonstrates that factors limiting rebuilding exist along the entire fish chain, i.e., the pre-harvest, harvest, and post-harvest stages. These challenges are embedded in both the ecological and social systems associated with the Northern Gulf cod fisheries, as well as in the governing systems. A comparative analysis of the pre- and post-collapse of the cod fisheries also reveals governance opportunities in rebuilding, which lie in policy interventions such as integrated and ecosystem-based management, livelihood transitional programs, and cross-scale institutional arrangements. Lessons from the Northern Gulf cod case study, especially the missed opportunities to explore alternative governing options during the transition, are valuable for rebuilding other collapsed fisheries.

  19. Reconciling charmonium production and polarization data in the midrapidity region at hadron colliders within the nonrelativistic QCD framework

    NASA Astrophysics Data System (ADS)

    Sun, Zhan; Zhang, Hong-Fei

    2018-04-01

    A thorough study reveals that the only key parameter for ψ (J/ψ, ψ′) polarization at hadron colliders is the ratio ⟨O^ψ(³S₁^[8])⟩ / ⟨O^ψ(³P₀^[8])⟩, if the velocity scaling rule holds. A slight variation of this parameter results in substantial change of the ψ polarization. We find that with equally good description of the yield data, this parameter can vary significantly. Fitting the yield data is therefore incapable of determining this parameter, and consequently, of determining the ψ polarization. We provide a universal approach to fixing the long-distance matrix elements (LDMEs) for J/ψ and ψ′ production. Further, with the existing data, we implement this approach, obtain a favorable set of the LDMEs, and manage to reconcile the charmonium production and polarization experiments, except for two sets of CDF data on J/ψ polarization. Supported by National Natural Science Foundation of China (11405268, 11647113, 11705034)

  20. Spatio-temporal Bayesian model selection for disease mapping

    PubMed Central

    Carroll, R; Lawson, AB; Faes, C; Kirby, RS; Aregay, M; Watjou, K

    2016-01-01

    Spatio-temporal analysis of small area health data often involves choosing a fixed set of predictors prior to the final model fit. In this paper, we propose a spatio-temporal approach to Bayesian model selection that implements model selection for certain areas of the study region as well as certain years in the study timeline. Here, we examine the usefulness of this approach by way of a large-scale simulation study accompanied by a case study. Our results suggest that a special case of the model selection methods, a mixture model allowing a weight parameter to indicate whether the appropriate linear predictor is spatial, spatio-temporal, or a mixture of the two, offers the best option for fitting these spatio-temporal models. In addition, the case study illustrates the effectiveness of this mixture model within the model selection setting by easily accommodating lifestyle, socio-economic, and physical environmental variables to select a predominantly spatio-temporal linear predictor. PMID:28070156

  1. Distance-from-the-wall scaling of turbulent motions in wall-bounded flows

    NASA Astrophysics Data System (ADS)

    Baidya, R.; Philip, J.; Hutchins, N.; Monty, J. P.; Marusic, I.

    2017-02-01

    An assessment of self-similarity in the inertial sublayer is presented by considering the wall-normal velocity, in addition to the streamwise velocity component. The novelty of the current work lies in the inclusion of the second velocity component, made possible by carefully conducted subminiature X-probe experiments to minimise the errors in measuring the wall-normal velocity. We show that not all turbulent stress quantities approach the self-similar asymptotic state at an equal rate as the Reynolds number is increased, with the Reynolds shear stress approaching faster than the streamwise normal stress. These trends are explained by the contributions from attached eddies. Furthermore, the Reynolds shear stress cospectra, through their scaling with the distance from the wall, are used to assess the wall-normal limits where self-similarity applies within the wall-bounded flow. The results are found to be consistent with the recent prediction from the work of Wei et al. ["Properties of the mean momentum balance in turbulent boundary layer, pipe and channel flows," J. Fluid Mech. 522, 303-327 (2005)], Klewicki ["Reynolds number dependence, scaling, and dynamics of turbulent boundary layers," J. Fluids Eng. 132, 094001 (2010)], and others that the self-similar region starts and ends at z+ ~ O(√δ+) and O(δ+), respectively. Below the self-similar region, empirical evidence suggests that eddies responsible for turbulent stresses begin to exhibit distance-from-the-wall scaling at a fixed z+ location; however, they are distorted by viscous forces, which remain a leading-order contribution in the mean momentum balance in the region z+ ≲ O(√δ+), and thus result in a departure from self-similarity.

  2. Transient ensemble dynamics in time-independent galactic potentials

    NASA Astrophysics Data System (ADS)

    Mahon, M. Elaine; Abernathy, Robert A.; Bradley, Brendan O.; Kandrup, Henry E.

    1995-07-01

    This paper summarizes a numerical investigation of the short-time, possibly transient, behaviour of ensembles of stochastic orbits evolving in fixed non-integrable potentials, with the aim of deriving insights into the structure and evolution of galaxies. The simulations involved three different two-dimensional potentials, quite different in appearance. However, despite these differences, ensembles in all three potentials exhibit similar behaviour. This suggests that the conclusions inferred from the simulations are robust, relying only on basic topological properties, e.g., the existence of KAM tori and cantori. Generic ensembles of initial conditions, corresponding to stochastic orbits, exhibit a rapid coarse-grained approach towards a near-invariant distribution on a time-scale much shorter than t_H, although various effects associated with external and/or internal irregularities can drastically accelerate this process. A principal tool in the analysis is the notion of a local Liapounov exponent, which provides a statistical characterization of the overall instability of stochastic orbits over finite time intervals. In particular, there is a precise sense in which confined stochastic orbits are less unstable, with smaller local Liapounov exponents, than are unconfined stochastic orbits.
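
    The local Liapounov exponent used here can be sketched as a finite-time computation: integrate the orbit together with a nearby shadow orbit, and renormalize their phase-space separation each step so it stays infinitesimal. A minimal version (the potential and initial conditions are illustrative, not those of the paper):

```python
import numpy as np

def local_lyapunov(grad_V, x0, v0, T, dt=1e-3, d0=1e-8, seed=0):
    """Finite-time Liapounov exponent for an orbit in a 2D potential.
    grad_V(q) returns dV/dq; a shadow orbit offset by d0 in phase space is
    integrated alongside and renormalized every step."""
    rng = np.random.default_rng(seed)
    x, v = np.array(x0, float), np.array(v0, float)
    dw = rng.normal(size=4)
    dw *= d0 / np.linalg.norm(dw)
    xs, vs = x + dw[:2], v + dw[2:]
    log_sum, n = 0.0, int(T / dt)
    for _ in range(n):
        for q, p in ((x, v), (xs, vs)):       # leapfrog step for both orbits
            p -= 0.5 * dt * grad_V(q)
            q += dt * p
            p -= 0.5 * dt * grad_V(q)
        sep = np.linalg.norm(np.concatenate([xs - x, vs - v]))
        log_sum += np.log(sep / d0)
        xs = x + (xs - x) * (d0 / sep)        # renormalize the offset
        vs = v + (vs - v) * (d0 / sep)
    return log_sum / (n * dt)

# Illustrative non-integrable potential: Henon-Heiles, V = (x²+y²)/2 + x²y - y³/3
grad = lambda q: np.array([q[0] + 2*q[0]*q[1], q[1] + q[0]**2 - q[1]**2])
lam = local_lyapunov(grad, x0=(0.0, 0.3), v0=(0.45, 0.0), T=100.0)
```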

  3. Severe fixed cervical kyphosis treated with circumferential osteotomy and pedicle screw fixation using an anterior-posterior-anterior surgical sequence.

    PubMed

    Yoshihara, Hiroyuki; Abumi, Kuniyoshi; Ito, Manabu; Kotani, Yoshihisa; Sudo, Hideki; Takahata, Masahiko

    2013-11-01

    Surgical treatment for severe circumferentially fixed cervical kyphosis has been challenging. Both anterior and posterior releases are necessary to provide the cervical mobility necessary for fusion in a corrected position. In two case reports, we describe the circumferential osteotomy of anterior-posterior-anterior surgical sequence, and the efficacy of this technique when cervical pedicle screw fixation for severe fixed cervical kyphosis is used. Etiology of fixed cervical kyphosis was unknown in one patient and neurofibromatosis in one patient. Both patients had severe fixed cervical kyphosis as determined by cervical radiographs and underwent circumferential osteotomy and fixation via an anterior-posterior-anterior surgical sequence and correction of kyphosis by pedicle screw fixation. Severe fixed cervical kyphosis was treated successfully by the use of circumferential osteotomy and pedicle screw fixation. The surgical sequence described in this report is a reasonable approach for severe circumferentially fixed cervical kyphosis and short segment fixation can be achieved using pedicle screws. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Low-dose fixed-target serial synchrotron crystallography.

    PubMed

    Owen, Robin L; Axford, Danny; Sherrell, Darren A; Kuo, Anling; Ernst, Oliver P; Schulz, Eike C; Miller, R J Dwayne; Mueller-Werkmeister, Henrike M

    2017-04-01

    The development of serial crystallography has been driven by the sample requirements imposed by X-ray free-electron lasers. Serial techniques are now being exploited at synchrotrons. Using a fixed-target approach to high-throughput serial sampling, it is demonstrated that high-quality data can be collected from myoglobin crystals, allowing room-temperature, low-dose structure determination. The combination of fixed-target arrays and a fast, accurate translation system allows high-throughput serial data collection at high hit rates and with low sample consumption.

  5. Automated system for measuring temperature profiles inside ITS-90 fixed-point cells

    NASA Astrophysics Data System (ADS)

    Hiti, Miha; Bojkovski, Jovan; Batagelj, Valentin; Drnovsek, Janko

    2005-11-01

    The defining fixed points of the International Temperature Scale of 1990 (ITS-90) are temperature reference points for temperature calibration. The measured temperature inside the fixed-point cell depends on thermometer immersion, since measurements are made below the surface of the fixed-point material and the additional effect of the hydrostatic pressure has to be taken into account. Also, the heat flux along the thermometer stem can affect the measured temperature. The paper presents a system that enables accurate and reproducible immersion profile measurements for evaluation of measurement sensitivity and adequacy of thermometer immersion. It makes immersion profile measurements possible, where a great number of repetitions and long measurement periods are required, and reduces the workload on the user for performing such measurements. The system is flexible and portable and was developed for application to existing equipment in the laboratory. Results of immersion profile measurements in a triple point of water fixed-point cell are presented.
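
    The hydrostatic effect mentioned above is, to a good approximation, linear in immersion depth; the sketch below uses the commonly tabulated sensitivity for the water triple point (about -0.73 mK per metre of liquid column; treat the constant as indicative) to show the slope an ideal immersion profile should follow.

```python
DT_DH_WATER = -0.73   # mK per metre of immersion, water triple point (tabulated)

def hydrostatic_offset_mK(depth_m, dt_dh=DT_DH_WATER):
    """Expected temperature offset (mK) at a given depth below the liquid surface."""
    return dt_dh * depth_m

for depth in (0.05, 0.10, 0.20):
    print(f"{depth:4.2f} m -> {hydrostatic_offset_mK(depth):+.3f} mK")
```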

  6. Comparison of human driver dynamics in simulators with complex and simple visual displays and in an automobile on the road

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Klein, R. H.

    1975-01-01

    As part of a comprehensive program exploring driver/vehicle system response in lateral steering tasks, driver/vehicle system describing functions and other dynamic data have been gathered in several milieu. These include a simple fixed base simulator with an elementary roadway delineation only display; a fixed base statically operating automobile with a terrain model based, wide angle projection system display; and a full scale moving base automobile operating on the road. Dynamic data with the two fixed base simulators compared favorably, implying that the impoverished visual scene, lack of engine noise, and simplified steering wheel feel characteristics in the simple simulator did not induce significant driver dynamic behavior variations. The fixed base vs. moving base comparisons showed substantially greater crossover frequencies and phase margins on the road course.

  7. Global nitrogen overload problem grows critical

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moffat, A.S.

    1998-02-13

    This article discusses a global problem due to man's intervention in the biosphere resulting from an increased production and usage of products producing nitrogen compounds which can be fixed in ecosystems. This problem was recognized on small scales even in the 1960s, but recent studies on a more global scale show that the amount of nitrogen compounds in river runoff is strongly related to the use of synthetic fertilizers, fossil-fuel power plants, and automobile emissions. The increased fixed nitrogen load is exceeding the ability of some ecosystems to use or break the compounds down, resulting in a change in the types of flora and fauna which are found to inhabit the ecosystems, and leading to decreased biodiversity.

  8. Validation of the "Quality of Life related to function, aesthetics, socialization, and thoughts about health-behavioural habits (QoLFAST-10)" scale for wearers of implant-supported fixed partial dentures.

    PubMed

    Castillo-Oyagüe, Raquel; Perea, Carmen; Suárez-García, María-Jesús; Río, Jaime Del; Lynch, Christopher D; Preciado, Arelis

    2016-12-01

    To validate the 'Quality of Life related to function, aesthetics, socialization, and thoughts about health-behavioural habits (QoLFAST-10)' questionnaire for assessing the whole concept of oral health-related quality of life (OHRQoL) of implant-supported fixed partial denture (FPD) wearers. 107 patients were assigned to: Group 1 (HP, n=37): fixed-detachable hybrid prostheses (control); Group 2 (C-PD, n=35): cemented partial dentures; and Group 3 (S-PD, n=35): screwed partial dentures. Patients answered the QoLFAST-10 and the Oral Health Impact Profile (OHIP-14sp) scales. Information on global oral satisfaction, socio-demographic, prosthetic, and clinical data was gathered. The psychometric capacity of the QoLFAST-10 was investigated. The correlations between both indices were explored by the Spearman's rank test. The effect of the study variables on the OHRQoL was evaluated by descriptive and non-parametric tests (α=0.05). The QoLFAST-10 was reliable and valid for implant-supported FPD wearers, who attained comparable results regardless of whether the connection system was cement or screws. Both fixed partial groups demonstrated significantly better social, functional, and total satisfaction than did HP wearers with this index. All groups revealed similar aesthetic-related well-being and consciousness about the importance of health-behavioural habits. Several study variables modulated the QoLFAST-10 scores. Hybrid prostheses represent the least predictable treatment option, while cemented and screwed FPDs supplied equal OHRQoL as estimated by the QoLFAST-10 scale. The selection of cemented or screwed FPDs should mainly rely on clinical factors, since no differences in patient satisfaction may be expected between both types of implant rehabilitations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark with fixed number of iterations (iterative) and the operator-splitting (non-iterative) method, is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradation. In case study three, the implicit Newmark with fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalance force is used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
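
    As a rough sketch of the iterative scheme examined here (a minimal SDOF version with an invented softening spring, not the study's structures), the routine below advances implicit Newmark integration with a fixed number of Newton iterations per step and exposes the residual, i.e. the unbalance force the study tracks as an equilibrium-error index.

```python
import numpy as np

def newmark_fixed_iter(m, c, fs, dfs, p, dt, n_steps, n_iter=3,
                       beta=0.25, gamma=0.5):
    """Implicit Newmark with a fixed number of Newton iterations per step.
    fs(u): nonlinear restoring force; dfs(u): tangent stiffness (callables)."""
    u = v = 0.0
    a = (p(0.0) - fs(0.0)) / m
    hist = [(0.0, u)]
    for i in range(1, n_steps + 1):
        t = i * dt
        u_new = u                      # predictor
        for _ in range(n_iter):        # fixed-number iteration (non-adaptive)
            a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                     - (0.5 / beta - 1.0) * a)
            v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
            r = p(t) - m * a_new - c * v_new - fs(u_new)   # unbalance force
            k_eff = m / (beta * dt**2) + gamma * c / (beta * dt) + dfs(u_new)
            u_new += r / k_eff         # Newton correction
        a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
                 - (0.5 / beta - 1.0) * a)
        v_new = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        hist.append((t, u))
    return np.array(hist)

# Example: softening spring under a sine pulse (all values hypothetical)
res = newmark_fixed_iter(m=1.0, c=0.05,
                         fs=lambda u: np.tanh(u), dfs=lambda u: 1.0 / np.cosh(u)**2,
                         p=lambda t: 0.5 * np.sin(2 * np.pi * t), dt=0.01, n_steps=500)
```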

  10. Scaling of Device Variability and Subthreshold Swing in Ballistic Carbon Nanotube Transistors

    NASA Astrophysics Data System (ADS)

    Cao, Qing; Tersoff, Jerry; Han, Shu-Jen; Penumatcha, Ashish V.

    2015-08-01

    In field-effect transistors, the inherent randomness of dopants and other charges is a major cause of device-to-device variability. For a quasi-one-dimensional device such as carbon nanotube transistors, even a single charge can drastically change the performance, making this a critical issue for their adoption as a practical technology. Here we calculate the effect of the random charges at the gate-oxide surface in ballistic carbon nanotube transistors, finding good agreement with the variability statistics in recent experiments. A combination of experimental and simulation results further reveals that these random charges are also a major factor limiting the subthreshold swing for nanotube transistors fabricated on thin gate dielectrics. We then establish that the scaling of the nanotube device uniformity with the gate dielectric, fixed-charge density, and device dimension is qualitatively different from conventional silicon transistors, reflecting the very different device physics of a ballistic transistor with a quasi-one-dimensional channel. The combination of gate-oxide scaling and improved control of fixed-charge density should provide the uniformity needed for large-scale integration of such novel one-dimensional transistors even at extremely scaled device dimensions.

  11. Learning by Doing, Scale Effects, or Neither? Cardiac Surgeons after Residency

    PubMed Central

    Huesch, Marco D

    2009-01-01

    Objective To examine impacts of operating surgeon scale and cumulative experience on postoperative outcomes for patients treated with coronary artery bypass grafts (CABG) by “new” surgeons. Pooled linear, fixed effects panel, and instrumented regressions were estimated. Data Sources The administrative data included comorbidities, procedures, and outcomes for 19,978 adult CABG patients in Florida in 1998–2006, and public data on 57 cardiac surgeons who completed residencies after 1997. Study Design Analysis was at the patient level. Controls for risk, hospital scale and scope, and operating surgeon characteristics were made. Patient choice model instruments were constructed. Experience was estimated allowing for “forgetting” effects. Principal Findings Panel regressions with surgeon fixed effects showed neither surgeon scale nor cumulative volumes significantly impacted mortality nor consistently impacted morbidity. Estimation of “forgetting” suggests that almost all prior experience is depreciated from one quarter to the next. Instruments were strong, but exogeneity of volume was not rejected. Conclusions In postresidency surgeons, no persuasive evidence is found for learning by doing, scale, or selection effects. More research is needed to support the cautious view that, for these “new” cardiac surgeons, patient volume could be redistributed based on realized outcomes without disruption. PMID:19732169

  12. The carbon isotopic composition of ecosystem breath

    NASA Astrophysics Data System (ADS)

    Ehleringer, J.

    2008-05-01

    At the global scale, there are repeatable annual fluctuations in the concentration and isotopic composition of atmospheric carbon dioxide, sometimes referred to as the "breathing of the planet". Vegetation components within ecosystems fix carbon dioxide through photosynthesis into stable organic compounds; simultaneously, both vegetation and heterotrophic components of the ecosystem release previously fixed carbon as respiration. These two-way fluxes of carbon dioxide between the biosphere and the atmosphere impact both the concentration and isotopic composition of carbon dioxide within the convective boundary layer. Over space, the compounding effects of gas exchange activities from ecosystems become reflected in both regional and global changes in the concentration and isotopic composition of atmospheric carbon dioxide. When these two parameters are plotted against each other, there are significant linear relationships between the carbon isotopic composition and the inverse concentration of atmospheric carbon dioxide. At the ecosystem scale, the intercepts of these "Keeling plots" for C3-dominated ecosystems describe the carbon isotope ratio of biospheric gas exchange. Using Farquhar's model, these carbon isotope values can be translated into quantitative measures of the drought-dependent control of photosynthesis by stomata as water availability changes through time. This approach is useful in aggregating the influences of drought across regional landscapes, as it provides a quantitative measure of stomatal influence on photosynthetic gas exchange at ecosystem-to-region scales. Multi-year analyses of the drought-dependent trends across terrestrial ecosystems show a repeated pattern with water stress in all but one C3-ecosystem type. Ecosystems that are dominated by ring-porous trees appear not to exhibit a dynamic stomatal response to water stress, and therefore there is little dependence of the carbon isotope ratio of gas exchange on site water balance. The mechanistic basis for this pattern is defined; the implications of climate change for ring-porous versus diffuse-porous vegetation, and therefore for future atmospheric carbon dioxide isotope-concentration patterns, are discussed.
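
    The "Keeling plot" computation itself is a single linear regression: δ13C is plotted against the inverse CO2 concentration, and the intercept estimates the isotopic signature of the ecosystem source. A minimal sketch with invented nighttime data (values chosen to give a typical C3 intercept near -26 ‰):

```python
import numpy as np

co2 = np.array([385.0, 400.0, 420.0, 450.0, 480.0])       # ppm, hypothetical
d13c = np.array([-8.23, -8.90, -9.71, -10.80, -11.75])    # per mil, hypothetical

# Two-component mixing makes d13C linear in 1/[CO2]; the intercept is the
# isotopic signature of the ecosystem source (respiration).
slope, intercept = np.polyfit(1.0 / co2, d13c, 1)
print(f"ecosystem d13C ~ {intercept:.1f} per mil")        # about -26 for C3
```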

  13. Synthesis and review: Tackling the nitrogen management challenge: from global to local scales

    NASA Astrophysics Data System (ADS)

    Reis, Stefan; Bekunda, Mateete; Howard, Clare M.; Karanja, Nancy; Winiwarter, Wilfried; Yan, Xiaoyuan; Bleeker, Albert; Sutton, Mark A.

    2016-12-01

    One of the ‘grand challenges’ of this age is the anthropogenic impact exerted on the nitrogen cycle. Issues of concern range from an excess of fixed nitrogen resulting in environmental pressures for some regions, while for other regions insufficient fixed nitrogen affects food security and may lead to health risks. To address these issues, nitrogen needs to be managed in an integrated fashion, at a variety of scales (from global to local). Such management has to be based on a thorough understanding of the sources of reactive nitrogen released into the environment, its deposition and effects. This requires a comprehensive assessment of the key drivers of changes in the nitrogen cycle both spatially, at the field, regional and global scale and over time. In this focus issue, we address the challenges of managing reactive nitrogen in the context of food production and its impacts on human and ecosystem health. In addition, we discuss the scope for and design of management approaches in regions with too much and too little nitrogen. This focus issue includes several contributions from authors who participated at the N2013 conference in Kampala in November 2013, where delegates compiled and agreed upon the ‘Kampala Statement-for-Action on Reactive Nitrogen in Africa and Globally’. These contributions further underline scientifically the claims of the ‘Kampala Statement’, that simultaneously reducing pollution and increasing nitrogen available in the food system, by improved nitrogen management offers win-wins for environment, health and food security in both developing and developed economies. The specific messages conveyed in the Kampala Statement focus on improving nitrogen management (I), including the reduction of nitrogen losses from agriculture, industry, transport and energy sectors, as well as improving waste treatment and informing individuals and institutions (II). Highlighting the need for innovation and increased awareness among stakeholders (III) and the identification of policy and technology solutions to tackle global nitrogen management issues (IV), this will enable countries to fulfil their regional and global commitments.

  14. Halo Histories vs. Galaxy Properties at z=0, III: The Properties of Star-Forming Galaxies

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.; Hahn, ChangHoon; Mao, Yao-Yuan; Wetzel, Andrew R.

    2018-05-01

    We measure how the properties of star-forming central galaxies correlate with large-scale environment, δ, measured on 10 h⁻¹ Mpc scales. We use galaxy group catalogs to isolate a robust sample of central galaxies with high purity and completeness. The galaxy properties we investigate are star formation rate (SFR), exponential disk scale length R_exp, and Sersic index of the galaxy light profile, n_S. We find that, at all stellar masses, there is an inverse correlation between SFR and δ, meaning that above-average star-forming centrals live in underdense regions. For n_S and R_exp, there is no correlation with δ at M* ≲ 10^10.5 M⊙, but at higher masses there are positive correlations; a weak correlation with R_exp and a strong correlation with n_S. These data are evidence of assembly bias within the star-forming population. The results for SFR are consistent with a model in which SFR correlates with present-day halo accretion rate, Ṁ_h. In this model, galaxies are assigned to halos using the abundance matching ansatz, which maps galaxy stellar mass onto halo mass. At fixed halo mass, SFR is then assigned to galaxies using the same approach, but Ṁ_h is used to map onto SFR. The best-fit model requires some scatter in the Ṁ_h-SFR relation. The R_exp and n_S measurements are consistent with a model in which both of these quantities are correlated with the spin parameter of the halo, λ. Halo spin does not correlate with δ at low halo masses, but for higher mass halos, high-spin halos live in higher density environments at fixed M_h. Put together with the earlier installments of this series, these data demonstrate that quenching processes have limited correlation with halo formation history, but the growth of active galaxies, as well as other detailed galaxy properties, are influenced by the details of halo assembly.

  15. A comprehensive approach to reactive power scheduling in restructured power systems

    NASA Astrophysics Data System (ADS)

    Shukla, Meera

    Financial constraints, regulatory pressure, and the need for more economical power transfers have increased the loading of interconnected transmission systems. As a consequence, power systems have been operated close to their maximum power transfer capability limits, making the system more vulnerable to voltage instability events. The problem of voltage collapse, characterized by a severe local voltage depression, is generally believed to be associated with inadequate VAr support at key buses. The goal of reactive power planning is to maintain a high level of voltage security through installation of properly sized and located reactive sources and their optimal scheduling. In vertically-operated power systems, the reactive requirement of the system is normally satisfied by using all of its reactive sources. But in different scenarios of restructured power systems, one may consider a fixed amount of reactive power exchange through tie lines. The reviewed literature suggests a need for optimal scheduling of reactive power generation under fixed inter-area reactive power exchange. The present work proposed a novel approach for reactive power source placement and a novel approach for its scheduling. The VAr source placement technique was based on the property of system connectivity. This was followed by the development of an optimal reactive power dispatch formulation that facilitated fixed inter-area tie-line reactive power exchange. This formulation used a Line Flow-Based (LFB) model of power flow analysis and determined the generation schedule for fixed inter-area tie-line reactive power exchange. Different operating scenarios were studied to analyze the impact of the VAr management approach for vertically operated and restructured power systems. System loadability, losses, generation, and the cost of generation were the performance measures used to study the impact of the VAr management strategy. The novel approach was demonstrated on the IEEE 30-bus system.

  16. Identifying and Mitigating the Impact of the Budget Control Act on High Risk Sectors and Tiers of the Defense Industrial Base: Assessment Approach to Industrial Base Risks

    DTIC Science & Technology

    2016-04-30

    products, similar to those found in a bill of material. Figure 3 provides an example of the relationship between sectors, sub-sectors ... defense aircraft. Defense aircraft are divided into three main sub-sectors: fixed-wing, rotary-wing, and unmanned systems. The fixed-wing sub-sector ... Risk Sectors and Tiers of the Defense Industrial Base: Assessment Approach to Industrial Base Risks. Lirio Avilés, Engineer, MIBP, OUSD(AT&L); Sally

  17. Scaling of Sediment Dynamics in a Reach-Scale Laboratory Model of a Sand-Bed Stream with Riparian Vegetation

    NASA Astrophysics Data System (ADS)

    Gorrick, S.; Rodriguez, J. F.

    2011-12-01

    A movable bed physical model was designed in a laboratory flume to simulate both bed and suspended load transport in a mildly sinuous sand-bed stream. Model simulations investigated the impact of different vegetation arrangements along the outer bank to evaluate rehabilitation options. Preserving similitude in the 1:16 laboratory model was very important. In this presentation the scaling approach is outlined, together with the successes and challenges of the strategy. Firstly, a near-bankfull flow event was chosen for laboratory simulation. In nature, bankfull events at the field site deposit new in-channel features but cause only small amounts of bank erosion, so the fixed banks in the model were not a drastic simplification. Next, and as in other studies, flow velocity and turbulence measurements were collected in separate fixed-bed experiments. The scaling of flow in these experiments was maintained simply by matching the Froude number and roughness levels. The subsequent movable bed experiments were then conducted under similar hydrodynamic conditions. In nature, the sand-bed stream is fairly typical; in high flows most sediment transport occurs in suspension and migrating dunes cover the bed. To achieve similar dynamics in the model, equivalent values of the dimensionless bed shear stress and the particle Reynolds number were important. Close values of the two dimensionless numbers were achieved with lightweight sediments (R=0.3), including coal and apricot pips, with a particle size distribution similar to that of the field site. Overall the movable bed experiments were able to replicate the dominant sediment dynamics present in the stream during a bankfull flow and yielded relevant information for the analysis of the effects of riparian vegetation. There was a potential conflict in the strategy, in that grain roughness was exaggerated with respect to nature. The advantage of this strategy is that although grain roughness is exaggerated, the similarity of bedforms and resulting drag can return levels of roughness similar to those in the field site.
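
    A sketch of the arithmetic behind such similitude choices (generic numbers, not the study's): Froude scaling fixes how velocity, time, and discharge map between prototype and a 1:16 model, while matching the dimensionless bed shear stress with lightweight grains (submerged specific gravity R = 0.3) lets a larger model grain reproduce the prototype transport regime.

```python
import math

LAMBDA = 16.0   # prototype-to-model length ratio for a 1:16 model

# Froude similarity (V ~ sqrt(gL)): velocity and time scale with sqrt(lambda),
# discharge with lambda**2.5.
scale = {
    "length": LAMBDA,
    "velocity": math.sqrt(LAMBDA),
    "time": math.sqrt(LAMBDA),
    "discharge": LAMBDA ** 2.5,
}

def to_model(quantity, prototype_value):
    """Froude-scaled model value of a prototype quantity."""
    return prototype_value / scale[quantity]

def shields(tau, R, d, g=9.81, rho=1000.0):
    """Dimensionless bed shear stress tau* = tau / (rho R g d); lightweight
    grains (small R) let a larger model grain reach the prototype tau*."""
    return tau / (rho * R * g * d)

print(to_model("velocity", 1.2))         # 1.2 m/s prototype -> 0.3 m/s model
print(shields(tau=0.5, R=0.3, d=0.001))  # lightweight grain, d in metres
```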

  18. Concentration and saturation effects of tethered polymer chains on adsorbing surfaces

    NASA Astrophysics Data System (ADS)

    Descas, Radu; Sommer, Jens-Uwe; Blumen, Alexander

    2006-12-01

    We consider end-grafted chains at an adsorbing surface under good solvent conditions using Monte Carlo simulations and scaling arguments. Grafting of chains allows us to fix the surface concentration and to study a wide range of surface concentrations from the undersaturated state of the surface up to the brushlike regime. The average extension of single chains in the direction parallel and perpendicular to the surface is analyzed using scaling arguments for the two-dimensional semidilute surface state according to Bouchaud and Daoud [J. Phys. (Paris) 48, 1991 (1987)]. We find good agreement with the scaling predictions for the scaling in the direction parallel to the surface and for surface concentrations much below the saturation concentration (dense packing of adsorption blobs). Increasing the grafting density we study the saturation effects and the oversaturation of the adsorption layer. In order to account for the effect of excluded volume on the adsorption free energy we introduce a new scaling variable related with the saturation concentration of the adsorption layer (saturation scaling). We show that the decrease of the single chain order parameter (the fraction of adsorbed monomers on the surface) with increasing concentration, being constant in the ideal semidilute surface state, is properly described by saturation scaling only. Furthermore, the simulation results for the chains' extension from higher surface concentrations up to the oversaturated state support the new scaling approach. The oversaturated state can be understood using a geometrical model which assumes a brushlike layer on top of a saturated adsorption layer. We provide evidence that adsorbed polymer layers are very sensitive to saturation effects, which start to influence the semidilute surface scaling even much below the saturation threshold.

  19. Linking Item Parameters to a Base Scale. ACT Research Report Series, 2009-2

    ERIC Educational Resources Information Center

    Kang, Taehoon; Petersen, Nancy S.

    2009-01-01

    This paper compares three methods of item calibration--concurrent calibration, separate calibration with linking, and fixed item parameter calibration--that are frequently used for linking item parameters to a base scale. Concurrent and separate calibrations were implemented using BILOG-MG. The Stocking and Lord (1983) characteristic curve method…
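
    A compact sketch of the Stocking and Lord characteristic-curve criterion (SciPy; the 3PL parameterization and ability grid are illustrative, and the item-parameter arrays cover the common items): the linking constants A and B are chosen to minimize the squared difference between the base-scale test characteristic curve and the transformed new-form curve.

```python
import numpy as np
from scipy.optimize import minimize

def tcc(theta, a, b, c):
    """Test characteristic curve of a 3PL item set at abilities theta."""
    p = c + (1 - c) / (1 + np.exp(-1.7 * a * (theta[:, None] - b)))
    return p.sum(axis=1)

def stocking_lord(a_new, b_new, c_new, a_base, b_base, c_base):
    """Linking constants (A, B) minimising the squared TCC difference of the
    common items; new-form parameters map as a -> a/A, b -> A*b + B."""
    theta = np.linspace(-4.0, 4.0, 41)
    base = tcc(theta, a_base, b_base, c_base)

    def loss(ab):
        A, B = ab
        return np.sum((base - tcc(theta, a_new / A, A * b_new + B, c_new)) ** 2)

    return minimize(loss, x0=[1.0, 0.0], method="Nelder-Mead").x
```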

  20. Temperature Scales: Celsius, Fahrenheit, Kelvin, Reamur, and Romer.

    ERIC Educational Resources Information Center

    Romer, Robert H.

    1982-01-01

    Traces the history and development of temperature scales, which began with the 17th-century invention of the liquid-in-glass thermometer. Focuses on the work of Ole Rømer, Daniel Fahrenheit, René-Antoine de Réaumur, Anders Celsius, and William Thomson (Lord Kelvin). Includes experimental work and consideration of high/low fixed points on the…
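
    The five scales named in the article differ only by linear maps fixed by their choice of reference points; a small sketch using the standard modern relations converts a Celsius reading to each.

```python
def from_celsius(c):
    """Convert a Celsius reading to the five scales discussed in the article."""
    return {
        "Celsius":    c,
        "Fahrenheit": c * 9 / 5 + 32,
        "Kelvin":     c + 273.15,
        "Reaumur":    c * 4 / 5,          # water boils at 80 degrees Reaumur
        "Romer":      c * 21 / 40 + 7.5,  # water freezes at 7.5 degrees Romer
    }

print(from_celsius(100))   # boiling point of water on each scale
```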

  1. Effects of random initial conditions on the dynamical scaling behaviors of a fixed-energy Manna sandpile model in one dimension

    NASA Astrophysics Data System (ADS)

    Kwon, Sungchul; Kim, Jin Min

    2015-01-01

    For a fixed-energy (FE) Manna sandpile model in one dimension, we investigate the effects of random initial conditions on the dynamical scaling behavior of an order parameter. In the FE Manna model, the density ρ of total particles is conserved, and an absorbing phase transition occurs at ρc as ρ varies. In this work, we show that, for a given ρ , random initial distributions of particles lead to the domain structure in which domains with particle densities higher and lower than ρc alternate with each other. In the domain structure, the dominant length scale is the average domain length, which increases via the coalescence of adjacent domains. At ρc, the domain structure slows down the decay of an order parameter and also causes anomalous finite-size effects, i.e., power-law decay followed by an exponential one before the quasisteady state. As a result, the interplay of particle conservation and random initial conditions causes the domain structure, which is the origin of the anomalous dynamical scaling behaviors for random initial conditions.
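
    A minimal simulation of the model described here (lattice size, run length, and density are illustrative; the precise critical density ρc should be taken from the literature) shows the ingredients: conserved particle number, a random initial placement, parallel toppling of active sites, and the active-site density as the order parameter.

```python
import numpy as np

def fe_manna_1d(L, rho, steps, seed=0):
    """Fixed-energy Manna sandpile on a ring: sites with >= 2 particles are
    active; each topple moves 2 particles to randomly chosen neighbours.
    Returns the time series of the active-site density (order parameter)."""
    rng = np.random.default_rng(seed)
    lattice = np.bincount(rng.integers(0, L, int(rho * L)), minlength=L)
    activity = []
    for _ in range(steps):
        active = np.flatnonzero(lattice >= 2)
        activity.append(active.size / L)
        if active.size == 0:              # absorbing state reached
            break
        for site in active:               # parallel update of the snapshot
            lattice[site] -= 2
            for _ in range(2):
                lattice[(site + rng.choice((-1, 1))) % L] += 1
    return np.array(activity)

act = fe_manna_1d(L=2048, rho=0.92, steps=5000)   # rho near (not at) rho_c
```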

  2. Practical experience with full-scale structured sheet media (SSM) integrated fixed-film activated sludge (IFAS) systems for nitrification.

    PubMed

    Li, Hua; Zhu, Jia; Flamming, James J; O'Connell, Jack; Shrader, Michael

    2015-01-01

    Many wastewater treatment plants in the USA, which were originally designed as secondary treatment systems with no or partial nitrification requirements, are facing increased flows, loads, and more stringent ammonia discharge limits. Plant expansion is often not cost-effective due to either high construction costs or lack of land. Under these circumstances, integrated fixed-film activated sludge (IFAS) systems, using both suspended growth and biofilms that grow attached to fixed plastic structured sheet media, have been found to be a viable solution to these challenges. Multiple plants have been retrofitted with such IFAS systems in the past few years. The system has proven to be efficient and reliable in achieving not only consistent nitrification, but also enhanced biochemical oxygen demand removal and improved sludge settling characteristics. This paper presents long-term practical experience with IFAS system design, operation and maintenance, and performance at three full-scale plants with distinct processes: a trickling filter/solids contact process, a conventional plug-flow activated sludge process, and an extended aeration process.

  3. Spatiotemporal Determinants of Urban Leptospirosis Transmission: Four-Year Prospective Cohort Study of Slum Residents in Brazil

    PubMed Central

    Hagan, José E.; Moraga, Paula; Costa, Federico; Capian, Nicolas; Ribeiro, Guilherme S.; Wunder, Elsio A.; Felzemburgh, Ridalva D. M.; Reis, Renato B.; Nery, Nivison; Santana, Francisco S.; Fraga, Deborah; dos Santos, Balbino L.; Santos, Andréia C.; Queiroz, Adriano; Tassinari, Wagner; Carvalho, Marilia S.; Reis, Mitermayer G.; Diggle, Peter J.; Ko, Albert I.

    2016-01-01

    Background Rat-borne leptospirosis is an emerging zoonotic disease in urban slum settlements for which there are no adequate control measures. The challenge in elucidating risk factors and informing approaches for prevention is the complex and heterogeneous environment within slums, which vary at fine spatial scales and influence transmission of the bacterial agent. Methodology/Principal Findings We performed a prospective study of 2,003 slum residents in the city of Salvador, Brazil during a four-year period (2003–2007) and used a spatiotemporal modelling approach to delineate the dynamics of leptospiral transmission. Household interviews and Geographical Information System surveys were performed annually to evaluate risk exposures and environmental transmission sources. We completed annual serosurveys to ascertain leptospiral infection based on serological evidence. Among the 1,730 (86%) individuals who completed at least one year of follow-up, the infection rate was 35.4 (95% CI, 30.7–40.6) per 1,000 annual follow-up events. Male gender, illiteracy, and age were independently associated with infection risk. Environmental risk factors included rat infestation (OR 1.46, 95% CI, 1.00–2.16), contact with mud (OR 1.57, 95% CI 1.17–2.17) and lower household elevation (OR 0.92 per 10m increase in elevation, 95% CI 0.82–1.04). The spatial distribution of infection risk was highly heterogeneous and varied across small scales. Fixed effects in the spatiotemporal model accounted for the majority of the spatial variation in risk, but there was a significant residual component that was best explained by the spatial random effect. Although infection risk varied between years, the spatial distribution of risk associated with fixed and random effects did not vary temporally. Specific “hot-spots” consistently had higher transmission risk during study years. Conclusions/Significance The risk for leptospiral infection in urban slums is determined in large part by structural features, both social and environmental. Our findings indicate that topographic factors such as household elevation and inadequate drainage increase risk by promoting contact with mud and suggest that the soil-water interface serves as the environmental reservoir for spillover transmission. The use of a spatiotemporal approach allowed the identification of geographic outliers with unexplained risk patterns. This approach, in addition to guiding targeted community-based interventions and identifying new hypotheses, may have general applicability towards addressing environmentally-transmitted diseases that have emerged in complex urban slum settings. PMID:26771379

  4. Brownian motion on random dynamical landscapes

    NASA Astrophysics Data System (ADS)

    Suñé Simon, Marc; Sancho, José María; Lindenberg, Katja

    2016-03-01

    We present a study of overdamped Brownian particles moving on a random landscape of dynamic and deformable obstacles (spatio-temporal disorder). The obstacles move randomly, assemble, and dissociate following their own dynamics. This landscape may account for a soft matter or liquid environment in which large obstacles, such as macromolecules and organelles in the cytoplasm of a living cell, or colloids or polymers in a liquid, move slowly leading to crowding effects. This representation also constitutes a novel approach to the macroscopic dynamics exhibited by active matter media. We present numerical results on the transport and diffusion properties of Brownian particles under this disorder biased by a constant external force. The landscape dynamics are characterized by a Gaussian spatio-temporal correlation, with fixed time and spatial scales, and controlled obstacle concentrations.
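
    The setup lends itself to a one-dimensional toy version (all parameters invented): an overdamped particle driven by a constant force through Gaussian bumps whose centres themselves diffuse, integrated with an Euler-Maruyama step; the drift and spread of the trajectory are the transport and diffusion observables studied here.

```python
import numpy as np

def simulate(n_steps=20000, dt=1e-3, F=0.5, kT=1.0, n_obs=50, box=20.0,
             amp=2.0, sigma=0.5, D_obs=0.05, seed=1):
    """Overdamped particle (unit friction) pushed by force F across a ring of
    Gaussian bumps whose centres diffuse: spatio-temporal disorder in 1D."""
    rng = np.random.default_rng(seed)
    x, obs = 0.0, rng.uniform(0.0, box, n_obs)
    traj = np.empty(n_steps)
    for i in range(n_steps):
        d = (x - obs + box / 2) % box - box / 2          # periodic distances
        f_land = np.sum(amp * d / sigma**2 * np.exp(-d**2 / (2 * sigma**2)))
        x += (F + f_land) * dt + np.sqrt(2 * kT * dt) * rng.normal()
        obs = (obs + np.sqrt(2 * D_obs * dt) * rng.normal(size=n_obs)) % box
        traj[i] = x
    return traj

traj = simulate()
v_drift = np.polyfit(np.arange(traj.size) * 1e-3, traj, 1)[0]  # mean drift velocity
```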

  5. Short-Time Dynamics of the Random n-Vector Model

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Li, Zhi-Bing; Fang, Hai; He, Shun-Shan; Situ, Shu-Ping

    2001-11-01

    Short-time critical behavior of the random n-vector model is studied by the theoretic renormalization-group approach. Asymptotic scaling laws are studied within an expansion in ε = 4 − d for n ≠ 1 and in √ε for n = 1, respectively. In d < 4, the initial slip exponents θ′ for the order parameter and θ for the response function are calculated up to second order in ε = 4 − d for n ≠ 1 and in √ε for n = 1 at the random fixed point, respectively. Our results show that the random impurities exert a strong influence on the short-time dynamics for d < 4 and n

  6. Future evolution in a backreaction model and the analogous scalar field cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Amna; Majumdar, A.S., E-mail: amnaalig@gmail.com, E-mail: archan@bose.res.in

    We investigate the future evolution of the universe using the Buchert framework for averaged backreaction in the context of a two-domain partition of the universe. We show that this approach allows for the possibility of the global acceleration vanishing at a finite future time, provided that none of the subdomains accelerate individually. The model at large scales is analogously described in terms of a homogeneous scalar field emerging with a potential that is fixed and free from phenomenological parametrization. The dynamics of this scalar field is explored in the analogous FLRW cosmology. We use observational data from Type Ia Supernovae, Baryon Acoustic Oscillations, and the Cosmic Microwave Background to constrain the parameters of the model for a viable cosmology, providing the corresponding likelihood contours.

  7. The effect of surface tension on steadily translating bubbles in an unbounded Hele-Shaw cell

    PubMed Central

    2017-01-01

    New numerical solutions to the so-called selection problem for one and two steadily translating bubbles in an unbounded Hele-Shaw cell are presented. Our approach relies on conformal mapping which, for the two-bubble problem, involves the Schottky-Klein prime function associated with an annulus. We show that a countably infinite number of solutions exist for each fixed value of dimensionless surface tension, with the bubble shapes becoming more exotic as the solution branch number increases. Our numerical results suggest that a single solution is selected in the limit that surface tension vanishes, with the scaling between the bubble velocity and surface tension being different to the well-studied problems for a bubble or a finger propagating in a channel geometry. PMID:28588410

  8. Nutrient Limitation of Native and Invasive N2-Fixing Plants in Northwest Prairies

    PubMed Central

    Thorpe, Andrea S.; Perakis, Steven; Catricala, Christina; Kaye, Thomas N.

    2013-01-01

    Nutrient rich conditions often promote plant invasions, yet additions of non-nitrogen (N) nutrients may provide a novel approach for conserving native symbiotic N-fixing plants in otherwise N-limited ecosystems. Lupinus oreganus is a threatened N-fixing plant endemic to prairies in western Oregon and southwest Washington (USA). We tested the effect of non-N fertilizers on the growth, reproduction, tissue N content, and stable isotope δ15N composition of Lupinus at three sites that differed in soil phosphorus (P) and N availability. We also examined changes in other Fabaceae (primarily Vicia sativa and V. hirsuta) and cover of all plant species. Variation in background soil P and N availability shaped patterns of nutrient limitation across sites. Where soil P and N were low, P additions increased Lupinus tissue N and altered foliar δ15N, suggesting P limitation of N fixation. Where soil P was low but N was high, P addition stimulated growth and reproduction in Lupinus. At a third site, with higher soil P, only micro- and macronutrient fertilization without N and P increased Lupinus growth and tissue N. Lupinus foliar δ15N averaged −0.010‰ across all treatments and varied little with tissue N, suggesting consistent use of fixed N. In contrast, foliar δ15N of Vicia spp. shifted towards 0‰ as tissue N increased, suggesting that conditions fostering N fixation may benefit these exotic species. Fertilization increased cover, N fixation, and tissue N of non-target, exotic Fabaceae, but overall plant community structure shifted at only one site, and only after the dominant Lupinus was excluded from analyses. Our finding that non-N fertilization increased the performance of Lupinus with few community effects suggests a potential strategy to aid populations of threatened legume species. The increase in exotic Fabaceae species that occurred with fertilization further suggests that monitoring and adaptive management should accompany any large scale applications. PMID:24386399

  9. Nutrient limitation of native and invasive N2-fixing plants in northwest prairies

    USGS Publications Warehouse

    Thorpe, Andrea S.; Perakis, Steven S.; Catricala, Christina; Kaye, Thomas N.

    2013-01-01

    Nutrient rich conditions often promote plant invasions, yet additions of non-nitrogen (N) nutrients may provide a novel approach for conserving native symbiotic N-fixing plants in otherwise N-limited ecosystems. Lupinus oreganus is a threatened N-fixing plant endemic to prairies in western Oregon and southwest Washington (USA). We tested the effect of non-N fertilizers on the growth, reproduction, tissue N content, and stable isotope δ15N composition of Lupinus at three sites that differed in soil phosphorus (P) and N availability. We also examined changes in other Fabaceae (primarily Vicia sativa and V. hirsuta) and cover of all plant species. Variation in background soil P and N availability shaped patterns of nutrient limitation across sites. Where soil P and N were low, P additions increased Lupinus tissue N and altered foliar δ15N, suggesting P limitation of N fixation. Where soil P was low but N was high, P addition stimulated growth and reproduction in Lupinus. At a third site, with higher soil P, only micro- and macronutrient fertilization without N and P increased Lupinus growth and tissue N. Lupinus foliar δ15N averaged −0.010‰ across all treatments and varied little with tissue N, suggesting consistent use of fixed N. In contrast, foliar δ15N of Vicia spp. shifted towards 0‰ as tissue N increased, suggesting that conditions fostering N fixation may benefit these exotic species. Fertilization increased cover, N fixation, and tissue N of non-target, exotic Fabaceae, but overall plant community structure shifted at only one site, and only after the dominant Lupinus was excluded from analyses. Our finding that non-N fertilization increased the performance of Lupinus with few community effects suggests a potential strategy to aid populations of threatened legume species. The increase in exotic Fabaceae species that occurred with fertilization further suggests that monitoring and adaptive management should accompany any large scale applications.

  10. SU-E-T-539: Fixed Versus Variable Optimization Points in Combined-Mode Modulated Arc Therapy Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kainz, K; Prah, D; Ahunbay, E

    2014-06-01

    Purpose: A novel modulated arc therapy technique, mARC, enables superposition of step-and-shoot IMRT segments upon a subset of the optimization points (OPs) of a continuous-arc delivery. We compare two approaches to mARC planning: one with the number of OPs fixed throughout optimization, and another where the planning system determines the number of OPs in the final plan, subject to an upper limit defined at the outset. Methods: Fixed-OP mARC planning was performed for representative cases using Panther v. 5.01 (Prowess, Inc.), while variable-OP mARC planning used Monaco v. 5.00 (Elekta, Inc.). All Monaco planning used an upper limit of 91 OPs; those OPs with minimal MU were removed during optimization. Plans were delivered, and delivery times recorded, on a Siemens Artiste accelerator using a flat 6MV beam with 300 MU/min rate. Dose distributions measured using ArcCheck (Sun Nuclear Corporation, Inc.) were compared with the plan calculation; the two were deemed consistent if they agreed to within 3.5% in absolute dose and 3.5 mm in distance-to-agreement among > 95% of the diodes within the direct beam. Results: Example cases included a prostate and a head-and-neck planned with a single arc and fraction doses of 1.8 and 2.0 Gy, respectively. Aside from slightly more uniform target dose for the variable-OP plans, the DVHs for the two techniques were similar. For the fixed-OP technique, the number of OPs was 38 and 39, and the delivery time was 228 and 259 seconds, respectively, for the prostate and head-and-neck cases. For the final variable-OP plans, there were 91 and 85 OPs, and the delivery time was 296 and 440 seconds, correspondingly longer than for fixed-OP. Conclusion: For mARC, both the fixed-OP and variable-OP approaches produced comparable-quality plans whose delivery was successfully verified. To keep delivery time per fraction short, a fixed-OP planning approach is preferred.
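    A brute-force toy version of the stated agreement criterion (3.5% absolute dose or 3.5 mm distance-to-agreement), assuming Python; this is an illustration, not the ArcCheck vendor analysis, and the square search window approximates the circular DTA region.

```python
import numpy as np

def pass_rate(measured, planned, spacing_mm, dose_tol=0.035, dta_mm=3.5):
    """Fraction of points meeting the dose OR distance-to-agreement criterion."""
    ny, nx = measured.shape
    tol = dose_tol * planned.max()          # 3.5% of maximum planned dose
    r = int(np.ceil(dta_mm / spacing_mm))   # search radius in grid cells
    passed = 0
    for i in range(ny):
        for j in range(nx):
            if abs(measured[i, j] - planned[i, j]) <= tol:
                passed += 1
                continue
            ilo, ihi = max(0, i - r), min(ny, i + r + 1)
            jlo, jhi = max(0, j - r), min(nx, j + r + 1)
            if np.any(np.abs(planned[ilo:ihi, jlo:jhi] - measured[i, j]) <= tol):
                passed += 1                 # agreement found within ~dta_mm
    return passed / (ny * nx)

# example: two synthetic dose planes on a 2 mm grid
rng = np.random.default_rng(0)
planned = rng.uniform(0.5, 2.0, (40, 40))
measured = planned + rng.normal(0.0, 0.02, (40, 40))
print(f"pass rate: {pass_rate(measured, planned, spacing_mm=2.0):.1%}")
```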

  11. SU-F-T-124: Radiation Biological Equivalent Presentations OfLEM-1 and MKM Approaches in the Carbon-Ion Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsi, W; Jiang, G; Sheng, Y

    Purpose: To study the correlations of the radiation biological equivalent doses (BED) along depth and lateral distance between the LEM-1 and MKM approaches. Methods: In the NIRS-MKM (Microdosimetric Kinetic Model) approach, the prescribed BED, referred to as the C-Eq dose, aims to represent the relative biological effectiveness (RBE) of carbon ions of different energies at a fixed 10% survival level of the HCG cell with respect to conventional X-rays. Instead of a fixed 10% survival level, the BED dose of the LEM-1 (Local Effect Model) approach, referred to as the X-Eq dose, aims to represent the RBE over the whole survival curve of a chordoma-like cell with an alpha/beta ratio of 2.0. The relationship of physical dose as a function of C-Eq and X-Eq doses was investigated along depth and lateral distance for various sizes of cubic targets in water irradiated by carbon ions. Results: At the center of each cubic target, the trends between physical and C-Eq or X-Eq doses can be described by linear and second-order polynomial functions, respectively. The fitted functions can then be used to calculate a scaling factor between C-Eq and X-Eq doses that yields similar physical doses. With C-Eq and X-Eq doses equalized at the depth of the target center, X-Eq is over-estimated relative to C-Eq before the target center and under-estimated after it. Near the distal edge along depth, a sharp rise of the RBE value is observed for X-Eq, but a sharp drop is observed for C-Eq. For lateral locations near and just outside the 50% dose level, a sharp rise of the RBE value is also seen for X-Eq, while C-Eq shows only a minor increase followed by a fast drop. Conclusion: An analytical function to model the differences between the C-Eq and X-Eq doses along depth and lateral distance needs to be further investigated, in order to explain the varied clinical outcomes of specific cancers treated using the two different approaches to calculating BED doses.
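    For orientation, both BED definitions build on the linear-quadratic survival model; the relations below are the generic textbook forms (our gloss, not the internal LEM or MKM machinery):

```latex
% Linear-quadratic survival and RBE-weighted (biological equivalent) dose:
S(D) = e^{-\alpha D - \beta D^{2}}, \qquad
\mathrm{RBE} = \frac{D_{\mathrm{X\text{-}ray}}}{D_{\mathrm{ion}}}\Big|_{\text{iso-effect}}, \qquad
D_{\mathrm{BED}} = \mathrm{RBE} \times D_{\mathrm{phys}} .
```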

  12. Debt-maturity structures should match risk preferences.

    PubMed

    Gapenski, L C

    1999-12-01

    Key to any debt-maturity matching strategy is financing assets with the appropriate debt structure. Financial managers need to establish an optimal capital structure and then choose the best maturity-matching structure for their debt. Two maturity-matching strategies that are available to healthcare financial managers are the accounting approach and the finance approach. The accounting approach, which defines asset maturities as current or fixed, is a riskier financing strategy than the finance approach, which defines asset maturities as permanent or temporary. The added risk occurs because of the accounting approach's heavy reliance on short-term debt. The accounting approach offers the potential for lower costs at the expense of higher risk. Healthcare financial managers who believe the financing function should support the organization's operations without adding undue risk should use the finance approach to maturity matching. Asset maturities in those organizations then should be considered permanent or temporary rather than current or fixed, and the debt-maturity structure should reflect this.

  13. Effect of Fixed Versus Adjusted Transcutaneous Electrical Nerve Stimulation Amplitude on Chronic Mechanical Low Back Pain.

    PubMed

    Elserty, Noha; Kattabei, Omaima; Elhafez, Hytham

    2016-07-01

    This study aimed to investigate the effect of adjusting pulse amplitude of transcutaneous electrical nerve stimulation versus fixed pulse amplitude in treatment of chronic mechanical low back pain. Randomized clinical trial. El-sahel Teaching Hospital, Egypt. Forty-five patients with chronic low back pain assigned to three equal groups. Their ages ranged from 20 to 50 years. The three groups received the same exercise program. Group A received transcutaneous electrical nerve stimulation with fixed pulse amplitude for 40 minutes. Group B received transcutaneous electrical nerve stimulation with adjusted pulse amplitude for 40 minutes, with the pulse amplitude adjusted every 5 minutes. Group C received exercises only. Treatment sessions were applied three times per week for 4 weeks for the three groups. A visual analogue scale was used to assess pain severity, the Oswestry Disability Index was used to assess functional level, and a dual inclinometer was used to measure lumbar range of motion. Evaluations were performed before and after treatment. Visual analogue scale, Oswestry Disability Index, and back range of motion significantly differed between the two groups that received transcutaneous electrical nerve stimulation and the control group and did not significantly differ between fixed and adjusted pulse amplitude of transcutaneous electrical nerve stimulation. Adjusting pulse amplitude of transcutaneous electrical nerve stimulation does not produce a difference in the effect of transcutaneous electrical nerve stimulation used to treat chronic low back pain.

  14. Botulinum Toxin in Parkinson Disease Tremor: A Randomized, Double-Blind, Placebo-Controlled Study With a Customized Injection Approach.

    PubMed

    Mittal, Shivam Om; Machado, Duarte; Richardson, Diana; Dubey, Divyanshu; Jabbari, Bahman

    2017-09-01

    In essential tremor and Parkinson disease (PD) tremor, administration of onabotulinumtoxinA via a fixed injection approach improves the tremor, but many patients (30%-70%) develop moderate to severe hand weakness, limiting the use of onabotulinumtoxinA in clinical practice. To evaluate the safety and efficacy of incobotulinumtoxinA (IncoA) injection for the treatment of tremor in PD. In this double-blind, placebo-controlled, crossover trial, 30 patients each received 7 to 12 (mean, 9) IncoA injections into hand and forearm muscles using a customized approach. The study was performed from June 1, 2012, through June 30, 2015, and participants were followed for 24 weeks. Treatment efficacy was evaluated by the tremor subsets of the Unified Parkinson's Disease Rating Scale and the Patient Global Impression of Change 4 and 8 weeks after each of the 2 sets of treatments. Hand strength was assessed using an ergometer. There was a statistically significant improvement in clinical rating scores of rest tremor and tremor severity 4 and 8 weeks after the IncoA injection and of action/postural tremor at 8 weeks. There was a significant improvement in patient perception of improvement at 4 and 8 weeks in the IncoA group. There was no statistically significant difference in grip strength at 4 weeks between the 2 groups. Injection of IncoA via a customized approach improved PD tremor on a clinical scale and patient perception, with a low occurrence of significant hand weakness. clinicaltrials.gov Identifier: NCT02419313. Copyright © 2017 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  15. Force-Induced Rupture of a DNA Duplex: From Fundamentals to Force Sensors.

    PubMed

    Mosayebi, Majid; Louis, Ard A; Doye, Jonathan P K; Ouldridge, Thomas E

    2015-12-22

    The rupture of double-stranded DNA under stress is a key process in biophysics and nanotechnology. In this article, we consider the shear-induced rupture of short DNA duplexes, a system that has been given new importance by recently designed force sensors and nanotechnological devices. We argue that rupture must be understood as an activated process, where the duplex state is metastable and the strands will separate in a finite time that depends on the duplex length and the force applied. Thus, the critical shearing force required to rupture a duplex depends strongly on the time scale of observation. We use simple models of DNA to show that this approach naturally captures the observed dependence of the force required to rupture a duplex within a given time on duplex length. In particular, this critical force is zero for the shortest duplexes, before rising sharply and then plateauing in the long length limit. The prevailing approach, based on identifying when the presence of each additional base pair within the duplex is thermodynamically unfavorable rather than allowing for metastability, does not predict a time-scale-dependent critical force and does not naturally incorporate a critical force of zero for the shortest duplexes. We demonstrate that our findings have important consequences for the behavior of a new force-sensing nanodevice, which operates in a mixed mode that interpolates between shearing and unzipping. At a fixed time scale and duplex length, the critical force exhibits a sigmoidal dependence on the fraction of the duplex that is subject to shearing.
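    One standard way to make "rupture as an activated process" quantitative is the Bell-Evans picture, sketched below as a plausible gloss (the paper itself works with coarse-grained DNA simulations): a metastable duplex under force F escapes at a force-dependent rate.

```latex
% Bell-Evans activated-escape gloss; x^{\dagger} is the distance to the
% transition state along the pulling coordinate:
k_{\mathrm{off}}(F) = k_0 \, e^{F x^{\dagger} / k_B T}, \qquad
\langle \tau(F) \rangle = k_{\mathrm{off}}(F)^{-1},
\qquad
F_c(t_{\mathrm{obs}}) \approx \frac{k_B T}{x^{\dagger}}
  \ln\!\frac{1}{k_0 \, t_{\mathrm{obs}}} .
```

    On this picture the critical force is manifestly observation-time dependent, and it drops to zero once k₀·t_obs ≥ 1, consistent with the zero critical force reported above for the shortest duplexes.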

  16. Acoustic travel time gauges for in-situ determination of pressure and temperature in multi-anvil apparatus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xuebing; Chen, Ting; Qi, Xintong

    In this study, we developed a new method for in-situ pressure determination in multi-anvil, high-pressure apparatus using an acoustic travel time approach within the framework of acoustoelasticity. The ultrasonic travel times of polycrystalline Al₂O₃ were calibrated against the NaCl pressure scale up to 15 GPa and 900 °C in a Kawai-type double-stage multi-anvil apparatus in conjunction with synchrotron X-radiation, thereby providing a convenient and reliable gauge for pressure determination at ambient and high temperatures. The pressures derived from this new travel time method are in excellent agreement with those from the fixed-point methods. Application of this new pressure gauge in an offline experiment revealed a remarkable agreement of the densities of coesite with those from previous single-crystal compression studies under hydrostatic conditions, thus providing strong validation for the current travel time pressure scale. The travel time approach not only can be used for continuous in-situ pressure determination at room and high temperatures, during compression and decompression, but also bears a unique capability that none of the previous scales can deliver, i.e., simultaneous pressure and temperature determination with high accuracy (±0.16 GPa in pressure and ±17 °C in temperature). Therefore, the new in-situ Al₂O₃ pressure gauge is expected to enable new and expanded opportunities for offline laboratory studies of solid and liquid materials under high pressure and high temperature in multi-anvil apparatus.
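    A hedged sketch of the simultaneous P-T determination described above: two calibrated travel times give two equations in the two unknowns (P, T). The calibration functions and measured values below are made-up placeholders, not the paper's Al₂O₃/NaCl calibration; they only illustrate the two-equations, two-unknowns inversion.

```python
from scipy.optimize import fsolve

def t_p_wave(P, T):   # hypothetical calibration: travel time (µs) vs P (GPa), T (°C)
    return 1.000 - 0.004 * P + 0.00008 * T

def t_s_wave(P, T):   # second, independent hypothetical calibration
    return 1.800 - 0.006 * P + 0.00020 * T

def residual(x, tp_meas, ts_meas):
    P, T = x
    return [t_p_wave(P, T) - tp_meas, t_s_wave(P, T) - ts_meas]

tp_meas, ts_meas = 0.992, 1.820            # example "measured" travel times
P, T = fsolve(residual, x0=[5.0, 300.0], args=(tp_meas, ts_meas))
print(f"P ≈ {P:.2f} GPa, T ≈ {T:.0f} °C")   # → P ≈ 10 GPa, T ≈ 400 °C here
```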

  17. Monitoring fossil fuel sources of methane in Australia

    NASA Astrophysics Data System (ADS)

    Loh, Zoe; Etheridge, David; Luhar, Ashok; Hibberd, Mark; Thatcher, Marcus; Noonan, Julie; Thornton, David; Spencer, Darren; Gregory, Rebecca; Jenkins, Charles; Zegelin, Steve; Leuning, Ray; Day, Stuart; Barrett, Damian

    2017-04-01

    CSIRO has been active in identifying and quantifying methane emissions from a range of fossil fuel sources in Australia over the past decade. We present here a history of the development of our work in this domain. While we have principally focused on optimising the use of long-term, fixed-location, high-precision monitoring, paired with both forward and inverse modelling techniques suitable for either local or regional scales, we have also incorporated mobile ground surveys and flux calculations from plumes in some contexts. We initially developed leak detection methodologies for geological carbon storage at a local scale using a Bayesian probabilistic approach coupled to a backward Lagrangian particle dispersion model (Luhar et al., JGR, 2014), and single-point monitoring with sector analysis (Etheridge et al., in prep.). We have since expanded our modelling techniques to regional scales using both forward and inverse approaches to constrain methane emissions from coal mining and coal seam gas (CSG) production. The Surat Basin (Queensland, Australia) is a region of rapidly expanding CSG production, in which we have established a pair of carefully located, well-intercalibrated monitoring stations. These data sets provide an almost continuous record of (i) background air arriving at the Surat Basin, and (ii) the signal resulting from methane emissions within the Basin, i.e. the total downwind methane concentration (comprising emissions from natural geological seeps, agricultural and biogenic sources, and fugitive emissions from CSG production) minus the background or upwind concentration. We will present our latest results on monitoring from the Surat Basin and their application to estimating methane emissions.

  18. Fixed Point Learning Based Intelligent Traffic Control System

    NASA Astrophysics Data System (ADS)

    Zongyao, Wang; Cong, Sui; Cheng, Shao

    2017-10-01

    Fixed point learning has become an important tool for analysing large-scale distributed systems such as urban traffic networks. This paper presents a fixed-point-learning-based intelligent traffic network control system. The system applies the convergence property of the fixed-point theorem to optimize traffic flow density, achieving maximum usage of road resources by averaging traffic flow density across the traffic network. The system is built on a decentralized structure and intelligent cooperation; no central controller is needed to manage it. The proposed system is simple, effective, and feasible for practical use. Its performance is tested via theoretical proof and simulations. The results demonstrate that the system can effectively relieve traffic congestion and increase the vehicles' average speed, and that it is flexible and reliable in practice.
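    A schematic of the averaging idea as a fixed-point iteration (our toy Python with an illustrative five-node road graph, not the paper's algorithm): each intersection repeatedly moves its local density toward the mean of its neighbours until a fixed point is reached.

```python
import numpy as np

# adjacency matrix of a small symmetric 5-intersection road graph (illustrative)
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
deg = A.sum(axis=1)

rho = np.array([0.9, 0.2, 0.5, 0.1, 0.8])         # initial traffic densities
for it in range(1000):
    rho_next = 0.5 * rho + 0.5 * (A @ rho) / deg  # relax toward neighbour mean
    if np.max(np.abs(rho_next - rho)) < 1e-10:    # fixed point reached
        break
    rho = rho_next

print(f"converged after {it} iterations: {np.round(rho, 4)}")  # densities even out
```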

  19. Incorporating economies of scale in the cost estimation in economic evaluation of PCV and HPV vaccination programmes in the Philippines: a game changer?

    PubMed

    Suwanthawornkul, Thanthima; Praditsitthikorn, Naiyana; Kulpeng, Wantanee; Haasis, Manuel Alexander; Guerrero, Anna Melissa; Teerawattananon, Yot

    2018-01-01

    Many economic evaluations ignore economies of scale in their cost estimation, assuming that cost parameters have a linear relationship with the level of production. Economies of scale arise when the average total cost of producing a product decreases with increasing volume, because variable costs fall with more efficient operation. This study investigates the significance of applying the economies of scale concept (the saving in costs gained by an increased level of production) in the economic evaluation of pneumococcal conjugate vaccine (PCV) and human papillomavirus (HPV) vaccinations. The fixed and variable costs of providing partial (20% coverage) and universal (100% coverage) vaccination programs in the Philippines were estimated using various methods, including a questionnaire survey, focus-group discussions, and analysis of secondary data. Costing parameters were used as inputs to the two economic evaluation models for PCV and HPV. Incremental cost-effectiveness ratios (ICERs) and 5-year budget impacts with and without applying economies of scale to the costing parameters for partial and universal coverage were compared in order to determine the effect of these different costing approaches. The program costs of partial coverage for the two immunisation programs were not very different when applying and not applying the economies of scale concept. Nevertheless, the program costs for universal coverage were 0.26 and 0.32 times lower when applying economies of scale compared to not applying economies of scale for the pneumococcal and human papillomavirus vaccinations, respectively. ICERs varied by up to 98% for the pneumococcal vaccination, whereas the change in ICERs for the human papillomavirus vaccination depended on both the costs of cervical cancer screening and the vaccination program. This results in a significant difference in the 5-year budget impact, amounting to 30% and 40% reductions in the 5-year budget impact for the pneumococcal and human papillomavirus vaccination programs, respectively. This study demonstrated the feasibility and importance of applying economies of scale in the cost estimation in economic evaluation, which can lead to different conclusions about the value for money of interventions, particularly population-wide interventions such as vaccination programs. The economies of scale approach to costing is recommended for inclusion in methodological guidelines for conducting economic evaluations.
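    Toy numbers (assumed Python, not the paper's Philippine costing data) showing how a sublinear, economies-of-scale cost curve changes an ICER relative to linear costing:

```python
def programme_cost_linear(doses, fixed=1e6, unit_cost=10.0):
    """Linear costing: every dose costs the same regardless of volume."""
    return fixed + unit_cost * doses

def programme_cost_scale(doses, fixed=1e6, unit_cost=10.0, ref=1e6, elasticity=0.8):
    """Economies of scale: variable cost grows sublinearly with volume."""
    return fixed + unit_cost * ref * (doses / ref) ** elasticity

doses, dalys_averted = 10e6, 50_000.0    # hypothetical universal-coverage scenario
for cost_fn in (programme_cost_linear, programme_cost_scale):
    icer = cost_fn(doses) / dalys_averted  # cost per DALY averted vs no programme
    print(f"{cost_fn.__name__}: ICER ≈ {icer:,.0f} per DALY averted")
```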

  20. Scaled Centrifugal Compressor Program.

    DTIC Science & Technology

    1986-10-31

    Small compressors in turbo-shaft, turbofan, and turboprop engines used in rotorcraft, fixed-wing general aviation, and cruise missile aircraft. [The remainder of this record is OCR residue of the report cover page; recoverable metadata: Scaled Centrifugal Compressor Program, Garrett Turbine Engine Company, Phoenix AZ, final report dated 31 Oct 1986, NASA Contractor Report (report number garbled in scan).]

  1. Gauge fixing and BFV quantization

    NASA Astrophysics Data System (ADS)

    Rogers, Alice

    2000-01-01

    Non-singularity conditions are established for the Batalin-Fradkin-Vilkovisky (BFV) gauge-fixing fermion which are sufficient for it to lead to the correct path integral for a theory with constraints canonically quantized in the BFV approach. The conditions ensure that the anticommutator of this fermion with the BRST charge regularizes the path integral by regularizing the trace over non-physical states in each ghost sector. The results are applied to the quantization of a system which has a Gribov problem, using a non-standard form of the gauge-fixing fermion.

  2. Anomalous properties of the acoustic excitations in glasses on the mesoscopic length scale.

    PubMed

    Monaco, Giulio; Mossa, Stefano

    2009-10-06

    The low-temperature thermal properties of dielectric crystals are governed by acoustic excitations with large wavelengths that are well described by plane waves. This is the Debye model, which rests on the assumption that the medium is an elastic continuum, holds true for acoustic wavelengths large on the microscopic scale fixed by the interatomic spacing, and gradually breaks down on approaching it. Glasses are characterized as well by universal low-temperature thermal properties that are, however, anomalous with respect to those of the corresponding crystalline phases. Related universal anomalies also appear in the low-frequency vibrational density of states and, despite a longstanding debate, remain poorly understood. By using molecular dynamics simulations of a model monatomic glass of extremely large size, we show that in glasses the structural disorder undermines the Debye model in a subtle way: The elastic continuum approximation for the acoustic excitations breaks down abruptly on the mesoscopic, medium-range-order length scale of approximately 10 interatomic spacings, where it still works well for the corresponding crystalline systems. On this scale, the sound velocity shows a marked reduction with respect to the macroscopic value. This reduction turns out to be closely related to the universal excess over the Debye model prediction found in glasses at frequencies of approximately 1 THz in the vibrational density of states or at temperatures of approximately 10 K in the specific heat.

  3. Asymmetric fluid criticality. I. Scaling with pressure mixing.

    PubMed

    Kim, Young C; Fisher, Michael E; Orkoulas, G

    2003-06-01

    The thermodynamic behavior of a fluid near a vapor-liquid and, hence, asymmetric critical point is discussed within a general "complete" scaling theory incorporating pressure mixing in the nonlinear scaling fields as well as corrections to scaling. This theory allows for a Yang-Yang anomaly in which μ″_σ(T), the second temperature derivative of the chemical potential along the phase boundary, diverges like the specific heat when T → Tc; it also generates a leading singular term, |t|^{2β}, in the coexistence-curve diameter, where t ≡ (T − Tc)/Tc. The behavior of various special loci, such as the critical isochore, the critical isotherm, and the k-inflection loci, on which χ^{(k)} ≡ χ(ρ,T)/ρ^k (with χ = ρ²k_B·T·K_T) and C_V^{(k)} ≡ C_V(ρ,T)/ρ^k are maximal at fixed T, is carefully elucidated. These results are useful for analyzing simulations and experiments, since particular, nonuniversal values of k specify loci that approach the critical density most rapidly and reflect the pressure-mixing coefficient. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte. For comparison, a discussion of the classical (or Landau) theory is presented briefly and various interesting loci are determined explicitly and illustrated quantitatively for a van der Waals fluid.

  4. Factorization and resummation of Higgs boson differential distributions in soft-collinear effective theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mantry, Sonny; Petriello, Frank

    We derive a factorization theorem for the Higgs boson transverse momentum (p_T) and rapidity (Y) distributions at hadron colliders, using the soft-collinear effective theory (SCET), for m_h ≫ p_T ≫ Λ_QCD, where m_h denotes the Higgs mass. In addition to the factorization of the various scales involved, the perturbative physics at the p_T scale is further factorized into two collinear impact-parameter beam functions (IBFs) and an inverse soft function (ISF). These newly defined functions are of a universal nature for the study of differential distributions at hadron colliders. The additional factorization of the p_T-scale physics simplifies the implementation of higher-order radiative corrections in α_s(p_T). We derive formulas for factorization in both momentum and impact-parameter space and discuss the relationship between them. Large logarithms of the relevant scales in the problem are summed using the renormalization group equations of the effective theories. Power corrections to the factorization theorem in p_T/m_h and Λ_QCD/p_T can be systematically derived. We perform multiple consistency checks on our factorization theorem, including a comparison with known fixed-order QCD results. We compare the SCET factorization theorem with the Collins-Soper-Sterman approach to low-p_T resummation.

  5. Universal self-similar dynamics of relativistic and nonrelativistic field theories near nonthermal fixed points

    NASA Astrophysics Data System (ADS)

    Piñeiro Orioli, Asier; Boguslavski, Kirill; Berges, Jürgen

    2015-07-01

    We investigate universal behavior of isolated many-body systems far from equilibrium, which is relevant for a wide range of applications from ultracold quantum gases to high-energy particle physics. The universality is based on the existence of nonthermal fixed points, which represent nonequilibrium attractor solutions with self-similar scaling behavior. The corresponding dynamic universality classes turn out to be remarkably large, encompassing both relativistic as well as nonrelativistic quantum and classical systems. For the examples of nonrelativistic (Gross-Pitaevskii) and relativistic scalar field theory with quartic self-interactions, we demonstrate that infrared scaling exponents as well as scaling functions agree. We perform two independent nonperturbative calculations, first by using classical-statistical lattice simulation techniques and second by applying a vertex-resummed kinetic theory. The latter extends kinetic descriptions to the nonperturbative regime of overoccupied modes. Our results open new perspectives to learn from experiments with cold atoms aspects about the dynamics during the early stages of our universe.
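    The self-similar attractor behavior invoked above is conventionally written in the following scaling form (standard in this literature; α and β are universal exponents and f_S a universal scaling function):

```latex
% Self-similar evolution of the distribution function near a nonthermal
% fixed point:
f(t, p) = t^{\alpha} \, f_S\!\left( t^{\beta} p \right).
```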

  6. Choice of time-scale in Cox's model analysis of epidemiologic cohort data: a simulation study.

    PubMed

    Thiébaut, Anne C M; Bénichou, Jacques

    2004-12-30

    Cox's regression model is widely used for assessing associations between potential risk factors and disease occurrence in epidemiologic cohort studies. Although age is often a strong determinant of disease risk, authors have frequently used time-on-study instead of age as the time-scale, as for clinical trials. Unless the baseline hazard is an exponential function of age, this approach can yield different estimates of relative hazards than using age as the time-scale, even when age is adjusted for. We performed a simulation study in order to investigate the existence and magnitude of bias for different degrees of association between age and the covariate of interest. Age to disease onset was generated from exponential, Weibull or piecewise Weibull distributions, and both fixed and time-dependent dichotomous covariates were considered. We observed no bias upon using age as the time-scale. Upon using time-on-study, we verified the absence of bias for exponentially distributed age to disease onset. For non-exponential distributions, we found that bias could occur even when the covariate of interest was independent from age. It could be severe in case of substantial association with age, especially with time-dependent covariates. These findings were illustrated on data from a cohort of 84,329 French women followed prospectively for breast cancer occurrence. In view of our results, we strongly recommend not using time-on-study as the time-scale for analysing epidemiologic cohort data. 2004 John Wiley & Sons, Ltd.

  7. Tailoring treatment of haemophilia B: accounting for the distribution and clearance of standard and extended half-life FIX concentrates.

    PubMed

    Iorio, Alfonso; Fischer, Kathelijn; Blanchette, Victor; Rangarajan, Savita; Young, Guy; Morfini, Massimo

    2017-06-02

    The prophylactic administration of factor IX (FIX) is considered the most effective treatment for haemophilia B. The inter-individual variability and complexity of the pharmacokinetics (PK) of FIX, and the rarity of the disease, have hampered identification of an optimal treatment regimen. The recent introduction of extended half-life recombinant FIX molecules (EHL-rFIX) has prompted a thorough reassessment of the clinical efficacy, PK, and pharmacodynamics of plasma-derived and recombinant FIX. First, using longer sampling times and multi-compartmental PK models has led to more precise (and favourable) PK estimates for FIX than were appreciated in the past. Second, investigating the distribution of FIX in the body beyond the vascular space (which is implied by its complex kinetics) has opened a new research field on the role of extravascular FIX. Third, measuring plasma levels of EHL-rFIX has shown that different aPTT reagents have different accuracy in measuring different FIX molecules. How will this new knowledge reflect on clinical practice? Clinical decision making in haemophilia B requires some caution and expertise. First, comparisons between different FIX molecules must be assessed taking into consideration the comparability of the populations studied and the PK models used. Second, individual PK estimates must rely on multi-compartmental models, and would benefit from adopting a population PK approach. Optimal sampling times need to be adapted to the prolonged half-life of the new EHL FIX products. Finally, cost considerations may apply; these are beyond the scope of this manuscript but may be deeply connected with the PK considerations discussed in this communication.
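    A hedged illustration of why multi-compartment PK matters here: a two-compartment model retains a slowly equilibrating extravascular pool, giving a biphasic decay that a one-compartment fit misses. The rate constants below are illustrative, not fitted FIX parameters.

```python
# Two-compartment PK model integrated with SciPy.
import numpy as np
from scipy.integrate import solve_ivp

k10, k12, k21 = 0.10, 0.05, 0.02     # 1/h: elimination, central<->peripheral

def rhs(t, y):
    c, p = y                          # central and peripheral amounts
    return [-(k10 + k12) * c + k21 * p,
            k12 * c - k21 * p]

sol = solve_ivp(rhs, (0.0, 168.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 168.0, 8)
print(np.round(sol.sol(t)[0], 2))     # biphasic central-compartment decay, one week
```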

  8. Multi-Product Total Cost Functions for Higher Education: The Case of Chinese Research Universities

    ERIC Educational Resources Information Center

    Longlong, Hou; Fengliang, Li; Weifang, Min

    2009-01-01

    This paper empirically investigates the economies of scale and economies of scope for the Chinese research universities by employing the flexible fixed cost quadratic (FFCQ) function. The empirical results show that both economies of scale and economies of scope exist in the Chinese higher education system and support the common belief of…

  9. Fully implicit adaptive mesh refinement MHD algorithm

    NASA Astrophysics Data System (ADS)

    Philip, Bobby

    2005-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former results in stiffness due to the presence of very fast waves. The latter requires one to resolve the localized features that the system develops. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. To our knowledge, a scalable, fully implicit AMR algorithm has not been accomplished before for MHD. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002)] to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite (FAC) algorithms) for scalability. We will demonstrate that the concept is indeed feasible, featuring optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations will be presented on a variety of problems.

  10. Fully implicit adaptive mesh refinement algorithm for reduced MHD

    NASA Astrophysics Data System (ADS)

    Philip, Bobby; Pernice, Michael; Chacon, Luis

    2006-10-01

    In the macroscopic simulation of plasmas, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. Traditional approaches based on explicit time integration techniques and fixed meshes are not suitable for this challenge, as such approaches prevent the modeler from using realistic plasma parameters to keep the computation feasible. We propose here a novel approach, based on implicit methods and structured adaptive mesh refinement (SAMR). Our emphasis is on both accuracy and scalability with the number of degrees of freedom. As a proof-of-principle, we focus on the reduced resistive MHD model as a basic MHD model paradigm, which is truly multiscale. The approach taken here is to adapt mature physics-based technology to AMR grids, and employ AMR-aware multilevel techniques (such as fast adaptive composite grid (FAC) algorithms) for scalability. We demonstrate that the concept is indeed feasible, featuring near-optimal scalability under grid refinement. Results of fully-implicit, dynamically-adaptive AMR simulations in challenging dissipation regimes will be presented on a variety of problems that benefit from this capability, including tearing modes, the island coalescence instability, and the tilt mode instability. [L. Chacón et al., J. Comput. Phys. 178 (1), 15-36 (2002); B. Philip, M. Pernice, and L. Chacón, Lecture Notes in Computational Science and Engineering, accepted (2006)]

  11. Towards the Development of a Low Cost Airborne Sensing System to Monitor Dust Particles after Blasting at Open-Pit Mine Sites

    PubMed Central

    Alvarado, Miguel; Gonzalez, Felipe; Fletcher, Andrew; Doshi, Ashray

    2015-01-01

    Blasting is an integral part of large-scale open cut mining that often occurs in close proximity to population centers and often results in the emission of particulate material and gases potentially hazardous to health. Current air quality monitoring methods rely on limited numbers of fixed sampling locations to validate a complex fluid environment and collect sufficient data to confirm model effectiveness. This paper describes the development of a methodology to address the need for a more precise approach that is capable of characterizing blasting plumes in near-real time. The integration of the system required the modification and integration of an opto-electrical dust sensor, SHARP GP2Y10, into a small fixed-wing and multi-rotor copter, resulting in the collection of data streamed during flight. The paper also describes the calibration of the optical sensor with an industry grade dust-monitoring device, Dusttrak 8520, demonstrating a high correlation between them, with correlation coefficients (R²) greater than 0.9. The laboratory and field tests demonstrate the feasibility of coupling the sensor with the UAVs. However, further work must be done in the areas of sensor selection and calibration as well as flight planning. PMID:26274959
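    A minimal version of the calibration step described above (assumed Python; the data pairs are placeholders for the streamed SHARP GP2Y10 voltages and Dusttrak readings):

```python
import numpy as np

sharp = np.array([0.12, 0.35, 0.50, 0.71, 0.90, 1.10])       # sensor output (V)
dusttrak = np.array([18.0, 55.0, 80.0, 115.0, 148.0, 181.0])  # reference (µg/m³)

slope, intercept = np.polyfit(sharp, dusttrak, 1)  # least-squares linear calibration
pred = slope * sharp + intercept
r2 = 1 - np.sum((dusttrak - pred) ** 2) / np.sum((dusttrak - dusttrak.mean()) ** 2)
print(f"dust ≈ {slope:.1f}·V + {intercept:.1f},  R² = {r2:.3f}")
```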

  12. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed-pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since genuine spatial variations are treated as noise. We introduce a new statistical method to reduce these ghosting artifacts. Our method proposes local-constant statistics: the temporal signal distribution at each pixel is not assumed constant across the whole image, but only locally, i.e., statistically constant in a local region around each pixel while allowed to vary on larger scales. Under the assumption that the fixed-pattern noise concentrates in a higher spatial-frequency band than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and a LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.
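    A compact sketch of the local-constant-statistics idea (our illustration in Python, with a Gaussian low-pass standing in for the paper's wavelet separation): per-pixel temporal means are low-pass filtered to keep genuine scene variation, and the high-frequency residual is treated as offset fixed-pattern noise.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_offset_fpn(frames, sigma=8.0):
    """frames: (T, H, W) video stack; returns an offset FPN estimate (H, W)."""
    temporal_mean = frames.mean(axis=0)                      # per-pixel statistics
    scene_variation = gaussian_filter(temporal_mean, sigma)  # locally varying mean
    return temporal_mean - scene_variation                   # high-frequency pattern

# usage: subtract the estimated offset pattern from every frame
# corrected = frames - estimate_offset_fpn(frames)
```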

  13. Towards the Development of a Low Cost Airborne Sensing System to Monitor Dust Particles after Blasting at Open-Pit Mine Sites.

    PubMed

    Alvarado, Miguel; Gonzalez, Felipe; Fletcher, Andrew; Doshi, Ashray

    2015-08-12

    Blasting is an integral part of large-scale open cut mining that often occurs in close proximity to population centers and often results in the emission of particulate material and gases potentially hazardous to health. Current air quality monitoring methods rely on limited numbers of fixed sampling locations to validate a complex fluid environment and collect sufficient data to confirm model effectiveness. This paper describes the development of a methodology to address the need for a more precise approach that is capable of characterizing blasting plumes in near-real time. The integration of the system required the modification and integration of an opto-electrical dust sensor, SHARP GP2Y10, into a small fixed-wing and multi-rotor copter, resulting in the collection of data streamed during flight. The paper also describes the calibration of the optical sensor with an industry grade dust-monitoring device, Dusttrak 8520, demonstrating a high correlation between them, with correlation coefficients (R²) greater than 0.9. The laboratory and field tests demonstrate the feasibility of coupling the sensor with the UAVs. However, further work must be done in the areas of sensor selection and calibration as well as flight planning.

  14. High-spatial-resolution electron density measurement by Langmuir probe for multi-point observations using tiny spacecraft

    NASA Astrophysics Data System (ADS)

    Hoang, H.; Røed, K.; Bekkeng, T. A.; Trondsen, E.; Clausen, L. B. N.; Miloch, W. J.; Moen, J. I.

    2017-11-01

    A method for evaluating electron density using a single fixed-bias Langmuir probe is presented. The technique allows high spatio-temporal resolution electron density measurements, which can be effectively carried out by tiny spacecraft for multi-point observations in the ionosphere. The results are compared with the multi-needle Langmuir probe system, a scientific instrument developed at the University of Oslo comprising four fixed-bias cylindrical probes that allow small-scale plasma density structures to be characterized in the ionosphere. The technique proposed in this paper can comply with the requirements of future small spacecraft, where cost, the limited space available on the craft, low power consumption, and limited data-link capacity all need to be addressed. The first experimental results, in both the plasma laboratory and space, confirm the efficiency of the new approach. Moreover, detailed analyses of two challenging issues in deploying a DC Langmuir probe on a tiny spacecraft, namely the limited conductive area of the spacecraft and probe surface contamination, are presented in the paper. It is demonstrated that the limited conductive area, depending on the application, either is of no concern for the experiment or can be resolved by mitigation methods. Surface contamination has a small impact on the performance of the developed probe.

  15. Monte Carlo simulations of lattice models for single polymer systems

    NASA Astrophysics Data System (ADS)

    Hsu, Hsiao-Ping

    2014-10-01

    Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
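    A tiny illustration of one ingredient above, assuming Python: a non-reversible random walk on the simple cubic lattice (immediate reversals forbidden) and its nearest-bond orientation correlation, from which a persistence length can be read off.

```python
import numpy as np

rng = np.random.default_rng(1)
steps = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]])

def nrrw_bonds(n):
    """Bond vectors of an n-step non-reversible random walk on the cubic lattice."""
    bonds, prev = [], None
    for _ in range(n):
        while True:
            s = steps[rng.integers(6)]
            if prev is None or not np.array_equal(s, -prev):  # forbid back-step
                break
        bonds.append(s)
        prev = s
    return np.array(bonds)

b = nrrw_bonds(100_000)
c1 = np.mean(np.einsum('ij,ij->i', b[:-1], b[1:]))   # <b_i . b_{i+1}>
print(f"nearest-bond correlation ≈ {c1:.3f} (exact value 1/5 for this walk)")
print(f"persistence length ≈ {-1 / np.log(c1):.2f} bonds")
```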

  16. Impact of a fixed price system on the supply of institutional long-term care: a comparative study of Japanese and German metropolitan areas.

    PubMed

    Yoshida, Keiko; Kawahara, Kazuo

    2014-02-01

    The need for institutional long-term care is increasing as the population ages and the pool of informal care givers declines. Care services are often limited when funding is controlled publicly. Fees for Japanese institutional care are publicly fixed and supply is short, particularly in expensive metropolitan areas. Those insured by universal long-term care insurance (LTCI) are faced with geographically inequitable access. The aim of this study was to examine the impact of a fixed price system on the supply of institutional care in terms of equity. The data were derived from official statistics sources in both Japan and Germany, and a self-administered questionnaire was used in Japan in 2011. Cross-sectional multiple regression analyses were used to examine factors affecting bed supply of institutional/residential care in fixed price and free prices systems in Tokyo (Japan), and an individually-bargained price system in North Rhine-Westphalia (Germany). Variables relating to costs and needs were used to test hypotheses of cost-dependency and need-orientation of bed supply in each price system. Analyses were conducted using data both before and after the introduction of LTCI, and the results of each system were qualitatively compared. Total supply of institutional care in Tokyo under fixed pricing was found to be cost-dependent regarding capital costs and scale economies, and negatively related to need. These relationships have however weakened in recent years, possibly caused by political interventions under LTCI. Supply of residential care in Tokyo under free pricing was need-oriented and cost-dependent only regarding scale economies. Supply in North Rhine-Westphalia under individually bargained pricing was cost-independent and not negatively related to need. Findings suggest that publicly funded fixed prices have a negative impact on geographically equitable supply of institutional care. The contrasting results of the non-fixed-price systems for Japanese residential care and German institutional care provide further theoretical supports for this and indicate possible solutions against inequitable supply.

  17. Impact of a fixed price system on the supply of institutional long-term care: a comparative study of Japanese and German metropolitan areas

    PubMed Central

    2014-01-01

    Background The need for institutional long-term care is increasing as the population ages and the pool of informal care givers declines. Care services are often limited when funding is controlled publicly. Fees for Japanese institutional care are publicly fixed and supply is short, particularly in expensive metropolitan areas. Those insured by universal long-term care insurance (LTCI) are faced with geographically inequitable access. The aim of this study was to examine the impact of a fixed price system on the supply of institutional care in terms of equity. Methods The data were derived from official statistics sources in both Japan and Germany, and a self-administered questionnaire was used in Japan in 2011. Cross-sectional multiple regression analyses were used to examine factors affecting bed supply of institutional/residential care in fixed price and free prices systems in Tokyo (Japan), and an individually-bargained price system in North Rhine-Westphalia (Germany). Variables relating to costs and needs were used to test hypotheses of cost-dependency and need-orientation of bed supply in each price system. Analyses were conducted using data both before and after the introduction of LTCI, and the results of each system were qualitatively compared. Results Total supply of institutional care in Tokyo under fixed pricing was found to be cost-dependent regarding capital costs and scale economies, and negatively related to need. These relationships have however weakened in recent years, possibly caused by political interventions under LTCI. Supply of residential care in Tokyo under free pricing was need-oriented and cost-dependent only regarding scale economies. Supply in North Rhine-Westphalia under individually bargained pricing was cost-independent and not negatively related to need. Conclusions Findings suggest that publicly funded fixed prices have a negative impact on geographically equitable supply of institutional care. The contrasting results of the non-fixed-price systems for Japanese residential care and German institutional care provide further theoretical supports for this and indicate possible solutions against inequitable supply. PMID:24485330

  18. Reduced bleeding events with subcutaneous administration of recombinant human factor IX in immune-tolerant hemophilia B dogs.

    PubMed

    Russell, Karen E; Olsen, Eva H N; Raymer, Robin A; Merricks, Elizabeth P; Bellinger, Dwight A; Read, Marjorie S; Rup, Bonita J; Keith, James C; McCarthy, Kyle P; Schaub, Robert G; Nichols, Timothy C

    2003-12-15

    Intravenous administration of recombinant human factor IX (rhFIX) acutely corrects the coagulopathy in hemophilia B dogs. To date, 20 of 20 dogs developed inhibitory antibodies to the xenoprotein, making it impossible to determine if new human FIX products, formulations, or methods of chronic administration can reduce bleeding frequency. Our goal was to determine whether hemophilia B dogs rendered tolerant to rhFIX would have reduced bleeding episodes while on sustained prophylactic rhFIX administered subcutaneously. Reproducible methods were developed for inducing tolerance to rhFIX in this strain of hemophilia B dogs, resulting in a significant reduction in the development of inhibitors relative to historical controls (5 of 12 versus 20 of 20, P <.001). The 7 of 12 tolerized hemophilia B dogs exhibited shortened whole blood clotting times (WBCTs), sustained detectable FIX antigen, undetectable Bethesda inhibitors, transient or no detectable antihuman FIX antibody titers by enzyme-linked immunosorbent assay (ELISA), and normal clearance of infused rhFIX. Tolerized hemophilia B dogs had a 69% reduction in bleeding frequency in year 1 compared with nontolerized hemophilia B dogs (P =.0007). If proven safe in human clinical trials, subcutaneous rhFIX may provide an alternate approach to prophylactic therapy in selected patients with hemophilia B.

  19. A laser-deposition approach to compositional-spread discovery of materials on conventional sample sizes

    NASA Astrophysics Data System (ADS)

    Christen, Hans M.; Ohkubo, Isao; Rouleau, Christopher M.; Jellison, Gerald E., Jr.; Puretzky, Alex A.; Geohegan, David B.; Lowndes, Douglas H.

    2005-01-01

    Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrₓBa₁₋ₓNb₂O₆) and magnetic perovskites (Sr₁₋ₓCaₓRuO₃), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.

  20. Economic efficiency of environmental management system operation in industrial companies

    NASA Astrophysics Data System (ADS)

    Dukmasova, N.; Ershova, I.; Plastinina, I.; Boyarinov, A.

    2017-06-01

    The article examines the efficiency of environmental management system (EMS) implementation in Russian machine-building companies. The analysis showed that Russia clearly lags behind other developed and developing countries in terms of the number of ISO 14001 certified companies. According to the authors, the main cause of this weak implementation activity is the lack of interest in ISO 14001 certification on the Russian market. Five years of primary (field) research analysing the environmental priorities of the general public suggest that the image component of the economic benefits increases the economic and financial performance of the company through greater customer loyalty to the products of the EMS adopter. To quantify the economic benefits obtained from EMS implementation, a methodological approach has been developed that accounts for the image component and for the decrease in semi-fixed costs due to the increase in production scale. This approach has been tested at a machine-building electrical equipment manufacturer in Ekaterinburg. Processing the data with this approach yields the conclusion that an EMS gives its adopters a solid additional competitive advantage.

  1. Interception of moving objects while walking in children with spastic hemiparetic cerebral palsy.

    PubMed

    Ricken, Annieck X C; Savelsbergh, G J P; Bennett, S J

    2007-01-15

    The purpose of the study was to examine the coordination of reaching and walking behaviour when children with Spastic Hemiparetic Cerebral Palsy (SHCP) intercept an approaching and hence externally-timed object. Using either the impaired or non-impaired arm, children intercepted a ball approaching from a fixed distance with one of three velocities. Each participant's initial starting position was scaled to their maximum walking velocity determined prior to testing; for the medium ball velocity, participants would arrive at the point of interception at the correct time if they walked with their maximum velocity. Children with SHCP adapted their reaching and walking behaviour to the different ball approach velocities. These adaptations were exhibited when using the impaired and non-impaired arm, and resulted in similar outcome performance irrespective of which arm was used. Still, children with SHCP found it necessary to increase trunk movement to compensate for the decreased elbow excursion and a decreased peak velocity of the impaired arm. Children with SHCP exhibited specific adaptations to their altered movement capabilities when performing a behaviourally-realistic task. The provision of an external timing constraint appeared to facilitate both reaching and walking movements and hence could represent a useful technique in rehabilitation.

  2. Optimisation of a propagation-based x-ray phase-contrast micro-CT system

    NASA Astrophysics Data System (ADS)

    Nesterets, Yakov I.; Gureyev, Timur E.; Dimmock, Matthew R.

    2018-03-01

    Micro-CT scanners find applications in many areas ranging from biomedical research to materials science. In order to provide spatial resolution on a micron scale, these scanners are usually equipped with micro-focus, low-power x-ray sources and hence require long scanning times to produce high-resolution 3D images of the object with an acceptable contrast-to-noise ratio. Propagation-based phase-contrast tomography (PB-PCT) has the potential to significantly improve the contrast-to-noise ratio (CNR) or, alternatively, reduce the image acquisition time while preserving the CNR and the spatial resolution. We propose a general approach for the optimisation of the PB-PCT imaging system. When applied to an imaging system with fixed source and detector parameters, this approach requires optimisation of only two independent geometrical parameters of the imaging system, i.e. the source-to-object distance R1 and the geometrical magnification M, in order to produce the best spatial resolution and CNR. If, in addition to R1 and M, the system parameter space also includes the source size and the anode potential, this approach allows one to find a unique configuration of the imaging system that produces the required spatial resolution and the best CNR.
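
    The optimal geometry in such a system follows from standard propagation-based imaging relations. As a rough illustration (a sketch using textbook blur formulas, not the authors' merit function), the following Python fragment scans the magnification for assumed source and detector blur sizes:

```python
# Illustrative only: combined source penumbra s*(M-1)/M and detector PSF d/M,
# added in quadrature and referred to the object plane. Values are assumptions.
import numpy as np

def object_plane_blur(s_um, d_um, M):
    return np.hypot(s_um * (M - 1.0) / M, d_um / M)

s, d = 10.0, 30.0                       # hypothetical source / detector FWHMs (um)
Ms = np.linspace(1.01, 20.0, 2000)
blur = object_plane_blur(s, d, Ms)
M_opt = Ms[np.argmin(blur)]
print(f"best magnification ~ {M_opt:.2f}, blur ~ {blur.min():.2f} um")
# For this simple model the optimum is M = 1 + (d/s)^2 = 10; the paper's full
# optimisation additionally trades resolution off against CNR.
```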

  3. Higgs mass prediction in the MSSM at three-loop level in a pure DR-bar context

    NASA Astrophysics Data System (ADS)

    Harlander, Robert V.; Klappert, Jonas; Voigt, Alexander

    2017-12-01

    The impact of the three-loop effects of order α_t α_s^2 on the mass of the light CP-even Higgs boson in the MSSM is studied in a pure DR-bar context. For this purpose, we implement the results of Kant et al. (JHEP 08:104, 2010) into the C++ module Himalaya and link it to FlexibleSUSY, a Mathematica and C++ package to create spectrum generators for BSM models. The three-loop result is compared to the fixed-order two-loop calculations of the original FlexibleSUSY and of FeynHiggs, as well as to the result based on an EFT approach. Aside from the expected reduction of the renormalization scale dependence with respect to the lower-order results, we find that the three-loop contributions significantly reduce the difference from the EFT prediction in the TeV region of the SUSY scale M_S. Himalaya can also be linked to other two-loop DR-bar codes, thus allowing for the elevation of these codes to the three-loop level.

  4. Methyl chloride via oxyhydrochlorination of methane: A building block for chemicals and fuels from natural gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benson, R.L.; Brown, S.S.D.; Ferguson, S.P.

    1995-12-31

    The objectives of this program are to (a) develop a process for converting natural gas to methyl chloride via an oxyhydrochlorination route using highly selective, stable catalysts in a fixed bed, (b) design a reactor capable of removing the large amount of heat generated in the process so as to control the reaction, (c) develop a recovery system capable of removing the methyl chloride from the product stream and (d) determine the economics and commercial viability of the process. The general approach has been as follows: (a) design and build a laboratory-scale reactor, (b) define and synthesize suitable OHC catalysts for evaluation, (c) select a first-generation OHC catalyst for Process Development Unit (PDU) trials, (d) design, construct and start up the PDU, (e) evaluate the packed-bed reactor design, (f) optimize the process, in particular product recovery operations, (g) determine the economics of the process, (h) complete preliminary engineering design for Phase II and (i) make a scale-up decision and formulate a business plan for Phase II. Conclusions regarding process development and catalyst development are presented.

  5. Segmentation of dermoscopy images using wavelet networks.

    PubMed

    Sadri, Amir Reza; Zekri, Maryam; Sadri, Saeed; Gheissari, Niloofar; Mokhtari, Mojgan; Kolahdouzan, Farzaneh

    2013-04-01

    This paper introduces a new approach for the segmentation of skin lesions in dermoscopic images based on a wavelet network (WN). The WN presented here is a member of fixed-grid WNs that is formed with no need for training. In this WN, after formation of the wavelet lattice, the shift and scale parameters of the wavelets are determined in two screening stages, effective wavelets are selected, and the orthogonal least squares (OLS) algorithm is used to calculate the network weights and to optimize the network structure. The two screening stages increase the globality of the wavelet lattice and provide a better estimation of the function, especially at larger scales. The R, G, and B values of a dermoscopy image are taken as the network inputs for the formation of the network structure. The image is then segmented and the exact boundary of the skin lesion determined accordingly. The segmentation algorithm was applied to 30 dermoscopic images and evaluated with 11 different metrics, using the segmentation result obtained by a skilled pathologist as the ground truth. Experimental results show that our method acts more effectively in comparison with some modern techniques that have been successfully used in many medical imaging problems.
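
    The OLS selection step the abstract refers to can be sketched in a few lines. The fragment below is a generic greedy orthogonal-least-squares regressor selection on random stand-in data; it omits the wavelet lattice and screening stages and is not the paper's implementation:

```python
import numpy as np

def ols_select(Phi, y, n_select):
    """Greedy OLS: repeatedly pick the candidate column of Phi with the largest
    error-reduction ratio, orthogonalising against the columns already chosen,
    then solve for the weights of the selected regressors."""
    selected, Q = [], []
    for _ in range(n_select):
        best_j, best_err, best_q = None, -np.inf, None
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            q = Phi[:, j].astype(float).copy()
            for qk in Q:                          # Gram-Schmidt step
                q -= (qk @ Phi[:, j]) / (qk @ qk) * qk
            err = (q @ y) ** 2 / (q @ q)          # error-reduction ratio
            if err > best_err:
                best_j, best_err, best_q = j, err, q
        selected.append(best_j)
        Q.append(best_q)
    weights, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
    return selected, weights

# toy usage: recover 3 informative regressors out of 30 candidates
rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 30))
y = Phi[:, [3, 7, 19]] @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)
print(ols_select(Phi, y, 3)[0])      # typically {3, 7, 19} in some order
```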

  6. An HMM model for coiled-coil domains and a comparison with PSSM-based predictions.

    PubMed

    Delorenzi, Mauro; Speed, Terry

    2002-04-01

    Large-scale sequence data require methods for the automated annotation of protein domains. Many of the predictive methods are based either on a Position Specific Scoring Matrix (PSSM) of fixed length or on a window-less Hidden Markov Model (HMM). The performance of the two approaches is tested for Coiled-Coil Domains (CCDs). The prediction of CCDs is used frequently, and its optimization seems worthwhile. We have conceived MARCOIL, an HMM for the recognition of proteins with a CCD on a genomic scale. A cross-validated study suggests that MARCOIL improves predictions compared to the traditional PSSM algorithm, especially for some protein families and for short CCDs. The study was designed to reveal differences inherent in the two methods. Potential confounding factors such as differences in the dimension of parameter space and in the parameter values were avoided by using the same amino acid propensities and by keeping the transition probabilities of the HMM constant during cross-validation. The prediction program and the databases are available at http://www.wehi.edu.au/bioweb/Mauro/Marcoil
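
    For readers unfamiliar with the fixed-length PSSM baseline that MARCOIL is compared against, the toy sketch below scores a sequence with a sliding window; the matrix here is a random placeholder rather than a real coiled-coil propensity table, and the window length is an assumption:

```python
# Toy fixed-length PSSM scoring: slide a window over the sequence and sum
# per-position log-odds scores. Real coiled-coil PSSMs encode heptad-position
# propensities; the random matrix below is a stand-in for illustration only.
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"
WINDOW = 28                                     # assumed window length
rng = np.random.default_rng(1)
pssm = rng.normal(size=(WINDOW, len(AA)))       # hypothetical log-odds matrix
idx = {a: i for i, a in enumerate(AA)}

def window_scores(seq):
    """Score of every length-WINDOW window; a residue's prediction would take
    the maximum over all windows covering it."""
    return [sum(pssm[k, idx[seq[s + k]]] for k in range(WINDOW))
            for s in range(len(seq) - WINDOW + 1)]

seq = "".join(rng.choice(list(AA), size=120))
print(max(window_scores(seq)))                  # compare against a threshold
```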

  7. A System Approach to Navy Medical Education and Training. Appendix 44. Competency Curriculum for Dental Assistant.

    DTIC Science & Technology

    1974-08-31

    This unit includes the following modules: 1. Removable Prosthodontic Appliances - Partial Dentures; 2. Removable Prosthodontic Appliances - Full Dentures; 3. Fixed Prosthodontic Appliances - Fixed Bridge; 4. Mouthguard.

  8. A Geometric Analysis of when Fixed Weighting Schemes Will Outperform Ordinary Least Squares

    ERIC Educational Resources Information Center

    Davis-Stober, Clintin P.

    2011-01-01

    Many researchers have demonstrated that fixed, exogenously chosen weights can be useful alternatives to Ordinary Least Squares (OLS) estimation within the linear model (e.g., Dawes, Am. Psychol. 34:571-582, 1979; Einhorn & Hogarth, Org. Behav. Human Perform. 13:171-192, 1975; Wainer, Psychol. Bull. 83:213-217, 1976). Generalizing the approach of…

  9. Air tankers in Southern California Fires...effectiveness in delivering retardants rated

    Treesearch

    Theodore G. Storey; Leon W. Cooley

    1967-01-01

    Eleven air attack experts were asked to rate 12 models of fixed-wing tankers and light helitankers for effectiveness in delivering chemical fire retardants under 21 typical situations. They rated fixed-wing tankers as more effective in strong winds, in crosswinds, and on downwind approaches, but helitankers as more effective in narrow canyons and on steep slopes. Certain...

  10. Mathematical analysis of frontal affinity chromatography in particle and membrane configurations.

    PubMed

    Tejeda-Mansir, A; Montesinos, R M; Guzmán, R

    2001-10-30

    The scaleup and optimization of large-scale affinity-chromatographic operations in the recovery, separation and purification of biochemical components is of major industrial importance. The development of mathematical models to describe affinity-chromatographic processes, and the use of these models in computer programs to predict column performance is an engineering approach that can help to attain these bioprocess engineering tasks successfully. Most affinity-chromatographic separations are operated in the frontal mode, using fixed-bed columns. Purely diffusive and perfusion particles and membrane-based affinity chromatography are among the main commercially available technologies for these separations. For a particular application, a basic understanding of the main similarities and differences between particle and membrane frontal affinity chromatography and how these characteristics are reflected in the transport models is of fundamental relevance. This review presents the basic theoretical considerations used in the development of particle and membrane affinity chromatography models that can be applied in the design and operation of large-scale affinity separations in fixed-bed columns. A transport model for column affinity chromatography that considers column dispersion, particle internal convection, external film resistance, finite kinetic rate, plus macropore and micropore resistances is analyzed as a framework for exploring further the mathematical analysis. Such models provide a general realistic description of almost all practical systems. Specific mathematical models that take into account geometric considerations and transport effects have been developed for both particle and membrane affinity chromatography systems. Some of the most common simplified models, based on linear driving-force (LDF) and equilibrium assumptions, are emphasized. Analytical solutions of the corresponding simplified dimensionless affinity models are presented. Particular methods for estimating the parameters that characterize the mass-transfer and adsorption mechanisms in affinity systems are described.
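
    A minimal numerical instance of the simplified linear-driving-force (LDF) fixed-bed model mentioned above can clarify how breakthrough curves arise. The sketch below uses arbitrary parameter values, Langmuir equilibrium, and a basic upwind discretisation; it is an illustration of the model class, not taken from the review:

```python
# Plug-flow fixed bed with LDF uptake dq/dt = k_ldf*(q* - q) and Langmuir
# equilibrium q* = qm*c/(Kd + c). All parameter values are placeholders.
import numpy as np

nz, L, u, eps = 100, 0.1, 1e-3, 0.4        # cells, bed length (m), velocity (m/s), voidage
qm, Kd, k_ldf = 50.0, 0.5, 0.05            # Langmuir capacity, dissociation const, LDF rate
c_feed, dz, dt = 1.0, 0.1 / 100, 0.05

c = np.zeros(nz)                           # fluid-phase concentration profile
q = np.zeros(nz)                           # adsorbed-phase loading profile
phase_ratio = (1 - eps) / eps

for step in range(200000):
    qstar = qm * c / (Kd + c)              # local equilibrium loading
    dqdt = k_ldf * (qstar - q)
    c_in = np.concatenate(([c_feed], c[:-1]))
    # upwind convection plus loss to the solid phase
    c += dt * (u / dz * (c_in - c) - phase_ratio * dqdt)
    q += dt * dqdt

print(f"outlet c/c0 after {step * dt:.0f} s: {c[-1] / c_feed:.3f}")
```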

  11. Robustness of predator-prey models for confinement regime transitions in fusion plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, H.; Chapman, S. C.; Department of Mathematics and Statistics, University of Tromso

    2013-04-15

    Energy transport and confinement in tokamak fusion plasmas is usually determined by the coupled nonlinear interactions of small-scale drift turbulence and larger-scale coherent nonlinear structures, such as zonal flows, together with free energy sources such as temperature gradients. Zero-dimensional models, designed to embody plausible physical narratives for these interactions, can help to identify the origin of enhanced energy confinement and of transitions between confinement regimes. A prime zero-dimensional paradigm is predator-prey or Lotka-Volterra. Here, we extend a successful three-variable (temperature gradient; microturbulence level; one class of coherent structure) model in this genre [M. A. Malkov and P. H. Diamond, Phys. Plasmas 16, 012504 (2009)] by adding a fourth variable representing a second class of coherent structure. This requires a fourth coupled nonlinear ordinary differential equation. We investigate the degree of invariance of the phenomenology generated by the model of Malkov and Diamond, given this additional physics. We study and compare the long-time behaviour of the three-equation and four-equation systems, their evolution towards the final state, and their attractive fixed points and limit cycles. We explore the sensitivity of paths to attractors. It is found that, for example, an attractive fixed point of the three-equation system can become a limit cycle of the four-equation system. Addressing these questions, which we refer to collectively as 'robustness' for convenience, is particularly important for models which, as here, generate sharp transitions in the values of system variables that may replicate some key features of confinement transitions. Our results help to establish the robustness of the zero-dimensional model approach to capturing observed confinement phenomenology in tokamak fusion plasmas.
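
    The flavour of such a zero-dimensional extension can be illustrated with a generic four-variable predator-prey system. The couplings below are placeholders chosen for demonstration, not the actual Malkov-Diamond equations or their published extension:

```python
# Schematic: turbulence (prey) feeds two classes of coherent structure
# (predators), all driven by a relaxing gradient. Illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, b1, b2, d1, d2, c0, c1 = 1.0, 0.6, 1.0, 0.8, 0.4, 0.5, 0.2, 1.0

def rhs(t, y, drive=1.2):
    N, E1, E2, G = y          # turbulence, structure 1, structure 2, gradient
    dN = N * (G - 1.0) - a1 * N * E1 - a2 * N * E2
    dE1 = E1 * (b1 * N - d1)
    dE2 = E2 * (b2 * N - d2)
    dG = drive - G * (c0 + c1 * N)
    return [dN, dE1, dE2, dG]

sol = solve_ivp(rhs, (0, 400), [0.1, 0.05, 0.05, 0.5], rtol=1e-8)
print("final state:", sol.y[:, -1])
# Scanning 'drive' and the couplings probes whether the attractor is a fixed
# point or a limit cycle, the kind of question the paper calls 'robustness'.
```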

  12. Neither fixed nor random: weighted least squares meta-analysis.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2015-06-15

    This study challenges two core conventional meta-analysis methods: fixed effect and random effects. We show how and explain why an unrestricted weighted least squares estimator is superior to conventional random-effects meta-analysis when there is publication (or small-sample) bias and better than a fixed-effect weighted average if there is heterogeneity. Statistical theory and simulations of effect sizes, log odds ratios and regression coefficients demonstrate that this unrestricted weighted least squares estimator provides satisfactory estimates and confidence intervals that are comparable to random effects when there is no publication (or small-sample) bias and identical to fixed-effect meta-analysis when there is no heterogeneity. When there is publication selection bias, the unrestricted weighted least squares approach dominates random effects; when there is excess heterogeneity, it is clearly superior to fixed-effect meta-analysis. In practical applications, an unrestricted weighted least squares weighted average will often provide superior estimates to both conventional fixed and random effects. Copyright © 2015 John Wiley & Sons, Ltd.
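
    The unrestricted weighted least squares average described here reduces to a few lines of numpy: the point estimate equals the fixed-effect weighted average, while its variance is inflated (or deflated) by the estimated multiplicative dispersion. A minimal sketch with made-up effect sizes:

```python
import numpy as np

def wls_average(effect, se):
    """Unrestricted WLS weighted average: fixed-effect point estimate with
    variance scaled by the multiplicative dispersion (regression MSE)."""
    w = 1.0 / se**2
    theta = np.sum(w * effect) / np.sum(w)            # fixed-effect estimate
    k = len(effect)
    phi = np.sum(w * (effect - theta)**2) / (k - 1)   # dispersion, not floored at 1
    return theta, np.sqrt(phi / np.sum(w))            # unrestricted WLS std. error

# toy data: log odds ratios with standard errors (invented for illustration)
y = np.array([0.10, 0.35, 0.22, 0.55, 0.08])
se = np.array([0.12, 0.20, 0.15, 0.30, 0.10])
theta, se_theta = wls_average(y, se)
print(f"WLS average = {theta:.3f} +/- {1.96 * se_theta:.3f}")
```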

  13. Universal scaling of potential energy functions describing intermolecular interactions. II. The halide-water and alkali metal-water interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Werhahn, Jasper C.; Akase, Dai; Xantheas, Sotiris S.

    2014-08-14

    The scaled versions of the newly introduced [S. S. Xantheas and J. C. Werhahn, J. Chem. Phys. 141, 064117 (2014)] generalized forms of some popular potential energy functions (PEFs) describing intermolecular interactions (Mie, Lennard-Jones, Morse, and Buckingham exponential-6) have been used to fit the ab initio relaxed approach paths and fixed approach paths for the halide-water, X-(H2O), X = F, Cl, Br, I, and alkali metal-water, M+(H2O), M = Li, Na, K, Rb, Cs, interactions. The generalized forms of those PEFs have an additional parameter with respect to the original forms and produce fits to the ab initio data that are between one and two orders of magnitude better in the χ2 than the original PEFs. They were found to describe the long-range part, the minimum, and the repulsive wall of the respective potential energy surfaces quite accurately. Overall, the 4-parameter extended Morse (eM) and generalized Buckingham exponential-6 (gBe-6) potentials were found to best fit the ab initio data for these two classes of ion-water interactions. Finally, the fitted values of the parameter of the eM and gBe-6 PEFs that controls the repulsive wall of the potential correlate remarkably well with the ionic radii of the halide and alkali metal ions.
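
    As a hedged illustration of the fitting procedure, one might fit a sampled approach path with scipy as below. This uses the standard three-parameter Morse form; the paper's extended Morse adds a fourth shape parameter that is not reproduced here, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, De, a, re):
    """Standard Morse well: V(re) = -De, V -> 0 at large r."""
    return De * ((1.0 - np.exp(-a * (r - re)))**2 - 1.0)

# synthetic stand-in for an ab initio approach path (energies vs. separation)
r = np.linspace(2.2, 8.0, 60)
V = morse(r, 25.0, 1.1, 2.8) + np.random.default_rng(2).normal(0, 0.05, r.size)

popt, pcov = curve_fit(morse, r, V, p0=(20.0, 1.0, 3.0))
chi2 = np.sum((V - morse(r, *popt))**2)
print("De, a, re =", np.round(popt, 3), " chi^2 =", round(chi2, 4))
```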

  14. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    PubMed Central

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-01-01

    One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. Herein, an on-ground deployed visual approach is developed. This approach is well suited to landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long-baseline stereo architecture is proposed, offering an extendable baseline and a wide-angle field of view (FOV) compared with traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189
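
    The benefit of an extendable baseline can be seen from elementary stereo triangulation, where depth error grows quadratically with range and shrinks with baseline. A back-of-the-envelope sketch with assumed camera parameters (illustrative numbers, not the authors' calibration):

```python
# depth Z = f*B/disparity; error dZ ~ Z^2/(f*B) * d(disparity)
f_px = 2000.0                              # assumed focal length in pixels

def depth(baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def depth_error(baseline_m, Z, disp_noise_px=0.5):
    return Z**2 / (f_px * baseline_m) * disp_noise_px

for B in (0.5, 5.0, 20.0):                 # fixed rig vs. widely separated cameras
    print(f"B={B:>5.1f} m -> depth error at Z=200 m: {depth_error(B, 200.0):6.2f} m")
```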

  15. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    PubMed

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

    One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. Herein, an on-ground deployed visual approach is developed. This approach is well suited to landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long-baseline stereo architecture is proposed, offering an extendable baseline and a wide-angle field of view (FOV) compared with traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.

  16. 14. UPPER SHOES, FIXED SHOES, ROLLER SHOES, CENTER WEB, AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. UPPER SHOES, FIXED SHOES, ROLLER SHOES, CENTER WEB, AND ROLLER BED PLATES. (Also includes a sheet index and a schedule of parts). American Bridge Company, Ambridge Plant No. 5, sheet no. 4, dated April 7, 1928, order no. F5073. For U.S. Steel Products Company, Pacific Coast Depot, order no. SF578. For Southern Pacific Company, order no. 8873-P-28746. various scales. - Napa River Railroad Bridge, Spanning Napa River, east of Soscol Avenue, Napa, Napa County, CA

  17. Electrophoretic cell separation by means of microspheres

    NASA Technical Reports Server (NTRS)

    Smolka, A. J. K.; Nerren, B. H.; Margel, S.; Rembaum, A.

    1979-01-01

    The electrophoretic mobility of fixed human erythrocytes immunologically labeled with poly(vinylpyridine) or poly(glutaraldehyde) microspheres was reduced by approximately 40%. This observation was utilized in preparative scale electrophoretic separations of fixed human and turkey erythrocytes, the mobilities of which under normal physiological conditions do not differ sufficiently to allow their separation by continuous flow electrophoresis. We suggest that resolution in the electrophoretic separation of cell subpopulations, currently limited by finite and often overlapping mobility distributions, may be significantly enhanced by immunospecific labeling of target populations using microspheres.

  18. Financial Management of a Large Multi-site Randomized Clinical Trial

    PubMed Central

    Sheffet, Alice J.; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E.; Longbottom, Mary E.; Howard, Virginia J.; Marler, John R.; Brott, Thomas G.

    2014-01-01

    Background The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years’ funding ($21,112,866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2,500 randomized participants at 40 sites. Aims Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Methods Projections of the original grant’s fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant’s fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Results Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2,500 targeted sample size, 138 (5.5%) were randomized during the first five years and 1,387 (55.5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13,845) of the projected per-patient costs ($152,992) of the fixed model. Conclusions Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. PMID:24661748

  19. Financial management of a large multisite randomized clinical trial.

    PubMed

    Sheffet, Alice J; Flaxman, Linda; Tom, MeeLee; Hughes, Susan E; Longbottom, Mary E; Howard, Virginia J; Marler, John R; Brott, Thomas G

    2014-08-01

    The Carotid Revascularization Endarterectomy versus Stenting Trial (CREST) received five years' funding ($21 112 866) from the National Institutes of Health to compare carotid stenting to surgery for stroke prevention in 2500 randomized participants at 40 sites. Herein we evaluate the change in the CREST budget from a fixed to variable-cost model and recommend strategies for the financial management of large-scale clinical trials. Projections of the original grant's fixed-cost model were compared to the actual costs of the revised variable-cost model. The original grant's fixed-cost budget included salaries, fringe benefits, and other direct and indirect costs. For the variable-cost model, the costs were actual payments to the clinical sites and core centers based upon actual trial enrollment. We compared annual direct and indirect costs and per-patient cost for both the fixed and variable models. Differences between clinical site and core center expenditures were also calculated. Using a variable-cost budget for clinical sites, funding was extended by no-cost extension from five to eight years. Randomizing sites tripled from 34 to 109. Of the 2500 targeted sample size, 138 (5·5%) were randomized during the first five years and 1387 (55·5%) during the no-cost extension. The actual per-patient costs of the variable model were 9% ($13 845) of the projected per-patient costs ($152 992) of the fixed model. Performance-based budgets conserve funding, promote compliance, and allow for additional sites at modest additional cost. Costs of large-scale clinical trials can thus be reduced through effective management without compromising scientific integrity. © 2014 The Authors. International Journal of Stroke © 2014 World Stroke Organization.

  20. Fixed base simulator study of an externally blown flap STOL transport airplane during approach and landing

    NASA Technical Reports Server (NTRS)

    Grantham, W. D.; Nguyen, L. T.; Patton, J. M., Jr.; Deal, P. L.; Champine, R. A.; Carter, C. R.

    1972-01-01

    A fixed-base simulator study was conducted to determine the flight characteristics of a representative STOL transport having a high wing and equipped with an external-flow jet flap in combination with four high-bypass-ratio fan-jet engines during the approach and landing. Real-time digital simulation techniques were used. The computer was programed with equations of motion for six degrees of freedom and the aerodynamic inputs were based on measured wind-tunnel data. A visual display of a STOL airport was provided for simulation of the flare and touchdown characteristics. The primary piloting task was an instrument approach to a breakout at a 200-ft ceiling with a visual landing.

  1. Planning satellite communication services and spectrum-orbit utilization

    NASA Technical Reports Server (NTRS)

    Sawitz, P. H.

    1982-01-01

    The relationship between approaches to planning satellite communication services and spectrum-orbit utilization is considered, with emphasis on the fixed-satellite and the broadcasting-satellite services. It is noted that there are several possible approaches to planning space services, differing principally in the rigidity with which technical parameters are prescribed, in the time for which a plan remains in force, and in the procedures adopted for implementation and modifications. With some planning approaches, spectrum-orbit utilization is fixed at the time the plan is made. Others provide for greater flexibility by making it possible to postpone some decisions on technical parameters. In addition, the two political questions of what is equitable access and how it can be guaranteed in practice play an important role.

  2. Networked high-speed auroral observations combined with radar measurements for multi-scale insights

    NASA Astrophysics Data System (ADS)

    Hirsch, M.; Semeter, J. L.

    2015-12-01

    Networks of ground-based instruments to study terrestrial aurora for the purpose of analyzing particle precipitation characteristics driving the aurora have been established. Additional funding is pouring into future ground-based auroral observation networks consisting of combinations of tossable, portable, and fixed installation ground-based legacy equipment. Our approach to this problem using the High Speed Tomography (HiST) system combines tightly-synchronized filtered auroral optical observations capturing temporal features of order 10 ms with supporting measurements from incoherent scatter radar (ISR). ISR provides a broader spatial context up to order 100 km laterally on one minute time scales, while our camera field of view (FOV) is chosen to be order 10 km at auroral altitudes in order to capture 100 m scale lateral auroral features. The dual-scale observations of ISR and HiST fine-scale optical observations may be coupled through a physical model using linear basis functions to estimate important ionospheric quantities such as electron number density in 3-D (time, perpendicular and parallel to the geomagnetic field).Field measurements and analysis using HiST and PFISR are presented from experiments conducted at the Poker Flat Research Range in central Alaska. Other multiscale configuration candidates include supplementing networks of all-sky cameras such as THEMIS with co-locations of HiST-like instruments to fuse wide FOV measurements with the fine-scale HiST precipitation characteristic estimates. Candidate models for this coupling include GLOW and TRANSCAR. Future extensions of this work may include incorporating line of sight total electron count estimates from ground-based networks of GPS receivers in a sensor fusion problem.

  3. Long-Term Stability of WC-C Peritectic Fixed Point

    NASA Astrophysics Data System (ADS)

    Khlevnoy, B. B.; Grigoryeva, I. A.

    2015-03-01

    The tungsten carbide-carbon peritectic (WC-C) melting transition is an attractive high-temperature fixed point. Earlier investigations showed high repeatability, a small melting range, low sensitivity to impurities, and robustness of WC-C, which make it a prospective candidate for the highest fixed point of the temperature scale. This paper presents a further study of the fixed point, namely an investigation of the long-term stability of the WC-C melting temperature. For this purpose, a new WC-C cell of the blackbody type was built using tungsten powder of 99.999 % purity. The stability of the cell was investigated over 50 h of aging at the cell working temperature, which took 140 melting/freezing cycles. The investigation was based on comparing the WC-C test cell with a reference Re-C fixed-point cell, which reduces the influence of any instability of the radiation thermometer. It was shown that after the aging period the deviation of the WC-C cell melting temperature remained within the measurement uncertainty.

  4. Development of Carbon Dioxide Removal Systems for Advanced Exploration Systems

    NASA Technical Reports Server (NTRS)

    Knox, James C.; Trinh, Diep; Gostowski, Rudy; King, Eric; Mattox, Emily M.; Watson, David; Thomas, John

    2012-01-01

    "NASA's Advanced Exploration Systems (AES) program is pioneering new approaches for rapidly developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit" (NASA 2012). These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must not only blast out of earth's gravity well as during the Apollo moon missions, but also launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by seeking more robust pelletized sorbents, evaluating structured sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach, which is then implemented in a full-scale integrated atmosphere revitalization test. This paper describes the carbon dioxide (CO2) removal hardware design and sorbent screening and characterization effort in support of the Atmosphere Resource Recovery and Environmental Monitoring (ARREM) project within the AES program. A companion paper discusses development of atmosphere revitalization models and simulations for this project.

  5. Developments in Atmosphere Revitalization Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Knox, James C.; Kittredge, Kenneth; Coker, Robert F.; Cummings, Ramona; Gomez, Carlos F.

    2012-01-01

    "NASA's Advanced Exploration Systems (AES) program is pioneering new approaches for rapidly developing prototype systems, demonstrating key capabilities, and validating operational concepts for future human missions beyond Earth orbit" (NASA 2012). These forays beyond the confines of earth's gravity will place unprecedented demands on launch systems. They must not only blast out of earth's gravity well as during the Apollo moon missions, but also launch the supplies needed to sustain a crew over longer periods for exploration missions beyond earth's moon. Thus all spacecraft systems, including those for the separation of metabolic carbon dioxide and water from a crewed vehicle, must be minimized with respect to mass, power, and volume. Emphasis is also placed on system robustness both to minimize replacement parts and ensure crew safety when a quick return to earth is not possible. Current efforts are focused on improving the current state-of-the-art systems utilizing fixed beds of sorbent pellets by evaluating structured sorbents, seeking more robust pelletized sorbents, and examining alternate bed configurations to improve system efficiency and reliability. These development efforts combine testing of sub-scale systems and multi-physics computer simulations to evaluate candidate approaches, select the best performing options, and optimize the configuration of the selected approach, which is then implemented in a full-scale integrated atmosphere revitalization test. This paper describes the development of atmosphere revitalization models and simulations. A companion paper discusses the hardware design and sorbent screening and characterization effort in support of the Atmosphere Revitalization Recovery and Environmental Monitoring (ARREM) project within the AES program.

  6. Kinota: An Open-Source NoSQL implementation of OGC SensorThings for large-scale high-resolution real-time environmental monitoring

    NASA Astrophysics Data System (ADS)

    Miles, B.; Chepudira, K.; LaBar, W.

    2017-12-01

    The Open Geospatial Consortium (OGC) SensorThings API (STA) specification, ratified in 2016, is a next-generation open standard for enabling real-time communication of sensor data. Building on over a decade of OGC Sensor Web Enablement (SWE) standards, STA offers a rich data model that can represent a range of sensor and phenomenon types (e.g. fixed sensors sensing fixed phenomena, fixed sensors sensing moving phenomena, mobile sensors sensing fixed phenomena, and mobile sensors sensing moving phenomena) and is data agnostic. Additionally, and in contrast to previous SWE standards, STA is developer-friendly, as is evident from its convenient JSON serialization and expressive OData-based query language (with support for geospatial queries); with its Message Queue Telemetry Transport (MQTT) extension, STA is also well suited to efficient real-time data publishing and discovery. All these attributes make STA potentially useful in environmental monitoring sensor networks. Here we present Kinota(TM), an open-source NoSQL implementation of OGC SensorThings for large-scale, high-resolution, real-time environmental monitoring. Kinota, which roughly stands for Knowledge from Internet of Things Analyses, relies on Cassandra as its underlying data store, a horizontally scalable, fault-tolerant open-source database that is often used to store time-series data for Big Data applications (though integration with other NoSQL or relational databases is possible). With this foundation, Kinota can scale to store data from an arbitrary number of sensors collecting data every 500 milliseconds. Additionally, the Kinota architecture is very modular, allowing adopters to replace parts of the existing implementation where desirable. The architecture is also highly portable, providing the flexibility to choose between cloud providers such as Azure, Amazon, and Google. The scalable, flexible, and cloud-friendly architecture of Kinota makes it ideal for next-generation large-scale, high-resolution, real-time environmental monitoring networks used in domains such as hydrology, geomorphology, and geophysics, as well as in management applications such as flood early warning and regulatory enforcement.
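
    Because Kinota implements the standard STA v1.0 interface, any generic client should be able to query it. In the sketch below the base URL is hypothetical, while the resource paths and OData options ($orderby, $top, $filter with the st_within geo-function) are taken from the SensorThings specification:

```python
import requests

BASE = "https://sta.example.org/v1.0"       # hypothetical Kinota deployment

# latest 10 observations of one datastream, newest first
r = requests.get(
    f"{BASE}/Datastreams(42)/Observations",
    params={"$orderby": "phenomenonTime desc", "$top": "10"},
    timeout=30,
)
r.raise_for_status()
for obs in r.json()["value"]:
    print(obs["phenomenonTime"], obs["result"])

# geospatial discovery: Things whose location lies within a polygon
r = requests.get(
    f"{BASE}/Things",
    params={"$filter": "st_within(Locations/location, "
                       "geography'POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))')"},
    timeout=30,
)
r.raise_for_status()
print(len(r.json()["value"]), "matching Things")
```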

  7. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces

    NASA Astrophysics Data System (ADS)

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-01

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
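
    The core idea, truncating conjugate gradient at a fixed order so that the cost (and the analytical derivative structure) is identical on every step, can be sketched with a toy symmetric positive-definite system. This illustrates the algorithm only, not the authors' production implementation:

```python
# CG truncated at a fixed order for T mu = E, where in the polarization
# problem T would be the dipole interaction matrix. Toy SPD stand-in here.
import numpy as np

def tcg(T, E, order=2):
    """Conjugate gradient stopped after exactly `order` iterations (TCG-n);
    no convergence test, so the operation count is fixed on every MD step."""
    mu = np.zeros_like(E)
    r = E - T @ mu
    p = r.copy()
    for _ in range(order):
        Tp = T @ p
        alpha = (r @ r) / (p @ Tp)
        mu += alpha * p
        r_new = r - alpha * Tp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return mu

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 30))
T = A @ A.T + 30 * np.eye(30)          # well-conditioned SPD matrix
E = rng.normal(size=30)
for n in (1, 2, 3):
    err = np.linalg.norm(tcg(T, E, n) - np.linalg.solve(T, E))
    print(f"TCG-{n}: |mu - mu_exact| = {err:.2e}")
```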

  8. The truncated conjugate gradient (TCG), a non-iterative/fixed-cost strategy for computing polarization in molecular dynamics: Fast evaluation of analytical forces.

    PubMed

    Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip

    2017-10-28

    In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.

  9. An approach for fixed coefficient RNS-based FIR filter

    NASA Astrophysics Data System (ADS)

    Srinivasa Reddy, Kotha; Sahoo, Subhendu Kumar

    2017-08-01

    In this work, an efficient new modular multiplication method for the {2^k - 1, 2^k, 2^(k+1) - 1} moduli set is proposed to implement a residue number system (RNS)-based fixed-coefficient finite impulse response (FIR) filter. The new multiplication approach reduces the number of partial products by using a pre-loaded product block. The reduction in partial products with the proposed modular multiplication improves the clock frequency and reduces area and power compared with conventional modular multiplication. Further, the present approach eliminates the binary-to-residue converter circuit that is usually needed at the front end of an RNS-based system. In this work, two fixed-coefficient filter architectures with the new modular multiplication approach are proposed. The filters are implemented using the Verilog hardware description language. The United Microelectronics Corporation 90 nm technology library has been used for synthesis, and results for area, power, and delay are obtained with the Cadence register transfer level compiler. The power-delay product (PDP) is also considered for performance comparison among the proposed filters. One of the proposed architectures improves the PDP by 60.83% compared with a filter implemented with a conventional modular multiplier. The filters' functionality is validated with the help of the Altera DSP Builder.
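
    The channel-wise arithmetic that makes RNS filters attractive can be demonstrated with a toy example for this moduli set. The sketch below shows residue-domain multiply-accumulate and Chinese remainder theorem reconstruction, leaving out the hardware-level pre-loaded product blocks the paper optimises:

```python
from math import prod

k = 4
MODULI = (2**k - 1, 2**k, 2**(k + 1) - 1)      # (15, 16, 31), pairwise coprime

def crt(residues):
    """Chinese remainder theorem reconstruction over the dynamic range."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)            # modular inverse (Python 3.8+)
    return x % M

# fixed-coefficient multiply-accumulate, done independently in each channel,
# as a stand-in for the filter's per-modulus datapath
coeffs, samples = [3, 5, 2], [7, 1, 4]
acc = [0] * len(MODULI)
for c, s in zip(coeffs, samples):
    for i, m in enumerate(MODULI):
        acc[i] = (acc[i] + (c % m) * (s % m)) % m
print(crt(acc), sum(c * s for c, s in zip(coeffs, samples)))   # both print 34
```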

  10. Fixing the Sky: Why the History of Climate Engineering Matters (Invited)

    NASA Astrophysics Data System (ADS)

    Fleming, J. R.

    2010-12-01

    What shall we do about climate change? Is a planetary-scale technological fix possible or desirable? The joint AMS and AGU “Policy Statement on Geoengineering the Climate System” (2009) recommends “Coordinated study of historical, ethical, legal, and social implications of geoengineering that integrates international, interdisciplinary, and intergenerational issues and perspectives and includes lessons from past efforts to modify weather and climate.” I wrote Fixing the Sky: The Checkered History of Weather and Climate Control (Columbia University Press, 2010) with this recommendation in mind, to be fully accessible to scientists, policymakers, and the general public, while meeting or exceeding the scholarly standards of history. It is my intent, with this book, to bring history to bear on public policy issues.

  11. Wave Driven Fluid-Sediment Interactions over Rippled Beds

    NASA Astrophysics Data System (ADS)

    Foster, Diane; Nichols, Claire

    2008-11-01

    Empirical investigations relating vortex shedding over rippled beds to oscillatory flows date back to Darwin in 1883. Observations of the shedding induced by oscillating forcing over fixed beds have shown vortical structures to reach maximum strength at 90 degrees when the horizontal velocity is largest. The objective of this effort is to examine the vortex generation and ejection over movable rippled beds in a full-scale, free surface wave environment. Observations of the two-dimensional time-varying velocity field over a movable sediment bed were obtained with a submersible Particle Image Velocimetry (PIV) system in two wave flumes. One wave flume was full scale and had a natural sand bed and the other flume had an artificial sediment bed with a specific gravity of 1.6. Full scale observations over an irregularly rippled bed show that the vortices generated during offshore directed flow over the steeper bed form slope were regularly ejected into the water column and were consistent with conceptual models of the oscillatory flow over a backward facing step. The results also show that vortices remain coherent during ejection when the background flow stalls (i.e. both the velocity and acceleration temporarily approach zero). These results offer new insight into fluid sediment interaction over rippled beds.

  12. Defining Simple nD Operations Based on Prismatic nD Objects

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2016-10-01

    An alternative to the traditional approaches to model separately 2D/3D space, time, scale and other parametrisable characteristics in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding in dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes - analogous to prisms in 3D -, (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
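
    The first of these operation sets is straightforward to express at the vertex level with homogeneous (n+1)x(n+1) matrices. A minimal dimension-independent sketch (a generic construction, not the authors' code):

```python
import numpy as np

def translation(v):
    n = len(v)
    T = np.eye(n + 1)
    T[:n, n] = v
    return T

def scaling(s):
    return np.diag(list(s) + [1.0])

def rotation(n, i, j, theta):
    """Rotation in the plane spanned by axes i and j of n-space."""
    R = np.eye(n + 1)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j], R[j, i] = -np.sin(theta), np.sin(theta)
    return R

# a 4D vertex (e.g. x, y, z plus a scale or time dimension), transformed in one go
p = np.array([1.0, 2.0, 0.5, 3.0, 1.0])            # homogeneous coordinates
M = translation([1, 0, 0, -1]) @ rotation(4, 0, 3, np.pi / 6) @ scaling([2, 2, 2, 1])
print((M @ p)[:4])
```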

  13. SU(2) lattice gluon propagator: Continuum limit, finite-volume effects, and infrared mass scale m_IR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornyakov, V. G.; Mitrjushkin, V. K.; Mueller-Preussker, M.

    2010-03-01

    We study the scaling behavior and finite (physical) volume effects as well as the Gribov copy dependence of the SU(2) Landau gauge gluon propagator on the lattice. Our physical lattice sizes range from (3.0 fm)^4 to (7.3 fm)^4. Considering lattices with decreasing lattice spacing but fixed physical volume, we confirm (nonperturbative) multiplicative renormalizability and the approach to the continuum limit for the renormalized gluon propagator D_ren(p) at momenta |p| ≳ 0.6 GeV. The finite-volume effects and Gribov copy influence turn out to be small in this region. On the contrary, in the deeper infrared we found the Gribov copy influence strong and finite-volume effects which still require special attention. The gluon propagator does not seem to be consistent with a simple pole-like behavior ~ (p^2 + m_g^2)^(-1) for momenta |p| ≲ 0.6 GeV. Instead, a Gaussian-type fit works very well in this region. From its width, for a physical volume of (5.0 fm)^4, we estimate a corresponding infrared (mass) scale to be m_IR ≈ 0.7 GeV.

  14. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    NASA Astrophysics Data System (ADS)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine the sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current state to some new state along a trajectory, in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change stochastically through time. We wish to use such time series in a coastline evolution model to test the sensitivity of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as multivariate Gaussian, with the mean a function of a fixed effect, plus two random components: one independently and identically distributed (iid), the other temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, with the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect: in one the correlation decays exponentially with time; in the second (spherical) model it cuts off at a temporal range. Having fitted the model, multiple realisations were generated; the random effects were simulated by specifying a covariance matrix for the simulated values, with the estimated parameters. The Cholesky factorisation of the covariance matrix was computed and realisations of the random component of the model were generated by pre-multiplying a vector of iid standard Gaussian variables by the lower triangular factor. The resulting random variate was added to the mean value computed from the fixed effects, and the result back-transformed to the original scale of measurement. Realistic simulations result from the approach described above. Background exploratory data analysis was undertaken on 20-day sets of 30-minute buoy data, selected from days 5-24 of January, April, July and October 2011, to elucidate daily to weekly variations and to keep the numerical analysis computationally tractable. Work remains to be done to develop suitable models for synthetic directional data. We suggest that the general principles of the method will have applications in other geomorphological modelling endeavours requiring time series of stochastically variable environmental parameters.
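
    The simulation recipe in this abstract (periodic mean, correlated random effect via Cholesky factorisation, iid nugget, back-transformation) is easy to reproduce in outline. The sketch below uses placeholder values in place of the fitted parameters:

```python
import numpy as np

n, dt = 960, 0.5                   # 20 days of 30-minute records, time in hours
t = np.arange(n) * dt
mu = 1.2 + 0.4 * np.sin(2 * np.pi * t / 8760.0)    # periodic fixed effect (annual, 8760 h)

sigma2_c, sigma2_iid, range_h = 0.30, 0.05, 36.0   # placeholder variance components
lags = np.abs(t[:, None] - t[None, :])
C = sigma2_c * np.exp(-lags / range_h)             # exponentially decaying correlation
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))      # jitter for numerical stability

rng = np.random.default_rng(7)
z = mu + L @ rng.standard_normal(n) + np.sqrt(sigma2_iid) * rng.standard_normal(n)
hs = z        # apply the inverse Box-Cox here if the model was fitted on transformed data
print(hs[:5].round(2))
```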

  15. A reduced order, test verified component mode synthesis approach for system modeling applications

    NASA Astrophysics Data System (ADS)

    Butland, Adam; Avitabile, Peter

    2010-05-01

    Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed-interface normal modes, and those based on a combination of free-interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed-interface normal modes is the difficulty of obtaining the required information from testing; as a result, constraint-mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced-order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test-verified, reduced-order model to provide the boundary conditions necessary for constraint modes and fixed-interface normal modes. The CMS approach is then used with this test-verified, reduced-order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique, with both numerical and simulated experimental components describing the system and validating the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are present in any test, the measured data are further processed to remove contaminants before use. The final case, using improved data with the reduced-order, test-verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. The strengths and weaknesses of the technique are discussed.
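
    For context, the constraint modes and fixed-interface normal modes referred to above are the ingredients of the classic Craig-Bampton reduction, which a short numpy/scipy sketch can make concrete. This is a generic textbook version on a toy spring-mass chain, not the authors' test-verified variant:

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, b_dofs, n_modes):
    ndof = K.shape[0]
    i_dofs = [d for d in range(ndof) if d not in b_dofs]
    Kii, Kib = K[np.ix_(i_dofs, i_dofs)], K[np.ix_(i_dofs, b_dofs)]
    Mii = M[np.ix_(i_dofs, i_dofs)]
    # constraint modes: static interior response to unit boundary motion
    Psi = -np.linalg.solve(Kii, Kib)
    # fixed-interface normal modes: eigenmodes with boundary DOFs clamped
    w2, Phi = eigh(Kii, Mii)               # squared frequencies, ascending
    Phi = Phi[:, :n_modes]
    # reduction basis T: x = T @ [x_boundary, q_modal]
    nb = len(b_dofs)
    T = np.zeros((ndof, nb + n_modes))
    T[np.ix_(b_dofs, range(nb))] = np.eye(nb)
    T[np.ix_(i_dofs, range(nb))] = Psi
    T[np.ix_(i_dofs, range(nb, nb + n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T        # reduced component matrices

# toy 5-DOF spring-mass chain with one boundary DOF at each end
K = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
M = np.eye(5)
Kr, Mr = craig_bampton(K, M, b_dofs=[0, 4], n_modes=2)
print(Kr.shape)    # (4, 4): 2 boundary DOFs + 2 fixed-interface modes
```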

  16. Is the SMART approach better than other treatment approaches for prevention of asthma exacerbations? A meta-analysis.

    PubMed

    Agarwal, R; Khan, A; Aggarwal, A N; Gupta, D

    2009-12-01

    The combination of inhaled corticosteroids (ICS) and long-acting beta2 agonists (LABA) has been used as a single inhaler both for maintenance and reliever therapy in asthma, the SMART approach. The administration of additional corticosteroid with each reliever inhalation in response to symptoms is expected to provide better control of airway inflammation. The aim of this meta-analysis was to evaluate the efficacy and safety of the SMART approach versus other approaches in the management of asthma in preventing asthma exacerbations. We searched the MEDLINE and EMBASE databases for studies that have reported exacerbations in the SMART group versus the control group. We calculated the odds ratio (OR) and 95% confidence intervals (CI) to assess the exacerbations in the two groups and pooled the results using a random-effects model. Our search yielded eight studies. The use of the SMART approach compared to a fixed-dose ICS-LABA combination significantly decreased the odds of a severe exacerbation (OR 0.65; 95% CI, 0.53-0.80) and of a severe exacerbation requiring hospitalization/ER treatment (OR 0.69; 95% CI, 0.58-0.83). The use of the SMART approach compared to fixed-dose ICS also significantly decreased the odds of a severe exacerbation (OR 0.52; 95% CI, 0.45-0.61) and of a severe exacerbation requiring medical intervention (OR 0.52; 95% CI, 0.42-0.65). The occurrence of adverse events was similar in the two groups. There was some evidence of statistical heterogeneity. The SMART approach using formoterol-budesonide is superior in preventing exacerbations when compared to traditional therapy with fixed-dose ICS or an ICS-LABA combination, without any increase in adverse events.

  17. Costs and Their Assessment to Users of a Medical Library, Part III: Allocating Fixed Joint Costs.

    ERIC Educational Resources Information Center

    Bres, E.; And Others

    Part III of the study describes a model for completing the cost assessment (justification) process by accounting for the fixed joint costs; a "fair" and equitable mechanism is developed in the context of game-theoretic approach. An n-person game is constructed in which the "players" are the institutions served by the library,…

  18. Can Response Speed Be Fixed Experimentally, and Does This Lead to Unconfounded Measurement of Ability?

    ERIC Educational Resources Information Center

    Bolsinova, Maria; Tijmstra, Jesper

    2015-01-01

    Goldhammer (this issue) proposes an interesting approach to dealing with the speededness of item responses. Rather than modeling speed as a latent variable that varies from person to person, he proposes to use experimental conditions that are expected to fix the speed, thereby eliminating individual differences on this dimension in order to make…

  19. Finding and Fixing Mistakes: Do Checklists Work for Clinicians with Different Levels of Experience?

    ERIC Educational Resources Information Center

    Sibbald, Matthew; De Bruin, Anique B. H.; van Merrienboer, Jeroen J. G.

    2014-01-01

    Checklists that focus attention on key variables might allow clinicians to find and fix their mistakes. However, whether this approach can be applied to clinicians of varying degrees of expertise is unclear. Novice and expert clinicians vary in their predominant reasoning processes and in the types of errors they commit. We studied 44 clinicians…

  20. Revisiting Fixed- and Random-Effects Models: Some Considerations for Policy-Relevant Education Research

    ERIC Educational Resources Information Center

    Clarke, Paul; Crawford, Claire; Steele, Fiona; Vignoles, Anna

    2015-01-01

    The use of fixed (FE) and random effects (RE) in two-level hierarchical linear regression is discussed in the context of education research. We compare the robustness of FE models with the modelling flexibility and potential efficiency of those from RE models. We argue that the two should be seen as complementary approaches. We then compare both…

  1. Underwater Light Regimes in Rivers from Multiple Measurement Approaches

    NASA Astrophysics Data System (ADS)

    Gardner, J.; Ensign, S.; Houser, J.; Doyle, M.

    2017-12-01

    Underwater light regimes are complex over space and time. Light in rivers is less well understood than in other aquatic systems, yet light is often the limiting resource and a fundamental control on many biological and physical processes in riverine systems. We combined multiple measurement approaches (fixed-site and flowpath) to understand underwater light regimes. We measured vertical light profiles over time (fixed-site) with stationary buoys and over space and time (flowpath) with Lagrangian neutrally buoyant sensors in two large US rivers: the Upper Mississippi River in Wisconsin, USA, and the Neuse River in North Carolina, USA. Fixed-site data showed that light extinction coefficients, and therefore the depth of the euphotic zone, varied up to three-fold within a day. Flowpath data revealed the stochastic nature of light regimes from the perspective of a neutrally buoyant particle as it moves through the water column. On average, particles were in the euphotic zone 15-50% of the time. Combining flowpath and fixed-site data allowed spatial disaggregation of a river reach to determine whether changes in the light regime were due to space or time, and supported the development of a conceptual model of the dynamic euphotic zone of rivers.

  2. An Item Response Unfolding Model for Graphic Rating Scales

    ERIC Educational Resources Information Center

    Liu, Ying

    2009-01-01

    The graphic rating scale, a measurement tool used in many areas of psychology, usually takes a form of a fixed-length line segment, with both ends bounded and labeled as extreme responses. The raters mark somewhere on the line, and the length of the line segment from one endpoint to the mark is taken as the measure. An item response unfolding…

  3. A computerized bucking trainer for optimally bucking hardwoods

    Treesearch

    Scott Noble; Blair Orr; Philip A. Araman; John Baumgras; James B. Pickens

    2000-01-01

    The bucking of hardwood stems constitutes the initial manufacturing decision for hardwood lumber production. Each bucking cut creates a log of fixed grade and scale. The grade and scale of each log created by the bucker determine the quantity and quality of potential lumber, which in turn determine the value of the log within a given market. As a result, bucking decisions...

  4. Planning Alternative Organizational Frameworks For a Large Scale Educational Telecommunications System Served by Fixed/Broadcast Satellites. Memorandum Number 73/3.

    ERIC Educational Resources Information Center

    Walkmeyer, John

    Considerations relating to the design of organizational structures for development and control of large scale educational telecommunications systems using satellites are explored. The first part of the document deals with four issues of system-wide concern. The first is user accessibility to the system, including proximity to entry points, ability…

  5. Transverse beam dynamics in non-linear Fixed Field Alternating Gradient accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haj, Tahar M.; Meot, F.

    2016-03-02

    In this paper, we present some aspects of the transverse beam dynamics in Fixed Field Ring Accelerators (FFRA): starting from basic principles, we derive the linearized transverse particle equations of motion for FFRA; essentially FFAGs and cyclotrons are considered here. This is a simple extension of a previous work valid for linear lattices, which we generalized by including the bending terms to ensure its correctness for FFAG lattices. The space charge term (the contribution of the internal Coulomb forces of the beam) is included as well, although it is not discussed here. The emphasis is on the scaling FFAG type: a collaboration work is undertaken in view of better understanding the properties of the 150 MeV scaling FFAG at KURRI in Japan, and progress towards high intensity operation. Some results of the benchmarking work between different codes are presented. Analysis of certain types of field imperfections revealed some interesting features about this machine that explain some of the experimental results and generalize the concept of a scaling FFAG to a non-scaling one for which the tune variations obey a well-defined law.

  6. Isotope Mass Scaling of Turbulence and Transport

    NASA Astrophysics Data System (ADS)

    McKee, George; Yan, Zheng; Gohil, Punit; Luce, Tim; Rhodes, Terry

    2017-10-01

    The dependence of turbulence characteristics and transport scaling on the fuel ion mass has been investigated in a set of hydrogen (A = 1) and deuterium (A = 2) plasmas on DIII-D. The normalized energy confinement time (B·τE) is two times lower in hydrogen (H) plasmas compared to similar deuterium (D) plasmas. Dimensionless parameters other than ion mass (A), including ρ*, q95, Te/Ti, βN, ν*, and Mach number, were maintained nearly fixed. Profiles of electron density, electron and ion temperature, and toroidal rotation were well matched. The normalized turbulence amplitude (ñ/n) is approximately twice as large in H as in D, which may partially explain the increased transport and reduced energy confinement time. Radial correlation lengths of low-wavenumber density turbulence in hydrogen are similar to or slightly larger than correlation lengths in the deuterium plasmas and generally scale with the ion gyroradius, which was maintained nearly fixed in this dimensionless scan. Predicting energy confinement in D-T burning plasmas requires an understanding of the large and beneficial isotope scaling of transport. Supported by USDOE under DE-FG02-08ER54999 and DE-FC02-04ER54698.

  7. Employment insecurity and employees' health in Denmark.

    PubMed

    Cottini, Elena; Ghinetti, Paolo

    2018-02-01

    We use register data for Denmark (IDA) merged with the Danish Work Environment Cohort Survey (1995, 2000, and 2005) to estimate the effect of perceived employment insecurity on perceived health for a sample of Danish employees. We consider two health measures from the SF-36 Health Survey Instrument: a vitality scale for general well-being and a mental health scale. We first analyse a summary measure of employment insecurity. Instrumental variables-fixed effects estimates that use firm workforce changes as a source of exogenous variation show that one additional dimension of insecurity causes a shift from the median to the 25th percentile in the mental health scale and to the 30th percentile in that of energy/vitality. It also increases by about 6 percentage points the probability of developing severe mental health problems. Looking at single insecurity dimensions by naïve fixed effects, uncertainty associated with the current job is important for mental health. Employability has a sizeable relationship with health and is the only insecurity dimension that matters for the energy and vitality scale. Danish employees who fear involuntary firm-internal mobility experience worse mental health. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Gauge coupling unification and nonequilibrium thermal dark matter.

    PubMed

    Mambrini, Yann; Olive, Keith A; Quevillon, Jérémie; Zaldívar, Bryan

    2013-06-14

    We study a new mechanism for the production of dark matter in the Universe which does not rely on thermal equilibrium. Dark matter is populated from the thermal bath subsequent to inflationary reheating via a massive mediator whose mass is above the reheating scale T_RH. To this end, we consider models with an extra U(1) gauge symmetry broken at some intermediate scale (M_int ≃ 10^10-10^12 GeV). We show that not only does the model allow for gauge coupling unification (at a higher scale associated with grand unification) but it can provide a dark matter candidate which is a standard model singlet but charged under the extra U(1). The intermediate scale gauge boson(s) which are predicted in several E6/SO(10) constructions can be a natural mediator between dark matter and the thermal bath. We show that the dark matter abundance, while never having achieved thermal equilibrium, is fixed shortly after the reheating epoch by the relation T_RH^3/M_int^4. As a consequence, we show that the unification of gauge couplings which determines M_int also fixes the reheating temperature, which can be as high as T_RH ≃ 10^11 GeV.
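
    To make the quoted scaling concrete, here is a minimal sketch of how the frozen-in abundance varies with the reheating temperature and mediator scale; the overall normalization and the sample values are illustrative assumptions, not numbers from the paper.

```python
# Minimal sketch of the quoted scaling: the frozen-in dark matter abundance
# goes as T_RH^3 / M_int^4 (all values below are illustrative, not the paper's).
def relative_abundance(t_rh_gev, m_int_gev):
    """Abundance up to an overall constant we do not attempt to fix."""
    return t_rh_gev**3 / m_int_gev**4

base = relative_abundance(1e11, 1e11)
# Raising the mediator scale tenfold at fixed T_RH suppresses the yield by 10^4:
print(relative_abundance(1e11, 1e12) / base)  # -> 1e-4
```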

  9. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit

    PubMed Central

    Hoover, Stephen; Jackson, Eric V.; Paul, David; Locke, Robert

    2016-01-01

    Background: Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, which uses the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Objective: Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. Methods: We used five years of retrospective daily NICU census data for model development (January 2008 – December 2012, N=1827 observations) and one year of data for validation (January – December 2013, N=365 observations). Best-fitting ARIMA and linear regression models were applied to various 7-day prediction periods and compared using error statistics. Results: The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14; and a seasonal linear regression model. The proposed forecasting models yielded on average a 36.49% improvement in forecasting accuracy over the fixed average census approach. Conclusions: Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning. PMID:27437040
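
    A minimal sketch of fitting one of the named model forms with the statsmodels SARIMAX interface follows; the census series below is synthetic stand-in data, since the paper's data are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
# Synthetic daily census with a weak linear trend and weekly seasonality
# (a stand-in for the NICU series, which is not reproduced here).
t = np.arange(5 * 365)
census = 40 + 0.002 * t + 3 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, t.size)

# One of the model forms named in the abstract: ARIMA(1,0,0)x(1,1,2)_7.
model = SARIMAX(census, order=(1, 0, 0), seasonal_order=(1, 1, 2, 7))
fit = model.fit(disp=False)

forecast = fit.forecast(steps=7)      # 7-day-ahead census forecast
baseline = np.full(7, census.mean())  # the "fixed average census" alternative
print(forecast.round(1), baseline.round(1))
```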

  10. Time Series Analysis for Forecasting Hospital Census: Application to the Neonatal Intensive Care Unit.

    PubMed

    Capan, Muge; Hoover, Stephen; Jackson, Eric V; Paul, David; Locke, Robert

    2016-01-01

    Accurate prediction of future patient census in hospital units is essential for patient safety, health outcomes, and resource planning. Forecasting census in the Neonatal Intensive Care Unit (NICU) is particularly challenging due to limited ability to control the census and clinical trajectories. The fixed average census approach, which uses the average census from the previous year, is a forecasting alternative used in clinical practice, but has limitations due to census variations. Our objectives are to: (i) analyze the daily NICU census at a single health care facility and develop census forecasting models, (ii) explore models with and without patient data characteristics obtained at the time of admission, and (iii) evaluate accuracy of the models compared with the fixed average census approach. We used five years of retrospective daily NICU census data for model development (January 2008 - December 2012, N=1827 observations) and one year of data for validation (January - December 2013, N=365 observations). Best-fitting models of ARIMA and linear regression were applied to various 7-day prediction periods and compared using error statistics. The census showed a slightly increasing linear trend. Best-fitting models included a non-seasonal model, ARIMA(1,0,0); seasonal ARIMA models, ARIMA(1,0,0)x(1,1,2)7 and ARIMA(2,1,4)x(1,1,2)14; and a seasonal linear regression model. The proposed forecasting models resulted on average in a 36.49% improvement in forecasting accuracy compared with the fixed average census approach. Time series models provide higher prediction accuracy under different census conditions compared with the fixed average census approach. The presented methodology is easily applicable in clinical practice, can be generalized to other care settings, supports short- and long-term census forecasting, and informs staff resource planning.

  11. Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.

    PubMed

    Drinkwater, Benjamin; Charleston, Michael A

    2014-01-01

    Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro coevolutionary scale. As cophylogeny mapping is NP-Hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally, this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. The algorithm has been applied to over 100 well-known coevolutionary systems, converging on Pareto optimal solutions in over 68% of test cases, including some where the Pareto optimal solution had not previously been recoverable. Further, while TreeCollapse applies a local search technique, it guarantees that solutions are biologically feasible, making it the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings, but, used in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.

  12. Evaluation of Bare Ground on Rangelands using Unmanned Aerial Vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert P. Breckenridge; Maxine Dakins

    2011-01-01

    Attention is currently being given to methods that assess the ecological condition of rangelands throughout the United States. There are a number of different indicators that assess the ecological condition of rangelands. Bare ground is being considered by a number of agencies and resource specialists as a lead indicator that can be evaluated over a broad area. Traditional methods of measuring bare ground rely on field technicians collecting data along a line transect or from a plot. Unmanned aerial vehicles (UAVs) provide an alternative to collecting field data, can monitor a large area in a relatively short period of time, and in many cases can enhance safety and reduce the time required to collect data. In this study, both fixed-wing and helicopter UAVs were used to measure bare ground in a sagebrush steppe ecosystem. The data were collected with digital imagery and read using the image analysis software SamplePoint. The approach was tested over seven different plots and compared against traditional field methods to evaluate accuracy for assessing bare ground. The field plots were located on the Idaho National Laboratory (INL) site west of Idaho Falls, Idaho, in locations where there is very little disturbance by humans and the area is grazed only by wildlife. The comparison of fixed-wing and helicopter UAV technology against field estimates shows good agreement for the measurement of bare ground. This study shows that if a high degree of detail and data accuracy is desired, then a helicopter UAV may be a good platform. If the data collection objective is to assess broad-scale landscape-level changes, then the collection of imagery with a fixed-wing system is probably more appropriate.

  13. Large-N kinetic theory for highly occupied systems

    NASA Astrophysics Data System (ADS)

    Walz, R.; Boguslavski, K.; Berges, J.

    2018-06-01

    We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.
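
    The universal scaling form mentioned here can be made concrete with a small numerical check: distributions at different times should collapse onto a single function after rescaling. The exponents and scaling function below are illustrative placeholders, not the values obtained in the paper.

```python
import numpy as np

# The universal scaling form at an infrared nonthermal fixed point reads
#   f(t, p) = t**alpha * fS(t**beta * p),
# so distributions at different times collapse onto one function fS after
# rescaling. Exponents and fS below are illustrative, not the paper's values.
alpha, beta = -0.6, -0.2
fS = lambda x: 1.0 / (1.0 + x**4)

p = np.logspace(-2, 1, 200)
for t in (10.0, 100.0, 1000.0):
    f = t**alpha * fS(t**beta * p)                  # distribution at time t
    collapsed = f / t**alpha                        # undo the amplitude rescaling
    assert np.allclose(collapsed, fS(t**beta * p))  # lies on the single curve
print("distributions at all times collapse onto one scaling function")
```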

  14. Reduction of a metapopulation genetic model to an effective one-island model

    NASA Astrophysics Data System (ADS)

    Parra-Rojas, César; McKane, Alan J.

    2018-04-01

    We explore a model of metapopulation genetics which is based on a more ecologically motivated approach than is frequently used in population genetics. The size of the population is regulated by competition between individuals, rather than by artificially imposing a fixed population size. The increased complexity of the model is managed by employing techniques often used in the physical sciences, namely exploiting time-scale separation to eliminate fast variables and then constructing an effective model from the slow modes. We analyse this effective model and show that the predictions for the probability of fixation of the alleles and the mean time to fixation agree well with those found from numerical simulations of the original model. Contribution to the Focus Issue Evolutionary Modeling and Experimental Evolution edited by José Cuesta, Joachim Krug and Susanna Manrubia.
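
    As a point of comparison for the fixation results, the sketch below estimates an allele's fixation probability by Monte Carlo in a neutral Moran model with a fixed population size. This is a simplified stand-in, since the paper's model regulates population size through competition rather than fixing it.

```python
import numpy as np

def fixation_probability(n_pop=100, n_init=10, trials=2000, seed=1):
    """Monte Carlo estimate of allele fixation probability in a neutral
    Moran model with fixed population size (a simplified stand-in for the
    competition-regulated model discussed in the paper)."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        i = n_init
        while 0 < i < n_pop:
            # One Moran step: a random birth and a random death.
            birth_is_a = rng.random() < i / n_pop
            death_is_a = rng.random() < i / n_pop
            i += int(birth_is_a) - int(death_is_a)
        fixed += (i == n_pop)
    return fixed / trials

# Neutral theory predicts fixation probability = initial frequency (0.1 here).
print(fixation_probability())
```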

  15. Tempel 1 Composite Map

    NASA Image and Video Library

    2005-09-06

    This Tempel 1 image was built up by scaling images from NASA's Deep Impact mission to 5 meters/pixel and aligning them to fixed points. Each image at closer range replaced equivalent locations observed at a greater distance.

  16. Gravity Duals of Lifshitz-Like Fixed Points

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kachru, Shamit; Liu, Xiao

    2008-11-05

    We find candidate macroscopic gravity duals for scale-invariant but non-Lorentz invariant fixed points, which do not have particle number as a conserved quantity. We compute two-point correlation functions which exhibit novel behavior relative to their AdS counterparts, and find holographic renormalization group flows to conformal field theories. Our theories are characterized by a dynamical critical exponent z, which governs the anisotropy between spatial and temporal scaling t → λ^z t, x → λx; we focus on the case with z = 2. Such theories describe multicritical points in certain magnetic materials and liquid crystals, and have been shown to arise at quantum critical points in toy models of the cuprate superconductors. This work can be considered a small step towards making useful dual descriptions of such critical points.

  17. [Permanent disability and the insurance estimation process].

    PubMed

    Soumah, M M; Mbaye, I; Ndiaye, M; Bah, H; Gaye Fall, M C; Sow, M L

    2006-01-01

    Accident victims are indemnified through two processes: by settlement, based on a disability rate proposed by insurance physicians, or judicially, based on a rate proposed by a court-appointed medical expert. The failure of the previous indemnification scale justified adoption of the Interafrican Conference of Insurance Markets (CIMA) code. Six insurance companies and the Automotive Guarantee Fund were the debtors, and only 627 victims were indemnified between 1986 and 2003. The investigation was based on expert assessments performed at the forensic medicine service. The parameters examined were the insurance company, the type of settlement, the sequelae, and the types of damage retained. Data collected on record cards were analyzed with Epi Info software. The partial permanent disability rates fixed since adoption of the code differ from those fixed before its adoption. The settlement process concerned 567 victims (90.4%), while sixty victims were indemnified through the courts. Among rates fixed judicially, 61.6% were moderate permanent partial disabilities. After 1997, a decrease in high and moderate permanent partial disabilities was observed in both processes. The assessment of pretium doloris is more subjective but must compensate the sequelae. Moderate pretium doloris predominated in both processes before and after 1997, with a marked decrease of moderate awards in the settlement process (-15.07 points) and a small increase of 10.98 points. Despite the limits of such scales, a common scale code has reduced judicial litigation over accident compensation; since 1997, only patients with severe sequelae have proceeded through the judicial process.

  18. A regularity result for fixed points, with applications to linear response

    NASA Astrophysics Data System (ADS)

    Sedro, Julien

    2018-04-01

    In this paper, we show a series of abstract results on fixed point regularity with respect to a parameter. They are based on a Taylor development taking into account a loss of regularity phenomenon, typically occurring for composition operators acting on spaces of functions with finite regularity. We generalize this approach to higher order differentiability, through the notion of an n-graded family. We then give applications to the fixed point of a nonlinear map, and to linear response in the context of (uniformly) expanding dynamics (theorem 3 and corollary 2), in the spirit of Gouëzel-Liverani.

  19. Statistical total correlation spectroscopy scaling for enhancement of metabolic information recovery in biological NMR spectra.

    PubMed

    Maher, Anthony D; Fonville, Judith M; Coen, Muireann; Lindon, John C; Rae, Caroline D; Nicholson, Jeremy K

    2012-01-17

    The high level of complexity in nuclear magnetic resonance (NMR) metabolic spectroscopic data sets has fueled the development of experimental and mathematical techniques that enhance latent biomarker recovery and improve model interpretability. We previously showed that statistical total correlation spectroscopy (STOCSY) can be used to edit NMR spectra to remove drug metabolite signatures that obscure metabolic variation of diagnostic interest. Here, we extend this "STOCSY editing" concept to a generalized scaling procedure for NMR data that enhances recovery of latent biochemical information and improves biological classification and interpretation. We call this new procedure STOCSY-scaling (STOCSY(S)). STOCSY(S) exploits the fixed proportionality in a set of NMR spectra between resonances from the same molecule to suppress or enhance features correlated with a resonance of interest. We demonstrate this new approach using two exemplar data sets: (a) a streptozotocin rat model (n = 30) of type 1 diabetes and (b) a human epidemiological study utilizing plasma NMR spectra of patients with metabolic syndrome (n = 67). In both cases significant biomarker discovery improvement was observed by using STOCSY(S): the approach successfully suppressed interfering NMR signals from glucose and lactate that otherwise dominate the variation in the streptozotocin study, which then allowed recovery of biomarkers such as glycine, which were otherwise obscured. In the metabolic syndrome study, we used STOCSY(S) to enhance variation from the high-density lipoprotein cholesterol peak, improving the prediction of individuals with metabolic syndrome from controls in orthogonal projections to latent structures discriminant analysis models and facilitating the biological interpretation of the results. Thus, STOCSY(S) is a versatile technique that is applicable in any situation in which variation, either biological or otherwise, dominates a data set at the expense of more interesting or important features. This approach is generally appropriate for many types of NMR-based complex mixture analyses and hence for wider applications in bioanalytical science.
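
    A minimal numpy sketch of the underlying idea, weighting each spectral variable by its correlation with a chosen driver resonance so that signals from the same molecule are suppressed (or enhanced), is given below. The function name, data, and the exact weighting scheme are assumptions for illustration, not the published STOCSY(S) recipe.

```python
import numpy as np

def stocsy_scale(spectra, driver_idx, suppress=True):
    """Scale each spectral variable by its squared correlation with a chosen
    driver resonance. suppress=True down-weights variables correlated with
    the driver (e.g. dominant glucose/lactate peaks); False enhances them.
    A minimal sketch of the STOCSY-scaling idea, not the published recipe."""
    driver = spectra[:, driver_idx]
    # Pearson correlation of every variable with the driver resonance.
    centered = spectra - spectra.mean(axis=0)
    d = driver - driver.mean()
    r = centered.T @ d / (np.linalg.norm(centered, axis=0) * np.linalg.norm(d) + 1e-12)
    weights = (1.0 - r**2) if suppress else r**2
    return spectra * weights

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 500))   # 30 spectra x 500 ppm bins (synthetic)
X[:, 120] = X[:, 100] * 0.8      # a second peak from the same molecule
X_edited = stocsy_scale(X, driver_idx=100)  # both peaks are now suppressed
```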

  20. Performance of the fixed-bed of granular activated carbon for the removal of pesticides from water supply.

    PubMed

    Alves, Alcione Aparecida de Almeida; Ruiz, Giselle Louise de Oliveira; Nonato, Thyara Campos Martins; Müller, Laura Cecilia; Sens, Maurício Luiz

    2018-02-26

    The application of a fixed-bed adsorption column of granular activated carbon (FBAC-GAC) for the removal of carbaryl, methomyl and carbofuran, at a concentration of 25 μg L⁻¹ for each carbamate, from the public water supply was investigated. For the determination of the presence of pesticides in the water supply, the analytical technique of high-performance liquid chromatography with post-column derivatization was used. Under conditions of constant diffusivity, the FBAC-GAC was saturated after 196 h of operation on a pilot scale. The exhaust rate of the granular activated carbon (GAC) in the FBAC-GAC up to the point of saturation was 0.02 kg GAC m⁻³ of treated water. By comparing a rapid small-scale column test and the FBAC-GAC, it was confirmed that the predominant intraparticle diffusivity in the adsorption column was constant diffusivity. Based on the results obtained on a pilot scale, it was possible to estimate the values to be applied in the full-scale FBAC-GAC to remove the pesticides: GAC particle size with an average diameter of 1.5 mm; a ratio between the internal diameter of the column and the average diameter of the GAC of ≥50, in order to avoid preferential flow near the adsorption column wall; a surface application rate of 240 m³ m⁻² d⁻¹; and an empty bed contact time of 3 min. BV: bed volume; CD: constant diffusivity; EBCT: empty bed contact time; FBAC-GAC: fixed bed adsorption column of granular activated carbon; GAC: granular activated carbon; MPV: maximum permitted values; NOM: natural organic matter; PD: proportional diffusivity; pH_PZC: pH of the zero charge point; SAR: surface application rate; RSSCT: rapid small-scale column test; WTCS: water treated conventional system.
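
    The quoted full-scale design values are mutually consistent: the surface application rate is a superficial velocity, and together with the empty bed contact time it implies a bed depth, as the quick check below shows (pure arithmetic, no data from the study).

```python
# Quick consistency check on the quoted full-scale design values:
# a surface application rate of 240 m3 m-2 d-1 is a superficial velocity,
# and EBCT = bed depth / velocity, so the two together imply a bed depth.
sar_m_per_day = 240.0  # m3 m-2 d-1 == m/d
ebct_min = 3.0         # empty bed contact time

velocity_m_per_min = sar_m_per_day / (24 * 60)
bed_depth_m = velocity_m_per_min * ebct_min
print(f"implied GAC bed depth: {bed_depth_m:.2f} m")  # -> 0.50 m
```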

  1. Simulation-based robust optimization for signal timing and setting.

    DOT National Transportation Integrated Search

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  2. Effect of different runway size on pilot performance during simulated night landing approaches.

    DOT National Transportation Integrated Search

    1981-02-01

    In Experiment I, three pilots flew simulated approaches and landings in a fixed-base simulator with a computer-generated-image visual display. Practice approaches were flown with an 8,000-ft-long runway that was either 75, 150, or 300 ft wide; test a...

  3. Development of feedback-speed-control system of fixed-abrasive tool for mat-surface fabrication

    NASA Astrophysics Data System (ADS)

    Yanagihara, K.; Kita, R.

    2018-01-01

    This study deals with a new method for fabricating a mat-surface using a fixed-abrasive tool. A mat-surface is a surface with microscopic irregularities whose dimensions are close to the wavelengths of visible light (400-700 nanometers). To develop a method that produces a mat-surface without pre-masking or a large-scale backup facility, the use of a fixed-abrasive tool is discussed. The discussion clarifies that abrasives in shot blasting are given kinetic energy along the plunge direction only, excluding the traverse direction. If the relative motion between tool and work in a fixed-abrasive process can be made to reproduce that of blasting, a mat-surface can be achieved with the fixed-abrasive process. To realize the proposed idea, a new surface-fabrication system incorporating feedback speed control of the abrasive wheel has been designed. The system consists of a micro-computer unit (MPU), a work-speed sensor, a fixed-abrasive wheel, and a wheel driving unit. The system can control the relative speed between work and wheel within the optimum range to produce a mat-surface. Finally, experiments to verify the developed system were carried out. The results show that the developed system is effective and that it can produce surfaces ranging seamlessly from ground to matte.

  4. Examining the Earnings Trajectories of Community College Students Using a Piecewise Growth Curve Modeling Approach. A CAPSEE Working Paper

    ERIC Educational Resources Information Center

    Jaggars, Shanna Smith; Xu, Di

    2015-01-01

    Policymakers have become increasingly concerned with measuring--and holding colleges accountable for--students' labor market outcomes. In this paper we introduce a piecewise growth curve approach to analyzing community college students' labor market outcomes, and we discuss how this approach differs from Mincerian and fixed-effects approaches. Our…
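
    For readers unfamiliar with the piecewise growth curve idea, the sketch below fits a linear spline with a single knot by ordinary least squares; the knot placement, variable names, and earnings data are hypothetical illustrations, not the paper's specification.

```python
import numpy as np

# Minimal piecewise (linear spline) growth model: a single hypothetical knot
# at t=0, e.g. the term of college exit, with separate pre/post slopes.
rng = np.random.default_rng(0)
t = np.arange(-12, 13, dtype=float)  # quarters around the knot
earnings = 5000 + 40 * t + 120 * np.maximum(t, 0) + rng.normal(0, 200, t.size)

# Design matrix: intercept, baseline slope, and post-knot change in slope.
X = np.column_stack([np.ones_like(t), t, np.maximum(t, 0)])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
print(f"pre-knot slope: {beta[1]:.1f}, post-knot slope: {beta[1] + beta[2]:.1f}")
```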

  5. Multiphase flow modelling of explosive volcanic eruptions using adaptive unstructured meshes

    NASA Astrophysics Data System (ADS)

    Jacobs, Christian T.; Collins, Gareth S.; Piggott, Matthew D.; Kramer, Stephan C.

    2014-05-01

    Explosive volcanic eruptions generate highly energetic plumes of hot gas and ash particles that produce diagnostic deposits and pose an extreme environmental hazard. The formation, dispersion and collapse of these volcanic plumes are complex multiscale processes that are extremely challenging to simulate numerically. Accurate description of particle and droplet aggregation, movement and settling requires a model capable of capturing the dynamics on a range of scales (from cm to km) and a model that can correctly describe the important multiphase interactions that take place. However, even the most advanced models of eruption dynamics to date are restricted by the fixed mesh-based approaches that they employ. The research presented herein describes the development of a compressible multiphase flow model within Fluidity, a combined finite element / control volume computational fluid dynamics (CFD) code, for the study of explosive volcanic eruptions. Fluidity adopts a state-of-the-art adaptive unstructured mesh-based approach to discretise the domain and focus numerical resolution only in areas important to the dynamics, while decreasing resolution where it is not needed as a simulation progresses. This allows the accurate but economical representation of the flow dynamics throughout time, and potentially allows large multi-scale problems to become tractable in complex 3D domains. The multiphase flow model is verified with the method of manufactured solutions, and validated by simulating published gas-solid shock tube experiments and comparing the numerical results against pressure gauge data. The application of the model considers an idealised 7 km by 7 km domain in which the violent eruption of hot gas and volcanic ash high into the atmosphere is simulated. Although the simulations do not correspond to a particular eruption case study, the key flow features observed in a typical explosive eruption event are successfully captured. These include a shock wave resulting from the sudden high-velocity inflow of gas and ash; the formation of a particle-laden plume rising several hundred metres into the atmosphere; the eventual collapse of the plume which generates a volcanic ash fountain and a fast ground-hugging pyroclastic density current; and the growth of a dilute convective region that rises above the ash fountain as a result of buoyancy effects. The results from Fluidity are also compared with results from MFIX, a fixed structured mesh-based multiphase flow code, that uses the same set-up. The key flow features are also captured in MFIX, providing at least some confidence in the plausibility of the numerical results in the absence of quantitative field data. Finally, it is shown by a convergence analysis that Fluidity offers the same solution accuracy for reduced computational cost using an adaptive mesh, compared to the same simulation performed with a uniform fixed mesh.

  6. Using small-scale rainfall simulation to assess temporal changes in pre- and post-fire soil hydrology and erosion: the value of fixed-position plots

    NASA Astrophysics Data System (ADS)

    Ferreira, Carla S. S.; Shakesby, Rick A.; Bento, Célia P. M.; Walsh, Rory P. D.; Ferreira, António J. D.

    2013-04-01

    In recent decades, wildfire has become both frequent and severe in southern Europe, leading to widespread research into its impacts on soil erosion and on soil and water quality. Rainfall simulation has become established as a popular technique to assess these impacts, as it can be conducted under controlled conditions (notably, with respect to rainfall) and is a very cost-effective and rapid way to compare overland flow and suspended sediment generation within burned and unburned sites. Particular advantages are that: (1) results can be obtained before the first post-fire rainfall events; and (2) experiments can reproduce controlled storm events, with similar characteristics to natural rain. Although plot sizes vary (0.09-30 m²), most researchers have used <1 m² plots because of logistical difficulties of setting up larger plots, especially in burned areas that may lack good access and local water supplies. Disadvantages with using small plots, however, particularly on burned terrain, include: (1) the difficulty of installing the plots without disturbing the soil; (2) the strong influence of plot boundaries on overland flow and sediment production. Significant replication is generally considered necessary to take account of high variability in results that are due in part to these effects. One response to these problems is a 'fixed plot' approach in which bounded plots are left in place for re-use throughout the study. A problem here, however, would be progressive sediment exhaustion due to the 'island' effect of the plots caused by their isolation from upslope sediment transfer. This paper assesses the usefulness of a repeat-simulation plot approach for assessing temporal change in overland flow and erosion in post-fire situations, one that minimizes the island effect by partial removal of plot boundaries between surveys. This approach was tested over a 2.5-year period in a small (9 ha) catchment in central Portugal subjected to an experimental fire in 2009. Five rainfall simulation plots 0.25 m² in size were installed close to sediment traps (contributing areas: 498-4238 m²) collecting sediment eroded by overland flow caused by natural rainfall. The plots were installed pre-fire and experiments carried out under 'dry' and 'wet' antecedent conditions on six occasions from pre-fire to two years after the fire. The lateral boundaries of each plot were left in place, but the upslope boundary and the central (outlet) section of the downslope boundary were removed between surveys and re-installed and sealed each time measurements were carried out. Having fixed positions of plots minimised soil disturbance on each monitoring occasion and meant that, for any given plot, results were directly comparable and gave a more reliable picture of change through time. Removing the upper and lower boundaries of the plots between measurements allowed the soil to undergo processes similar to those on the surrounding slope and reduced the 'island' effect associated with continuously bounded plots. Results from the adjacent sediment traps, which provided a parallel temporal record of hillslope-scale overland flow and sediment redistribution patterns under natural rainfall, are used to judge the usefulness of the in situ simulation plots approach.

  7. Effect of lift-to-drag ratio in pilot rating of the HL-20 landing task

    NASA Technical Reports Server (NTRS)

    Jackson, E. B.; Rivers, Robert A.; Bailey, Melvin L.

    1993-01-01

    A man-in-the-loop simulation study of the handling qualities of the HL-20 lifting-body vehicle was made in a fixed-base simulation cockpit at NASA Langley Research Center. The purpose of the study was to identify and substantiate opportunities for improving the original design of the vehicle from a handling qualities and landing performance perspective. Using preliminary wind-tunnel data, a subsonic aerodynamic model of the HL-20 was developed. This model was adequate to simulate the last 75-90 s of the approach and landing. A simple flight-control system was designed and implemented. Using this aerodynamic model as a baseline, visual approaches and landings were made at several vehicle lift-to-drag ratios. Pilots rated the handling characteristics of each configuration using a conventional numerical pilot-rating scale. Results from the study showed a high degree of correlation between the lift-to-drag ratio and pilot rating. Level 1 pilot ratings were obtained when the L/D ratio was approximately 3.8 or higher.

  8. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control, and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival, and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule-base. This significantly simplifies the translation process to neural network expert systems from conventional rule-based systems. Results comparing the performance of the proposed approach based on neural networks vs. the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
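
    To illustrate the translation idea in miniature: simple IF-THEN rules can be compiled into threshold units with fixed weights, as in the hedged sketch below. The rules and weights are toy assumptions, not the paper's translation scheme.

```python
import numpy as np

def threshold_unit(inputs, weights, bias):
    """McCulloch-Pitts style unit: fires iff the weighted sum exceeds 0."""
    return int(np.dot(inputs, weights) + bias > 0)

# Compile two toy rules into fixed weights/thresholds (illustrative only):
#   R1: IF high_temp AND high_pressure THEN alarm        (AND: both needed)
#   R2: IF sensor_fault OR comms_fault THEN maintenance  (OR: either suffices)
def alarm(high_temp, high_pressure):
    return threshold_unit([high_temp, high_pressure], [1, 1], bias=-1.5)

def maintenance(sensor_fault, comms_fault):
    return threshold_unit([sensor_fault, comms_fault], [1, 1], bias=-0.5)

print(alarm(1, 1), alarm(1, 0))              # -> 1 0
print(maintenance(0, 1), maintenance(0, 0))  # -> 1 0
```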

  9. Effect of lift-to-drag ratio in pilot rating of the HL-20 landing task

    NASA Astrophysics Data System (ADS)

    Jackson, E. B.; Rivers, Robert A.; Bailey, Melvin L.

    1993-10-01

    A man-in-the-loop simulation study of the handling qualities of the HL-20 lifting-body vehicle was made in a fixed-base simulation cockpit at NASA Langley Research Center. The purpose of the study was to identify and substantiate opportunities for improving the original design of the vehicle from a handling qualities and landing performance perspective. Using preliminary wind-tunnel data, a subsonic aerodynamic model of the HL-20 was developed. This model was adequate to simulate the last 75-90 s of the approach and landing. A simple flight-control system was designed and implemented. Using this aerodynamic model as a baseline, visual approaches and landings were made at several vehicle lift-to-drag ratios. Pilots rated the handling characteristics of each configuration using a conventional numerical pilot-rating scale. Results from the study showed a high degree of correlation between the lift-to-drag ratio and pilot rating. Level 1 pilot ratings were obtained when the L/D ratio was approximately 3.8 or higher.

  10. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Therefore, reverse logistics is gaining power and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP) while minimizing the total cost, which involves the reverse logistics shipping cost and the fixed costs of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. To solve this problem, we then propose a priority-based Genetic Algorithm (priGA) with an encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to the manufacturer. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach in comparison with recent research.
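
    A sketch of the weight mapping crossover idea for priority-based chromosomes is given below: a one-point cut whose inherited tail is rank-remapped so each child remains a valid priority permutation. The exact operator in the paper may differ in details; this is an interpretation for illustration.

```python
import numpy as np

def weight_mapping_crossover(p1, p2, cut):
    """One-point crossover for priority-based chromosomes. The tail segment
    inherited from the other parent is rank-remapped onto the values the
    first parent held there, so each child stays a valid permutation of
    priorities. A sketch of the WMX idea, not necessarily the paper's exact
    operator."""
    p1, p2 = np.asarray(p1), np.asarray(p2)

    def child(a, b):
        tail_vals = np.sort(a[cut:])             # priority values available in a's tail
        ranks = np.argsort(np.argsort(b[cut:]))  # rank of each gene in b's tail
        return np.concatenate([a[:cut], tail_vals[ranks]])

    return child(p1, p2), child(p2, p1)

c1, c2 = weight_mapping_crossover([3, 1, 5, 2, 4], [2, 5, 1, 4, 3], cut=2)
print(c1, c2)  # each child is still a permutation of the priorities 1..5
```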

  11. New approaches to optimization in aerospace conceptual design

    NASA Technical Reports Server (NTRS)

    Gage, Peter J.

    1995-01-01

    Aerospace design can be viewed as an optimization process, but conceptual studies are rarely performed using formal search algorithms. Three issues that restrict the success of automatic search are identified in this work. New approaches are introduced to address the integration of analyses and optimizers, to avoid the need for accurate gradient information and a smooth search space (required for calculus-based optimization), and to remove the restrictions imposed by fixed complexity problem formulations. (1) Optimization should be performed in a flexible environment. A quasi-procedural architecture is used to conveniently link analysis modules and automatically coordinate their execution. It efficiently controls large-scale design tasks. (2) Genetic algorithms provide a search method for discontinuous or noisy domains. The utility of genetic optimization is demonstrated here, but parameter encodings and constraint-handling schemes must be carefully chosen to avoid premature convergence to suboptimal designs. The relationship between genetic and calculus-based methods is explored. (3) A variable-complexity genetic algorithm is created to permit flexible parameterization, so that the level of description can change during optimization. This new optimizer automatically discovers novel designs in structural and aerodynamic tasks.

  12. Estimating False Discovery Proportion Under Arbitrary Covariance Dependence

    PubMed Central

    Fan, Jianqing; Han, Xu; Gu, Weijie

    2012-01-01

    Multiple hypothesis testing is a fundamental problem in high dimensional inference, with wide applications in many scientific fields. In genome-wide association studies, tens of thousands of tests are performed simultaneously to find if any SNPs are associated with some traits, and those tests are correlated. When test statistics are correlated, false discovery control becomes very challenging under arbitrary dependence. In the current paper, we propose a novel method based on principal factor approximation, which successfully subtracts the common dependence and significantly weakens the correlation structure, to deal with an arbitrary dependence structure. We derive an approximate expression for false discovery proportion (FDP) in large scale multiple testing when a common threshold is used and provide a consistent estimate of realized FDP. This result has important applications in controlling FDR and FDP. Our estimate of realized FDP compares favorably with Efron's (2007) approach, as demonstrated in the simulated examples. Our approach is further illustrated by some real data applications. We also propose a dependence-adjusted procedure, which is more powerful than the fixed threshold procedure. PMID:24729644

  13. Low-Cloud Feedbacks from Cloud-Controlling Factors: A Review

    DOE PAGES

    Klein, Stephen A.; Hall, Alex; Norris, Joel R.; ...

    2017-10-24

    Here, the response to warming of tropical low-level clouds including both marine stratocumulus and trade cumulus is a major source of uncertainty in projections of future climate. Climate model simulations of the response vary widely, reflecting the difficulty the models have in simulating these clouds. These inadequacies have led to alternative approaches to predict low-cloud feedbacks. Here, we review an observational approach that relies on the assumption that observed relationships between low clouds and the “cloud-controlling factors” of the large-scale environment are invariant across time-scales. With this assumption, and given predictions of how the cloud-controlling factors change with climate warming, one can predict low-cloud feedbacks without using any model simulation of low clouds. We discuss both fundamental and implementation issues with this approach and suggest steps that could reduce uncertainty in the predicted low-cloud feedback. Recent studies using this approach predict that the tropical low-cloud feedback is positive mainly due to the observation that reflection of solar radiation by low clouds decreases as temperature increases, holding all other cloud-controlling factors fixed. The positive feedback from temperature is partially offset by a negative feedback from the tendency for the inversion strength to increase in a warming world, with other cloud-controlling factors playing a smaller role. A consensus estimate from these studies for the contribution of tropical low clouds to the global mean cloud feedback is 0.25 ± 0.18 W m⁻² K⁻¹ (90% confidence interval), suggesting it is very unlikely that tropical low clouds reduce total global cloud feedback. Because the prediction of positive tropical low-cloud feedback with this approach is consistent with independent evidence from low-cloud feedback studies using high-resolution cloud models, progress is being made in reducing this key climate uncertainty.
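
    The two-step logic of the approach (estimate sensitivities to each cloud-controlling factor from observations, then combine them with projected per-degree changes in those factors) can be sketched as below; all data and the per-degree changes are synthetic assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic monthly anomalies standing in for observations: shortwave cloud
# radiative effect (CRE, W m-2) driven by SST and inversion strength (EIS).
n = 240
sst = rng.normal(0, 0.5, n)  # K
eis = rng.normal(0, 0.4, n)  # K
cre = 1.0 * sst - 0.6 * eis + rng.normal(0, 0.3, n)

# Step 1: observed sensitivities dCRE/dx_i from multiple linear regression.
X = np.column_stack([sst, eis])
coef, *_ = np.linalg.lstsq(X, cre, rcond=None)

# Step 2: combine with (model-projected) per-degree changes of each factor.
dsst_dT, deis_dT = 1.0, 0.2  # assumed K per K of global warming
feedback = coef[0] * dsst_dT + coef[1] * deis_dT
print(f"low-cloud feedback ~ {feedback:.2f} W m-2 K-1")
```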

  14. Low-Cloud Feedbacks from Cloud-Controlling Factors: A Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Stephen A.; Hall, Alex; Norris, Joel R.

    Here, the response to warming of tropical low-level clouds including both marine stratocumulus and trade cumulus is a major source of uncertainty in projections of future climate. Climate model simulations of the response vary widely, reflecting the difficulty the models have in simulating these clouds. These inadequacies have led to alternative approaches to predict low-cloud feedbacks. Here, we review an observational approach that relies on the assumption that observed relationships between low clouds and the “cloud-controlling factors” of the large-scale environment are invariant across time-scales. With this assumption, and given predictions of how the cloud-controlling factors change with climate warming, one can predict low-cloud feedbacks without using any model simulation of low clouds. We discuss both fundamental and implementation issues with this approach and suggest steps that could reduce uncertainty in the predicted low-cloud feedback. Recent studies using this approach predict that the tropical low-cloud feedback is positive mainly due to the observation that reflection of solar radiation by low clouds decreases as temperature increases, holding all other cloud-controlling factors fixed. The positive feedback from temperature is partially offset by a negative feedback from the tendency for the inversion strength to increase in a warming world, with other cloud-controlling factors playing a smaller role. A consensus estimate from these studies for the contribution of tropical low clouds to the global mean cloud feedback is 0.25 ± 0.18 W m⁻² K⁻¹ (90% confidence interval), suggesting it is very unlikely that tropical low clouds reduce total global cloud feedback. Because the prediction of positive tropical low-cloud feedback with this approach is consistent with independent evidence from low-cloud feedback studies using high-resolution cloud models, progress is being made in reducing this key climate uncertainty.

  15. Hybrid optical navigation by crater detection for lunar pin-point landing: trajectories from helicopter flight tests

    NASA Astrophysics Data System (ADS)

    Trigo, Guilherme F.; Maass, Bolko; Krüger, Hans; Theil, Stephan

    2018-01-01

    Accurate autonomous navigation capabilities are essential for future lunar robotic landing missions with a pin-point landing requirement: in the absence of a direct line of sight to ground control during the critical approach and landing phases, or when facing long signal delays, this capability is needed to establish a guidance solution that reaches the landing site reliably. This paper focuses on the processing and evaluation of data collected from flight tests that consisted of scaled descent scenarios, in which an unmanned helicopter of approximately 85 kg approached a landing site from altitudes of 50 m down to 1 m over a downrange distance of 200 m. Printed crater targets were distributed along the ground track, and their detection provided earth-fixed measurements. The Crater Navigation (CNav) algorithm used to detect and match the crater targets is an unmodified method used for real lunar imagery. We analyze the absolute position and attitude solutions of CNav obtained and recorded during these flight tests, and investigate the attainable quality of vehicle pose estimation using both CNav and measurements from a tactical-grade inertial measurement unit. The navigation filter proposed for this end corrects and calibrates the high-rate inertial propagation with the less frequent crater navigation fixes through a closed-loop, loosely coupled hybrid setup. Finally, the attainable accuracy of the fused solution is evaluated by comparison with the on-board ground-truth solution of a dual-antenna high-grade GNSS receiver. It is shown that CNav is an enabler for building autonomous navigation systems of high quality, suitable for exploration mission scenarios.
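
    The loosely coupled structure, high-rate inertial propagation corrected by infrequent absolute fixes, can be illustrated with a one-dimensional Kalman filter. The noise levels, rates, and the constant-velocity model below are assumptions for illustration, not the flight-test filter.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 2000               # 100 Hz inertial propagation for 20 s
x = np.array([0.0, 0.0])         # state: [position, velocity]
P = np.eye(2)                    # state covariance
F = np.array([[1, dt], [0, 1]])  # constant-velocity transition
Q = np.diag([1e-6, 1e-4])        # process noise (stands in for IMU errors)
H = np.array([[1.0, 0.0]])       # fixes observe position only
R = np.array([[4.0]])            # crater-fix variance (m^2)

true_pos = 2.0 * np.arange(n) * dt  # truth: steady 2 m/s drift
for k in range(n):
    x = F @ x                    # high-rate propagation step
    P = F @ P @ F.T + Q
    if k % 100 == 0:             # a crater fix every 1 s (loosely coupled)
        z = true_pos[k] + rng.normal(0, 2.0)
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
print(f"final position error: {abs(x[0] - true_pos[-1]):.2f} m")
```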

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpinets, Tatiana V; Pelletier, Dale A; Pan, Chongle

    Understanding the cellular processes involved in the anaerobic degradation of complex organic compounds by microorganisms is crucial for the development of innovative biotechnologies for bioethanol production and for the efficient degradation of toxic organic compounds. In natural environments the degradation is usually accomplished by syntrophic consortia comprised of different bacterial species. Here we show that the metabolically versatile phototrophic bacterium Rhodopseudomonas palustris may form its own syntrophic consortia when it grows anaerobically on p-coumarate or benzoate as a sole carbon source. In this study we reveal the consortia through a comparison of large-scale measurements of mRNA and protein expression under p-coumarate- and benzoate-degrading conditions, using a novel computational approach referred to as phenotype fingerprinting. In this approach, marker genes for known R. palustris phenotypes are employed to calculate their expression from the gene and protein expression in each studied condition. Subpopulations of the consortia are inferred from the expression of phenotypes and the known metabolic modes of R. palustris growth. We find that the p-coumarate-degrading condition leads to at least three R. palustris subpopulations, utilizing p-coumarate, benzoate, and CO2 and H2. The benzoate-degrading condition also produces at least three subpopulations, utilizing benzoate, CO2 and H2, and N2 and formate. Communication among syntrophs and inter-syntrophic dynamics in each consortium are indicated by up-regulation of transporters and genes involved in curli formation and chemotaxis. The photoautotrophic subpopulation found in both consortia is characterized by activation of two cbb operons and the uptake hydrogenase system. A specificity of the N2-fixing subpopulation in the benzoate-degrading consortium is the preferential activation of the vanadium nitrogenase over the molybdenum nitrogenase. The N2-fixing subpopulation in the consortium is confirmed by consumption of dissolved nitrogen gas under the benzoate-degrading conditions.

  17. Intermittent fasting during Ramadan: does it affect sleep?

    PubMed

    Bahammam, Ahmed S; Almushailhi, Khalid; Pandi-Perumal, Seithikurippu R; Sharif, Munir M

    2014-02-01

    Islamic intermittent fasting is distinct from regular voluntary or experimental fasting. We hypothesised that if a regimen of a fixed sleep-wake schedule and a fixed caloric intake is followed during intermittent fasting, the effects of fasting on sleep architecture and daytime sleepiness will be minimal. Therefore, we designed this study to objectively assess the effects of Islamic intermittent fasting on sleep architecture and daytime sleepiness. Eight healthy volunteers reported to the Sleep Disorders Centre on five occasions for polysomnography and multiple sleep latency tests: (1) during adaptation; (2) 3 weeks before Ramadan, after having performed Islamic fasting for 1 week (baseline fasting); (3) 1 week before Ramadan (non-fasting baseline); (4) 2 weeks into Ramadan (Ramadan); and (5) 2 weeks after Ramadan (non-fasting; Recovery). Daytime sleepiness was assessed using the Epworth Sleepiness Scale and the multiple sleep latency test. The participants had a mean age of 26.6 ± 4.9 years, a body mass index of 23.7 ± 3.5 kg m⁻² and an Epworth Sleepiness Scale score of 7.3 ± 2.7. There was no change in weight or the Epworth Sleepiness Scale in the four study periods. The rapid eye movement sleep percentage was significantly lower during fasting. There was no difference in sleep latency, non-rapid eye movement sleep percentage, arousal index and sleep efficiency. The multiple sleep latency test analysis revealed no difference in the sleep latency between the 'non-fasting baseline', 'baseline fasting', 'Ramadan' and 'Recovery' time points. Under conditions of a fixed sleep-wake schedule and a fixed caloric intake, Islamic intermittent fasting results in decreased rapid eye movement sleep with no impact on other sleep stages, the arousal index or daytime sleepiness. © 2013 European Sleep Research Society.

  18. Optimal fixed-finite-dimensional compensator for Burgers' equation with unbounded input/output operators

    NASA Technical Reports Server (NTRS)

    Burns, John A.; Marrekchi, Hamadi

    1993-01-01

    The problem of using reduced-order dynamic compensators to control a class of nonlinear parabolic distributed parameter systems was considered. The focus was on a system with unbounded input and output operators governed by Burgers' equation. A linearized model was used to compute low-order finite-dimensional control laws by minimizing certain energy functionals, and these laws were then applied to the nonlinear model. Standard approaches to this problem employ model/controller reduction techniques in conjunction with linear quadratic Gaussian (LQG) theory. The approach used here is based on the finite-dimensional Bernstein/Hyland optimal projection theory, which yields a fixed-finite-order controller.
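
    Not the optimal projection equations themselves, but the full-order LQR subproblem they generalize can be sketched compactly; the toy system matrices below stand in for a (much larger) discretized linearization of Burgers' equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# A toy 3-state linear system standing in for a discretized linearization.
A = np.array([[-1.0, 1.0, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
Q = np.eye(3)          # state weighting in the energy functional
R = np.array([[1.0]])  # control weighting

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)      # optimal feedback gain, u = -K x
print(np.linalg.eigvals(A - B @ K))  # closed-loop poles, all in left half-plane
```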

  19. Method of injection of onabotulinumtoxinA for chronic migraine: a safe, well-tolerated, and effective treatment paradigm based on the PREEMPT clinical program.

    PubMed

    Blumenfeld, Andrew; Silberstein, Stephen D; Dodick, David W; Aurora, Sheena K; Turkel, Catherine C; Binder, William J

    2010-10-01

    Chronic migraine (CM) is a prevalent and disabling neurological disorder. Few prophylactic treatments for CM have been investigated. OnabotulinumtoxinA, which inhibits the release of nociceptive mediators, such as glutamate, substance P, and calcitonin gene-related peptide, has been evaluated in randomized, placebo-controlled studies for the preventive treatment of a variety of headache disorders, including CM. These studies have yielded insight into appropriate patient selection, injection sites, dosages, and technique. Initial approaches used a set of fixed sites for the pericranial injections. However, the treatment approach evolved to include other sites that corresponded to the location of pain and tenderness in the individual patient in addition to the fixed sites. The Phase III REsearch Evaluating Migraine Prophylaxis Therapy (PREEMPT) injection paradigm uses both fixed and follow-the-pain sites, with additional specific follow-the-pain sites considered depending on individual symptoms. The PREEMPT paradigm for injecting onabotulinumtoxinA has been shown to be safe, well-tolerated, and effective in well-designed, controlled clinical trials and is the evidence-based approach recommended to optimize clinical outcomes for patients with CM. © 2010 American Headache Society.

  20. FixO3: Early progress towards Open Ocean observatory Data Management Harmonisation

    NASA Astrophysics Data System (ADS)

    Pagnani, Maureen; Huber, Robert; Lampitt, Richard

    2014-05-01

    Since 2002 there has been a sustained effort, supported by European framework projects, to harmonise both the technology and the data management of Open Ocean fixed observatories run by European nations. FixO3 started in September 2013 and for 4 years will coordinate the convergence of data management best practice across a constellation of moorings in the Atlantic, in both hemispheres, and in the Mediterranean. To ensure the continued existence of these unique sources of oceanographic data as sustained observatories, it is vital to improve access to the data collected, in terms of methods of presentation, real-time availability, long-term archiving and quality assurance. The data management component of FixO3 will improve access to marine observatory data by harmonizing data management standards and workflows covering the complete life cycle of data, from real-time data acquisition to long-term archiving. Legal and data policy aspects will be examined to identify transnational barriers to open access to marine observatory data. A harmonised FixO3 data policy is being synthesised from the partners' existing policies, which will overcome the identified barriers and provide a formal basis for data exchange between FixO3 infrastructures. Presently, the interpretation and implementation of accepted standards have considerable incompatibilities within the observatory community, and these different approaches will be unified into the FixO3 approach. Further, FixO3 aims to harmonise data management and standardisation efforts with other European and international marine data and observatory infrastructures. The FixO3 synthesis will build on the standards established in other European infrastructures such as EDMONET, SEADATANET, PANGAEA, EuroSITES (the European contribution to the JCOMM OceanSITES programme) and MyOcean (the Marine Core Service for GMES), as well as relevant international infrastructures and data centres such as the ICOS Ocean Thematic Centre. The data management efforts are central to FixO3: combined with the procedural and technological harmonisation tackled in separate work packages, the FixO3 network of observatories will efficiently and cost-effectively provide a consistent resource of quality-controlled, accessible oceanographic data. The project website www.fixo3.eu is being developed as both a data showcase and a single distribution point, and with database-driven tools will enable the sharing of information between the observatories in the smartest and most cost-effective way. The network of knowledge built throughout the project will become a legacy resource that will ensure access to the unique ensemble data sets only achievable at these key observatories.

  1. Fixed Junction Light Emitting Electrochemical Cells based on Polymerizable Ionic Liquids

    NASA Astrophysics Data System (ADS)

    Brown, Erin; Limanek, Austin; Bauman, James; Leger, Janelle

    Organic photovoltaic (OPV) devices are of interest due to ease of fabrication, which increases their cost-effectiveness. OPV devices based on fixed-junction light emitting electrochemical cells (LECs) in particular have shown promising results. LECs are composed of a layer of polymer semiconductor blended with a salt sandwiched between two electrodes. As a forward bias is applied, the ions within the polymer separate, migrate to the electrodes, and enable electrochemical doping, thereby creating a p-n junction analog. In a fixed junction device, the ions are immobilized after the desired distribution has been established, allowing for operation under reverse bias conditions. Fixed junctions can be established using various techniques, including chemically by mixing polymerizable salts that will bond to the polymer under a forward bias. Previously we have demonstrated the use of the polymerizable ionic liquid allyltrioctylammonium allysulfonate (ATOAAS) as an effective means of creating a chemically fixed junction in an LEC. Here we present the application of this approach to the creation of photovoltaic devices. Devices demonstrate higher open circuit voltages, faster charging, and an overall improved device performance over previous chemically-fixed junction PV devices.

  2. Evaluation of a Treatment Approach Combining Nicotine Gum with Self-Guided Behavioral Treatments for Smoking Relapse Prevention.

    ERIC Educational Resources Information Center

    Killen, Joel D.; And Others

    1990-01-01

    Randomly assigned 1,218 smokers to cells in 4 (nicotine gum delivered ad lib, fixed regimen nicotine gum, placebo gum, no gum) x 3 (self-selected relapse prevention modules, randomly administered modules, no modules) design. Subjects receiving nicotine gum were more likely to be abstinent at 2- and 6-month followups. Fixed regimen accounted for…

  3. Multi-Action Planning for Threat Management: A Novel Approach for the Spatial Prioritization of Conservation Actions

    PubMed Central

    Cattarino, Lorenzo; Hermoso, Virgilio; Carwardine, Josie; Kennard, Mark J.; Linke, Simon

    2015-01-01

    Planning for the remediation of multiple threats is crucial to ensure the long-term persistence of biodiversity. Limited conservation budgets require prioritizing which management actions to implement and where. Systematic conservation planning traditionally assumes that all the threats in priority sites are abated (fixed prioritization approach). However, abating only the threats affecting the species of conservation concern may be more cost-effective. This requires prioritizing individual actions independently within the same site (independent prioritization approach), which has received limited attention so far. We developed an action prioritization algorithm that prioritizes multiple alternative actions within the same site. We used simulated annealing to find the combination of actions that remediate threats to species at the minimum cost. Our algorithm also accounts for the importance of selecting actions in sites connected through the river network (i.e., connectivity). We applied our algorithm to prioritize actions to address threats to freshwater fish species in the Mitchell River catchment, northern Australia. We compared how the efficiency of the independent and fixed prioritization approach varied as the importance of connectivity increased. Our independent prioritization approach delivered more efficient solutions than the fixed prioritization approach, particularly when the importance of achieving connectivity was high. By spatially prioritizing the specific actions necessary to remediate the threats affecting the target species, our approach can aid cost-effective habitat restoration and land-use planning. It is also particularly suited to solving resource allocation problems, where consideration of spatial design is important, such as prioritizing conservation efforts for highly mobile species, species facing climate change-driven range shifts, or minimizing the risk of threats spreading across different realms. PMID:26020794
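
    A minimal sketch of the simulated-annealing step at the core of such an action-prioritization algorithm may make the idea concrete. Everything below (site and action names, costs, benefits, penalty weight, cooling schedule) is an invented illustration, not the authors' implementation, and the river-network connectivity term of their objective is omitted for brevity.

    ```python
    # Hedged sketch: simulated annealing over (site, action) pairs. All names,
    # costs, benefits and the penalty weight are invented for illustration; the
    # authors' objective also includes a connectivity term omitted here.
    import math
    import random

    def anneal(sites, actions, cost, benefit, target, steps=20000, t0=1.0, t_end=1e-3):
        """Find a cheap set of (site, action) pairs whose summed benefit meets target."""
        chosen = set()

        def objective(sol):
            total_cost = sum(cost[sa] for sa in sol)
            shortfall = max(0.0, target - sum(benefit[sa] for sa in sol))
            return total_cost + 1e3 * shortfall   # penalize unmet threat abatement

        current = objective(chosen)
        for step in range(steps):
            t = t0 * (t_end / t0) ** (step / steps)          # geometric cooling
            trial = set(chosen)
            trial ^= {(random.choice(sites), random.choice(actions))}  # flip one pair
            new = objective(trial)
            # Accept improvements always, and worse moves with Boltzmann probability.
            if new < current or random.random() < math.exp((current - new) / t):
                chosen, current = trial, new
        return chosen, current

    sites = [0, 1, 2, 3]
    actions = ["control weeds", "manage grazing", "limit extraction"]
    cost = {(s, a): 1.0 + 0.3 * s for s in sites for a in actions}
    benefit = {(s, a): 1.0 if (s + len(a)) % 2 else 0.4 for s in sites for a in actions}
    print(anneal(sites, actions, cost, benefit, target=4.0))
    ```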

  4. Multi-action planning for threat management: a novel approach for the spatial prioritization of conservation actions.

    PubMed

    Cattarino, Lorenzo; Hermoso, Virgilio; Carwardine, Josie; Kennard, Mark J; Linke, Simon

    2015-01-01

    Planning for the remediation of multiple threats is crucial to ensure the long-term persistence of biodiversity. Limited conservation budgets require prioritizing which management actions to implement and where. Systematic conservation planning traditionally assumes that all the threats in priority sites are abated (fixed prioritization approach). However, abating only the threats affecting the species of conservation concern may be more cost-effective. This requires prioritizing individual actions independently within the same site (independent prioritization approach), which has received limited attention so far. We developed an action prioritization algorithm that prioritizes multiple alternative actions within the same site. We used simulated annealing to find the combination of actions that remediate threats to species at the minimum cost. Our algorithm also accounts for the importance of selecting actions in sites connected through the river network (i.e., connectivity). We applied our algorithm to prioritize actions to address threats to freshwater fish species in the Mitchell River catchment, northern Australia. We compared how the efficiency of the independent and fixed prioritization approach varied as the importance of connectivity increased. Our independent prioritization approach delivered more efficient solutions than the fixed prioritization approach, particularly when the importance of achieving connectivity was high. By spatially prioritizing the specific actions necessary to remediate the threats affecting the target species, our approach can aid cost-effective habitat restoration and land-use planning. It is also particularly suited to solving resource allocation problems, where consideration of spatial design is important, such as prioritizing conservation efforts for highly mobile species, species facing climate change-driven range shifts, or minimizing the risk of threats spreading across different realms.

  5. Assessing opportunities for physical activity in the built environment of children: interrelation between kernel density and neighborhood scale.

    PubMed

    Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris

    2015-12-22

    Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity measure and kernel approaches based on fixed bandwidths, on cross-validated bandwidths with isotropic and anisotropic kernel functions, and on adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on moderate-to-vigorous physical activity (MVPA) of 400 children aged 2 to 9.9 years who participated in the IDEFICS study. Models were stratified by sex and age groups, i.e. pre-school children (2 to <6 years) and school children (6-9.9 years), and were adjusted for age, body mass index (BMI), education and safety concerns of parents, season and valid wear time of accelerometers. The association between intensity measures and MVPA differed strongly by network-distance, with stronger effects found for larger network-distances. The simple intensity measure yielded smaller effect estimates and poorer goodness-of-fit than the kernel approaches. The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and PA of children based on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial scales and kernel intensity methods might reduce methodological limitations in assessing opportunities for PA in the built environment.
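
    To illustrate the distinction between the simple intensity and a kernel intensity, here is a hedged sketch in Python. It uses Euclidean rather than network distances, a Gaussian kernel, and an arbitrary fixed 500 m bandwidth; the study's cross-validated and adaptive bandwidth variants are not reproduced.

    ```python
    # Hedged sketch: simple vs. Gaussian-kernel intensity of point features around
    # a home location. Euclidean distance and a fixed 500 m bandwidth are
    # simplifications of the study's network-distance, cross-validated and
    # adaptive-bandwidth measures.
    import numpy as np

    def simple_intensity(points, center, radius):
        """Features per km^2 inside a hard circular buffer of the given radius (m)."""
        d = np.linalg.norm(points - center, axis=1)
        return np.sum(d <= radius) / (np.pi * radius**2 / 1e6)

    def kernel_intensity(points, center, bandwidth=500.0):
        """Gaussian-weighted features per km^2: near features count more than far ones."""
        d = np.linalg.norm(points - center, axis=1)
        weights = np.exp(-0.5 * (d / bandwidth) ** 2)
        return weights.sum() / (2 * np.pi * bandwidth**2 / 1e6)

    rng = np.random.default_rng(0)
    features = rng.uniform(0, 2000, size=(300, 2))   # e.g. intersections, in metres
    home = np.array([1000.0, 1000.0])
    print(simple_intensity(features, home, 500.0), kernel_intensity(features, home))
    ```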

  6. Non-Kondo many-body physics in a Majorana-based Kondo type system

    NASA Astrophysics Data System (ADS)

    van Beek, Ian J.; Braunecker, Bernd

    2016-09-01

    We carry out a theoretical analysis of a prototypical Majorana system, which demonstrates the existence of a Majorana-mediated many-body state and an associated intermediate low-energy fixed point. Starting from two Majorana bound states, hosted by a Coulomb-blockaded topological superconductor and each coupled to a separate lead, we derive an effective low-energy Hamiltonian, which displays a Kondo-like character. However, in contrast to the Kondo model which tends to a strong- or weak-coupling limit under renormalization, we show that this effective Hamiltonian scales to an intermediate fixed point, whose existence is contingent upon teleportation via the Majorana modes. We conclude by determining experimental signatures of this fixed point, as well as the exotic many-body state associated with it.

  7. Electrophoretic cell separation by means of immunomicrospheres

    NASA Technical Reports Server (NTRS)

    Rembaum, A.; Smolka, A. J. K.

    1980-01-01

    The electrophoretic mobility of fixed human red blood cells immunologically labeled with poly(4-vinylpyridine) or polyglutaraldehyde microspheres was altered to a considerable extent. This observation was utilized in the preparative-scale electrophoretic separation of fixed human and turkey red blood cells, whose mobilities under normal physiological conditions do not differ sufficiently to allow their separation by continuous flow electrophoresis. It is suggested that resolution in the electrophoretic separation of cell subpopulations, currently limited by finite and often overlapping mobility distributions, may be significantly enhanced by immuno-specific labeling of target populations using microspheres.

  8. Psychometric Functioning of the MMPI-2-RF VRIN-r and TRIN-r Scales with Varying Degrees of Randomness, Acquiescence, and Counter-Acquiescence

    ERIC Educational Resources Information Center

    Handel, Richard W.; Ben-Porath, Yossef S.; Tellegen, Auke; Archer, Robert P.

    2010-01-01

    In the present study, the authors evaluated the effects of increasing degrees of simulated non-content-based (random or fixed) responding on scores on the newly developed Variable Response Inconsistency-Revised (VRIN-r) and True Response Inconsistency-Revised (TRIN-r) scales of the Minnesota Multiphasic Personality Inventory-2 Restructured Form…

  9. Reducing motion artifacts for long-term clinical NIRS monitoring using collodion-fixed prism-based optical fibers

    PubMed Central

    Yücel, Meryem A.; Selb, Juliette; Boas, David A.; Cash, Sydney S.; Cooper, Robert J.

    2013-01-01

    As the applications of near-infrared spectroscopy (NIRS) continue to broaden and long-term clinical monitoring becomes more common, minimizing signal artifacts due to patient movement becomes more pressing. This is particularly true in applications where clinically and physiologically interesting events are intrinsically linked to patient movement, as is the case in the study of epileptic seizures. In this study, we apply an approach common in the application of EEG electrodes to the application of specialized NIRS optical fibers. The method provides improved optode-scalp coupling through the use of miniaturized optical fiber tips fixed to the scalp using collodion, a clinical adhesive. We investigate and quantify the performance of this new method in minimizing motion artifacts in healthy subjects, and apply the technique to allow continuous NIRS monitoring throughout epileptic seizures in two epileptic in-patients. Using collodion-fixed fibers reduces the percent signal change of motion artifacts by 90% and increases the SNR 6- and 3-fold at the 690 and 830 nm wavelengths, respectively, when compared to a standard Velcro-based array of optical fibers. The change in both HbO and HbR during motion artifacts is found to be statistically lower for the collodion-fixed fiber probe. The collodion-fixed optical fiber approach has also allowed us to obtain good-quality NIRS recordings of three epileptic seizures in two patients despite excessive motion in each case. PMID:23796546

  10. Mesofluidic two stage digital valve

    DOEpatents

    Jansen, John F; Love, Lonnie J; Lind, Randall F; Richardson, Bradley S

    2013-12-31

    A mesofluidic scale digital valve system includes a first mesofluidic scale valve having a valve body including a bore, wherein the valve body is configured to cooperate with a solenoid disposed substantially adjacent to the valve body to translate a poppet carried within the bore. The mesofluidic scale digital valve system also includes a second mesofluidic scale valve disposed substantially perpendicular to the first mesofluidic scale valve. The mesofluidic scale digital valve system further includes a control element in communication with the solenoid, wherein the control element is configured to maintain the solenoid in an energized state for a fixed period of time to provide a desired flow rate through an orifice of the second mesofluidic valve.

  11. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S^3 universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
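
    As a rough numerical check of the quoted relation, the following snippet evaluates v_h ~ T_BBN^2 / (M_pl y_e^5) with standard ballpark inputs (T_BBN ~ 1 MeV, the non-reduced Planck mass, and y_e derived from the electron mass); these inputs are assumptions for illustration only.

    ```python
    # Order-of-magnitude check of v_h ~ T_BBN^2 / (M_pl * y_e^5).
    # All inputs are rough standard values assumed for illustration.
    m_e   = 0.000511               # electron mass, GeV
    v_obs = 246.0                  # observed Higgs vev, GeV
    y_e   = 2**0.5 * m_e / v_obs   # electron Yukawa coupling, ~2.9e-6
    T_bbn = 1e-3                   # BBN onset temperature ~1 MeV, in GeV
    M_pl  = 1.22e19                # (non-reduced) Planck mass, GeV

    v_h = T_bbn**2 / (M_pl * y_e**5)
    print(f"v_h ~ {v_h:.0f} GeV")  # a few hundred GeV, consistent with O(300 GeV)
    ```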

  12. Straight scaling FFAG beam line

    NASA Astrophysics Data System (ADS)

    Lagrange, J.-B.; Planche, T.; Yamakawa, E.; Uesugi, T.; Ishi, Y.; Kuriyama, Y.; Qin, B.; Okabe, K.; Mori, Y.

    2012-11-01

    Fixed field alternating gradient (FFAG) accelerators have recently been the subject of a strong revival. They are usually designed in a circular shape; however, it would be an asset to guide particles with no overall bend in this type of accelerator. An analytical development of a straight FFAG cell which keeps zero chromaticity is presented here. A magnetic field law is thus obtained, called the "straight scaling law", and an experiment has been conducted to confirm this zero-chromatic law. A straight scaling FFAG prototype has been designed and manufactured, and the horizontal phase advances at two different energies are measured. Results are analyzed to clarify the straight scaling law.

  13. Materials and Structures Research for Gas Turbine Applications Within the NASA Subsonic Fixed Wing Project

    NASA Technical Reports Server (NTRS)

    Hurst, Janet

    2011-01-01

    A brief overview is presented of the current materials and structures research geared toward propulsion applications for NASA's Subsonic Fixed Wing Project, one of four projects within the Fundamental Aeronautics Program of the NASA Aeronautics Research Mission Directorate. The Subsonic Fixed Wing (SFW) Project has selected challenging goals which anticipate an increasing emphasis on aviation's impact upon the global issue of environmental responsibility. These goals, greatly reduced noise, reduced emissions and reduced fuel consumption, address 25 to 30 years of technology development. Successful implementation of these demanding goals will require the development of new materials and structural approaches within gas turbine propulsion technology. The Materials and Structures discipline, within the SFW project, comprises cross-cutting technologies ranging from basic investigations to component validation in laboratory environments. Material advances are teamed with innovative designs in a multidisciplinary approach, with the resulting technology advances directed to promote the goals of reduced noise and emissions along with improved performance.

  14. Time-Frequency Analysis of Rocket Nozzle Wall Pressures During Start-up Transients

    NASA Technical Reports Server (NTRS)

    Baars, Woutijn J.; Tinney, Charles E.; Ruf, Joseph H.

    2011-01-01

    Surveys of the fluctuating wall pressure were conducted on a sub-scale, thrust-optimized parabolic nozzle in order to develop a physical intuition for its Fourier-azimuthal mode behavior during fixed and transient start-up conditions. These unsteady signatures are driven by shock wave turbulent boundary layer interactions which depend on the nozzle pressure ratio and nozzle geometry. The focus, however, is on the degree of similarity between the spectral footprints of these modes obtained from transient start-ups as opposed to a sequence of fixed nozzle pressure ratio conditions. For the latter, statistically converged spectra are computed using conventional Fourier analysis techniques, whereas the former are investigated by way of time-frequency analysis. The findings suggest that at low nozzle pressure ratios -- where the flow resides in a Free Shock Separation state -- strong spectral similarities occur between fixed and transient conditions. Conversely, at higher nozzle pressure ratios -- where the flow resides in Restricted Shock Separation -- stark differences are observed between the fixed and transient conditions, and these depend greatly on the ramping rate of the transient period. And so, it appears that an understanding of the dynamics during transient start-up conditions cannot be furnished by way of fixed-flow analysis.
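
    The two analysis modes contrasted above can be illustrated generically: a statistically converged Welch spectrum for a steady condition versus a spectrogram (time-frequency view) for a ramped one. The synthetic tone and chirp below are stand-ins, not nozzle wall-pressure data.

    ```python
    # Hedged sketch: converged Welch spectrum for a steady condition vs. a
    # spectrogram for a ramped one. The synthetic tone and chirp stand in for
    # wall-pressure data, which are not reproduced here.
    import numpy as np
    from scipy import signal

    fs = 10_000.0
    t = np.arange(0, 5.0, 1 / fs)
    rng = np.random.default_rng(0)
    steady = np.sin(2 * np.pi * 800 * t) + 0.5 * rng.standard_normal(t.size)
    ramp = signal.chirp(t, f0=200, f1=2000, t1=5.0) + 0.5 * rng.standard_normal(t.size)

    f_w, Pxx = signal.welch(steady, fs=fs, nperseg=4096)           # converged spectrum
    f_s, t_s, Sxx = signal.spectrogram(ramp, fs=fs, nperseg=1024)  # evolves in time

    print(f"Welch peak at {f_w[np.argmax(Pxx)]:.0f} Hz")
    print(f"spectrogram bins (freq x time): {Sxx.shape}")
    ```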

  15. Coastal evolution and littoral cells distribution in Northern Tuscany (Italy)

    NASA Astrophysics Data System (ADS)

    Anfuso, Giorgio; Pranzini, Enzo; Vitale, Giovanni

    2010-05-01

    This paper deals with a 64-km-long coastal physiographic unit located in the northern littoral of Tuscany (Italy). The investigated area recorded important erosion problems in the last century due to the reduction in sediment input from rivers and to the feeding effect of ports and shore protection structures. Vertical aerial photographs and direct field surveys (with RTK-GPS and total station) were used to reconstruct coastline changes at medium-to-long temporal scales. The littoral is a microtidal environment; the most frequent and severe storms approach from the 245° direction, with maximum one-year-recurrence Hs values between 3.5 and 4.0 m, while less frequent and severe storms approach from the 180° and 200° directions. Concerning coastal evolution over the 1938-2005 period, important accretion was recorded updrift of the two harbours (300 m at Viareggio and 100 m at Carrara port) and in a convergence area (100 m at Marina di Pietrasanta), whereas severe erosion occurred downcoast of Carrara harbour (-130 m at Marina dei Ronchi) and at the northern (unprotected) side of the Arno River mouth (with maximum values of 400 m). Locally, breakwaters and groins were implemented to solve erosion problems, but the structures solved problems only at the local scale, and not always, shifting erosion downdrift. Coastal compartmentalisation controlled the longshore distribution of erosion/accretion patterns and was strongly forced by natural and human structures and by coastal orientation in relation to the approaching wave fronts. Three main littoral cells were bounded by four natural limits: i) the Punta Bianca promontory, which works as a fixed absolute limit; ii) Marina di Pietrasanta, a convergent, free limit; iii) the Arno River mouth, a divergent limit; and iv) Livorno harbour, which works as an absolute fixed southern limit. It is important to highlight that human structures interfere with natural sediment transport within the major cells, creating small sub-cells. This way, the general natural trend determined by coastal compartmentalisation is only slightly affected by human structures, which give rise to erosion/accretion areas within the most important cells. In detail, the most important structures are the Carrara and Viareggio ports, which constitute artificial, fixed limits that allow little transport in a given direction, depending on their protrusion and wave characteristics. They allow periodic, almost unidirectional transport that, according to field observations, takes place along narrow zones parallel to the shoreline, extending to a variable depth (6-10 m) depending on wave conditions and bottom morphology. Furthermore, bypassing of limits takes place locally as a consequence of bed-load sand transport onto longshore bars, and only fine sediments bypass the structures. In detail, Carrara port only permits transport in one predominant direction (southward), whereas Viareggio port probably records a bi-directional transport, even if the northward-directed one prevails. Last, the results obtained are useful to improve the understanding of coastal processes, to manage littoral sediment transport in a sustainable manner and to minimise the need for structural interventions. For this it is sufficient to identify independent cells and partially dependent sub-cells as shoreline management units; if not, adverse impacts will inevitably be transmitted to the downdrift unit.

  16. A new reference frame for astronomically-tuned Plio-Pleistocene climate variability derived from a benthic oxygen isotope splice of the Mediterranean

    NASA Astrophysics Data System (ADS)

    Lourens, L. J.; Ziegler, M.; Konijnendijk, T. Y. M.; Hilgen, F. J.; Bos, R.; Beekvelt, B.; van Loevezijn, A.; Collin, S.

    2017-12-01

    The astronomical theory of climate has revolutionized our understanding of past climate change and enabled the development of highly accurate geologic time scales for the entire Cenozoic. Most of this understanding has come from the construction of astronomically tuned, global-ocean benthic foraminiferal oxygen isotope (δ18O) stacked records, derived from the international drilling operations of DSDP, ODP and IODP. The tuning assumes fixed phase relationships between the obliquity and precession cycles and the inferred high-latitude (i.e., glacial-interglacial) climate response; these relationships hark back to SPECMAP, which used simple ice sheet models and a limited number of radiometric dates. This approach was largely implemented in the widely applied LR04 stack, though LR04 assumed shorter response times for the smaller ice caps during the Pliocene. In the past decades, an astronomically calibrated time scale for the Pliocene and Pleistocene of the Mediterranean has been developed, which has become the reference for the standard Geologic Time Scale. Typical of Mediterranean marine sediments are cyclic lithological alternations, reflecting the interference between obliquity and precession-paced low-latitude climate variability, such as the African monsoon. Here we present the first benthic foraminiferal oxygen isotope record on the Mediterranean reference scale, which strikingly mirrors LR04. We use this record to discuss the assumed open-ocean glacial-interglacial phase relations over the past 5.3 million years.

  17. Inverse size scaling of the nucleolus by a concentration-dependent phase transition.

    PubMed

    Weber, Stephanie C; Brangwynne, Clifford P

    2015-03-02

    Just as organ size typically increases with body size, the size of intracellular structures changes as cells grow and divide. Indeed, many organelles, such as the nucleus [1, 2], mitochondria [3], mitotic spindle [4, 5], and centrosome [6], exhibit size scaling, a phenomenon in which organelle size depends linearly on cell size. However, the mechanisms of organelle size scaling remain unclear. Here, we show that the size of the nucleolus, a membraneless organelle important for cell-size homeostasis [7], is coupled to cell size by an intracellular phase transition. We find that nucleolar size directly scales with cell size in early C. elegans embryos. Surprisingly, however, when embryo size is altered, we observe inverse scaling: nucleolar size increases in small cells and decreases in large cells. We demonstrate that this seemingly contradictory result arises from maternal loading of a fixed number rather than a fixed concentration of nucleolar components, which condense into nucleoli only above a threshold concentration. Our results suggest that the physics of phase transitions can dictate whether an organelle assembles, and, if so, its size, providing a mechanistic link between organelle assembly and cell size. Since the nucleolus is known to play a key role in cell growth, this biophysical readout of cell size could provide a novel feedback mechanism for growth control. Copyright © 2015 Elsevier Ltd. All rights reserved.
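
    The fixed-number versus fixed-concentration argument can be captured in a toy threshold-condensation model, sketched below with arbitrary units (all parameter values invented): with a fixed number of components, the condensed amount (a proxy for nucleolar size) falls as cell volume grows, while a fixed concentration gives the usual direct size scaling.

    ```python
    # Toy threshold-condensation model with arbitrary units: condensed material
    # (a proxy for nucleolar size) is whatever exceeds the saturation
    # concentration. Fixed number -> inverse scaling; fixed concentration -> direct.
    import numpy as np

    c_sat = 1.0                            # saturation concentration (a.u./volume)
    V_cell = np.linspace(0.5, 5.0, 10)     # cell volumes (a.u.)

    N_fixed = 6.0                          # fixed *number* of components (maternal loading)
    size_fixed_number = np.maximum(0.0, N_fixed - c_sat * V_cell)

    c_fixed = 3.0                          # fixed *concentration* of components
    size_fixed_conc = np.maximum(0.0, (c_fixed - c_sat) * V_cell)

    for V, s_n, s_c in zip(V_cell, size_fixed_number, size_fixed_conc):
        print(f"V={V:4.2f}  fixed-N size={s_n:5.2f}  fixed-c size={s_c:5.2f}")
    ```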

  18. A new family of distribution functions for spherical galaxies

    NASA Astrophysics Data System (ADS)

    Gerhard, Ortwin E.

    1991-06-01

    The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h_α(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.

  19. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent.

    PubMed

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-07

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsions are balanced with the knot complexity in the average size. The additivity suggests the local knot picture.

  20. Scaling behavior of knotted random polygons and self-avoiding polygons: Topological swelling with enhanced exponent

    NASA Astrophysics Data System (ADS)

    Uehara, Erica; Deguchi, Tetsuo

    2017-12-01

    We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsions are balanced with the knot complexity in the average size. The additivity suggests the local knot picture.

  1. The Higgs transverse momentum distribution at NNLL and its theoretical errors

    DOE PAGES

    Neill, Duff; Rothstein, Ira Z.; Vaidya, Varun

    2015-12-15

    In this letter, we present the NNLL-NNLO transverse momentum Higgs distribution arising from gluon fusion. In the regime p⊥ ≪ m_h we include the resummation of the large logs at next-to-next-to-leading order and then match onto the α_s² fixed-order result near p⊥ ~ m_h. By utilizing the rapidity renormalization group (RRG) we are able to smoothly match between the resummed, small-p⊥ regime and the fixed-order regime. We give a detailed discussion of the scale dependence of the result, including an analysis of the rapidity scale dependence. Our central value differs from previous results, in the transition region as well as the tail, by an amount which is outside the error band. Lastly, this difference is due to the fact that the RRG profile allows us to smoothly turn off the resummation.

  2. Implant-retained dentures for full-arch rehabilitation: a case report comparing fixed and removable restorations.

    PubMed

    Zafiropoulos, Gregory-George; Hoffman, Oliver

    2011-01-01

    Dental implants as abutments for full-arch restorations are a well-documented treatment modality. This report presents a case in which the patient was treated initially with fixed restorations supported by either implants or natural teeth and subsequently treated with a removable implant/telescopic crown-supported overdenture. Advantages and disadvantages of each approach are described and discussed. While the fixed restoration resulted in a functionally satisfactory treatment outcome, the patient was displeased with the esthetic appearance. The main concern was the unnaturally long tooth shape necessary to compensate for the insufficient alveolar ridge height. Replacement of the existing restoration with an implant-supported removable overdenture led to a functionally and esthetically acceptable result. When deciding whether to use a fixed or removable implant-supported full-arch restoration, a multitude of factors must be considered. Due to the possible need for additional surgical steps to enhance the esthetic appearance surrounding fixed restorations, removable implant-supported partial dentures often are the better choice.

  3. Trenonacog alfa for prophylaxis, on-demand and perioperative management of hemophilia B.

    PubMed

    Brennan, Yvonne; Curnow, Jennifer; Favaloro, Emmanuel J

    2018-01-01

    Current treatment for hemophilia B involves replacing the missing coagulation factor IX (FIX) with either plasma-derived or recombinant (r) FIX. Trenonacog alfa is the third normal half-life rFIX that has been granted FDA approval. Area covered: In this review, the authors examine trenonacog alfa for the treatment of hemophilia B including prophylaxis, on-demand and perioperative hemostasis. They compare the PK profile to nonacog alfa and evaluate the drug's efficacy and safety from published studies. Expert opinion: Trenonacog alfa appears to be an effective and safe treatment option for patients with hemophilia B with a PK profile similar to that of nonacog alfa. Despite the advent of extended half-life rFIX and other novel therapeutic approaches, normal half-life rFIX products, including trenonacog alfa, are likely to continue to have a place in hemophilia B treatment for at least the immediate future while the new landscape takes shape, particularly in countries that cannot afford the newer treatments.

  4. Genetic modification of bone-marrow mesenchymal stem cells and hematopoietic cells with human coagulation factor IX-expressing plasmids.

    PubMed

    Sam, Mohammad Reza; Azadbakhsh, Azadeh Sadat; Farokhi, Farrah; Rezazadeh, Kobra; Sam, Sohrab; Zomorodipour, Alireza; Haddad-Mashadrizeh, Aliakbar; Delirezh, Nowruz; Mokarizadeh, Aram

    2016-05-01

    Ex-vivo gene therapy of hemophilias requires suitable bioreactors for secretion of hFIX into the circulation, and stem cells hold great potential in this regard. Viral vectors are widely manipulated and used to transfer the hFIX gene into stem cells. However, little attention has been paid to the manipulation of the hFIX transgene itself. Concurrently, the efficacy of such a therapeutic approach depends on determining which vectors give maximal transgene expression. With this in mind, TF-1 cells (a primary hematopoietic lineage) and rat bone-marrow mesenchymal stem cells (BMSCs) were transfected with five hFIX-expressing plasmids containing different combinations of two human β-globin (hBG) introns inside the hFIX-cDNA and a Kozak element, and hFIX expression was evaluated by different methods. In BMSCs and TF-1 cells, the highest hFIX levels were obtained from the intron-less and hBG intron-I,II-containing plasmids, respectively. The highest hFIX activity was obtained from the cells carrying the hBG intron-I,II-containing plasmids. BMSCs produced 1.4- to 4.7-fold more hFIX, with 2.4- to 4.4-fold higher activity, than TF-1 cells transfected with the same constructs. BMSCs and TF-1 cells could be effectively bioengineered without the use of viral vectors, and hFIX minigenes containing hBG introns could be of particular interest in stem cell-based gene therapy of hemophilias. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  5. Enhanced nearfield acoustic holography for larger distances of reconstructions using fixed parameter Tikhonov regularization

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.

    2016-07-07

    This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
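
    The essence of fixed-parameter Tikhonov inversion can be sketched on a generic linear model. The propagator matrix G below is a smoothing stand-in, not the Green's-function propagator an actual NAH implementation would build, and the regularization parameter is chosen manually, as the study advocates.

    ```python
    # Hedged sketch of fixed-parameter Tikhonov inversion on a generic linear model
    # p = G s + noise. G is a smoothing stand-in for the NAH propagator; a real
    # implementation would build G from the Green's function between the source
    # and hologram planes.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 64
    G = np.exp(-0.05 * np.abs(np.subtract.outer(range(n), range(n))))  # smoothing operator
    s_true = np.zeros(n); s_true[20] = 1.0; s_true[42] = -0.7          # source strengths
    p = G @ s_true + 0.01 * rng.standard_normal(n)                     # "hologram" data

    lam = 1e-2   # fixed, manually chosen regularization parameter
    s_hat = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ p)        # Tikhonov solution
    print(f"relative error: {np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true):.3f}")
    ```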

  6. Weather Impact on Airport Arrival Meter Fix Throughput

    NASA Technical Reports Server (NTRS)

    Wang, Yao

    2017-01-01

    Time-based flow management schedules arrival aircraft based on arrival airport conditions, airport capacity, required spacing, and weather conditions. In order to meet the scheduled time at which arrival aircraft cross an airport arrival meter fix prior to entering the airport terminal airspace, air traffic controllers impose restrictions on air traffic. Severe weather may create an airport arrival bottleneck if one or more of the airport arrival meter fixes are partially or completely blocked by the weather and the arrival demand has not been reduced accordingly. Under these conditions, aircraft are frequently put into holding patterns until they can be rerouted. A model that predicts weather-impacted meter fix throughput may help air traffic controllers direct arrival flows into the airport more efficiently, minimizing arrival meter fix congestion. This paper presents an analysis of air traffic flows across arrival meter fixes at Newark Liberty International Airport (EWR). Several scenarios of weather-impacted EWR arrival fix flows are described. Furthermore, multiple linear regression and regression tree ensemble learning approaches for translating multiple sector Weather Impacted Traffic Indexes (WITI) to EWR arrival meter fix throughputs are examined. These weather translation models are developed and validated using EWR arrival flight and weather data for April-September 2014. This study also compares the performance of the regression tree ensemble with traditional multiple linear regression models for estimating weather-impacted throughputs at each of the EWR arrival meter fixes. For all meter fixes investigated, the regression tree ensemble weather translation models show a stronger correlation between model outputs and observed meter fix throughputs than the multiple linear regression models.
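
    A hedged sketch of the two model families being compared, using scikit-learn on synthetic stand-ins for sector WITI features and meter-fix throughput (the real features, data and tuning are not reproduced here); the nonlinear interaction built into the synthetic data lets the tree ensemble outperform the linear fit, mirroring the paper's comparison pattern.

    ```python
    # Hedged sketch: linear regression vs. a regression-tree ensemble on synthetic
    # stand-ins for sector WITI features and meter-fix throughput. Features, data
    # and model tuning are invented for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    X = rng.uniform(0, 1, size=(2000, 5))      # WITI for five hypothetical sectors
    y = 12 * (1 - X[:, 0]) * (1 - 0.5 * X[:, 1]) + rng.normal(0, 0.5, 2000)  # throughput

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LinearRegression(),
                  RandomForestRegressor(n_estimators=200, random_state=0)):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, f"R^2 = {model.score(X_te, y_te):.3f}")
    ```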

  7. Fluorine-fixing efficiency on calcium-based briquette: pilot experiment, demonstration and promotion.

    PubMed

    Yang, Jiao-lan; Chen, Dong-qing; Li, Shu-min; Yue, Yin-ling; Jin, Xin; Zhao, Bing-cheng; Ying, Bo

    2010-02-05

    The fluorosis derived from coal burning is a very serious problem in China. By using fluorine-fixing technology during coal burning, we can reduce the release of fluorides from coal at the source, in order to reduce pollution of the surrounding environment by coal-burning pollutants and to decrease the intake and accumulation of fluorine in the human body. The aim of this study was to conduct a pilot experiment on the efficiency of calcium-based fluorine-fixing material during coal burning and to demonstrate and promote the technology based on laboratory research. A proper amount of calcium-based fluorine sorbent was added to high-fluorine coal to form briquettes, so that the fluorine in high-fluorine coal can be fixed in the coal slag and its release into the atmosphere reduced. We measured the various components in the briquettes and the fluorine in the coal slag, as well as the concentrations of indoor air pollutants, including fluoride, sulfur dioxide and respirable particulate matter (RPM), and evaluated the fluorine-fixing efficiency of the calcium-based fluorine sorbents and the levels of indoor air pollutants. Pilot experiments on fluorine-fixing efficiency during coal burning, as well as its demonstration and promotion, were carried out separately in Guiding and Longli Counties of Guizhou Province, two areas with coal-burning fluorosis problems. When the coal mixed with calcium-based fluorine sorbent was made into honeycomb briquettes, the average fluorine-fixing ratio in the pilot experiment was 71.8%. When calcium-based fluorine-fixing bituminous coal was made into coal balls, the average fluorine-fixing ratio was 77.3%. The indoor air concentrations of fluoride, sulfur dioxide and PM10 decreased significantly. There was a 10% increase in the cost of briquettes due to the addition of the calcium-based fluorine sorbent. The preparation process of the calcium-based fluorine-fixing briquette is simple, the product is highly flammable, and the process is applicable to regions with abundant bituminous coal. As a small-scale application, villagers may make fluorine-fixing coal balls or briquettes themselves, achieving optimum fluorine-fixing efficiency and reducing indoor air pollutants, providing environmental and social benefits.

  8. Choice with frequently changing food rates and food ratios.

    PubMed

    Baum, William M; Davison, Michael

    2014-03-01

    In studies of operant choice, when one schedule of a concurrent pair is varied while the other is held constant, the constancy of the constant schedule may exert discriminative control over performance. In our earlier experiments, schedules varied reciprocally across components within sessions, so that while the food ratio varied, the food rate remained constant. In the present experiment, we held one variable-interval (VI) schedule constant while varying the concurrent VI schedule within sessions. We studied five conditions, each with a different constant left VI schedule. On the right key, seven different VI schedules were presented in seven different unsignaled components. We analyzed performance at several different time scales. At the longest time scale, across conditions, behavior ratios varied with food ratios as would be expected from the generalized matching law. At shorter time scales, effects due to holding the left VI constant became more and more apparent the shorter the time scale. In choice relations across components, preference for the left key leveled off as the right key became leaner. Interfood choice approximated strict matching for the varied right key, whereas interfood choice hardly varied at all for the constant left key. At the shortest time scale, visit patterns differed for the left and right keys. Much evidence indicated the development of a fix-and-sample pattern. In sum, the procedural difference made a large difference to performance, except for choice at the longest time scale and the fix-and-sample pattern at the shortest time scale. © Society for the Experimental Analysis of Behavior.
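
    The generalized matching law invoked at the longest time scale, log(B1/B2) = a log(R1/R2) + log c, can be fit by simple least squares. The behavior and food ratios below are fabricated for illustration; a sensitivity a near 1 and bias c near 1 would indicate strict, unbiased matching.

    ```python
    # The generalized matching law, log(B1/B2) = a*log(R1/R2) + log c, fit by
    # least squares to fabricated data.
    import numpy as np

    rng = np.random.default_rng(2)
    food_ratio = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 10.0])  # R1/R2 per component
    behavior_ratio = 1.1 * food_ratio**0.8 * np.exp(rng.normal(0, 0.05, food_ratio.size))

    a, log_c = np.polyfit(np.log(food_ratio), np.log(behavior_ratio), 1)
    print(f"sensitivity a = {a:.2f}, bias c = {np.exp(log_c):.2f}")
    ```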

  9. A new approach to blind deconvolution of astronomical images

    NASA Astrophysics Data System (ADS)

    Vorontsov, S. V.; Jefferies, S. M.

    2017-05-01

    We readdress the strategy of finding approximate regularized solutions to the blind deconvolution problem, when both the object and the point-spread function (PSF) have finite support. Our approach consists of locating fixed points of an iteration in which both the object x and the PSF y are approximated in an alternating manner, discarding the previous approximation for x when updating x (and similarly for y), and considering the resultant fixed points as candidates for a sensible solution. Alternating approximations are performed by truncated iterative least-squares descents. The numbers of descents in the object and PSF spaces play the role of two regularization parameters. Selection of appropriate fixed points (which may not be unique) is performed by relaxing the regularization gradually, using the previous fixed point as an initial guess for finding the next one, which brings an approximation of better spatial resolution. We report the results of artificial experiments with noise-free data, targeted at examining the potential capability of the technique to deconvolve images of high complexity. We also show the results obtained with two sets of satellite images acquired using ground-based telescopes with and without adaptive optics compensation. The new approach brings much better results when compared with an alternating minimization technique based on positivity-constrained conjugate gradients, whose iterations stagnate when addressing data of high complexity. In the alternating-approximation step, we examine the performance of three different non-blind iterative deconvolution algorithms. The best results are provided by the non-negativity-constrained successive over-relaxation technique (+SOR) supplemented with an adaptive scheduling of the relaxation parameter. Results of comparable quality are obtained with steepest descents modified by imposing the non-negativity constraint, at the expense of higher numerical costs. The Richardson-Lucy (or expectation-maximization) algorithm fails to locate stable fixed points in our experiments, apparently due to inappropriate regularization properties.
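
    A one-dimensional sketch of the alternating-approximation idea may clarify the scheme: holding the current PSF, the object is re-approximated from scratch by a fixed number of truncated least-squares descent steps (and vice versa), and the pair is iterated toward a fixed point. Signals, step sizes and iteration counts below are illustrative assumptions, and plain steepest descent with a final non-negativity clip stands in for the +SOR descent the authors favor.

    ```python
    # 1-D sketch of the alternating-approximation scheme. Each half-step restarts
    # from zero and runs a fixed number of steepest-descent iterations on
    # ||kernel * u - data||^2; the step counts act as regularization parameters.
    import numpy as np

    def conv(a, b):
        return np.convolve(a, b, mode="same")

    def descend(data, kernel, n_steps, size, step=0.1):
        """Truncated steepest descent from u = 0, with a final non-negativity clip."""
        u = np.zeros(size)
        k_flip = kernel[::-1]
        for _ in range(n_steps):
            u -= step * conv(conv(u, kernel) - data, k_flip)   # gradient step
        return np.clip(u, 0.0, None)

    rng = np.random.default_rng(3)
    x_true = np.zeros(64); x_true[[20, 30, 45]] = [1.0, 0.6, 0.8]   # sparse object
    y_true = np.exp(-0.5 * ((np.arange(64) - 32) / 2.0) ** 2)       # Gaussian PSF
    y_true /= y_true.sum()
    d = conv(x_true, y_true) + 1e-3 * rng.standard_normal(64)       # blurred data

    y = np.ones(64) / 64                 # crude initial PSF guess
    for _ in range(30):                  # iterate toward a fixed point
        x = descend(d, y, n_steps=50, size=64)   # re-approximate object from scratch
        y = descend(d, x, n_steps=20, size=64)   # re-approximate PSF from scratch
    print(f"data residual: {np.linalg.norm(conv(x, y) - d):.4f}")
    ```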

  10. Tactical Infrasound

    DTIC Science & Technology

    2005-05-01

    received briefings on a variety of infrasonic sensor systems. Materials were also received from the 2001 and 2002 Infrasonic Technology Workshops and... Systems to Tactical Acoustic Systems. One issue to be considered in the evaluation of a potential tactical infrasonic system is the ability to... Communication range: Fixed / Fixed / 5 km. 7.4 A Design Approach for a Future Tactical Infrasonic Sensor System. This section describes a procedure used to

  11. A provisional fixed partial denture that simulates gingival tissue at the pontic-site defect.

    PubMed

    Haj-Ali, Reem; Walker, Mary P

    2002-03-01

    A technique is presented for the fabrication of an esthetic, provisional fixed partial denture that compensates for a pontic-site ridge defect. This provisional restoration enables both the dentist and the patient to evaluate whether this prosthetic approach will adequately camouflage the pontic-site defect or whether surgical correction of the pontic site should also be considered. Copyright 2002 by The American College of Prosthodontists.

  12. Enabling Technologies for Unified Life-Cycle Engineering of Structural Components

    DTIC Science & Technology

    1991-03-22

    representations for entities in the ULCE system for unambiguous, reliable, and efficient retrieval, manipulation, and transfer of data. Develop a rapid analysis... approaches to these functions. It is reasonable to assume that program budgets for future systems will be more restrictive and that fixed-price contracting... enemy threats, economics, and politics. The requirements are voluminous and may stipulate firm fixed-price proposals with detailed schedules. At this

  13. A CAD/CAM Zirconium Bar as a Bonded Mandibular Fixed Retainer: A Novel Approach with Two-Year Follow-Up.

    PubMed

    Zreaqat, Maen; Hassan, Rozita; Hanoun, Abdul Fatah

    2017-01-01

    Stainless steel alloys containing 8% to 12% nickel and 17% to 22% chromium are generally used in orthodontic appliances. A major concern has been the performance of these alloys in the environment in which they are intended to function, the oral cavity. Biodegradation and metal release increase the risk of hypersensitivity and cytotoxicity. This case report describes for the first time a CAD/CAM zirconium bar used as a bonded mandibular fixed retainer, with two-year follow-up, in a patient who underwent long-term treatment with a fixed orthodontic appliance and was suspected of metal hypersensitivity, as shown by the considerable increase of nickel and chromium concentrations in a sample of the patient's unstimulated saliva. The CAD/CAM design included a 1.8 mm thick bar on the lingual surface of the lower teeth from canine to canine, with occlusal rests on the mesial side of the first premolars. For better retention, a thin layer of feldspathic ceramic was added to the inner surface of the bar, which was cemented with two dual-cured cement types. The patient's complaint subsided 6 weeks after cementation. Clinical evaluation indicated good functional value: the marginal fit of the digitized CAD/CAM design and the glazed surface offered an enhanced approach to fixed retention.

  14. A CAD/CAM Zirconium Bar as a Bonded Mandibular Fixed Retainer: A Novel Approach with Two-Year Follow-Up

    PubMed Central

    Hassan, Rozita; Hanoun, Abdul Fatah

    2017-01-01

    Stainless steel alloys containing 8% to 12% nickel and 17% to 22% chromium are generally used in orthodontic appliances. A major concern has been the performance of these alloys in the environment in which they are intended to function, the oral cavity. Biodegradation and metal release increase the risk of hypersensitivity and cytotoxicity. This case report describes for the first time a CAD/CAM zirconium bar used as a bonded mandibular fixed retainer, with two-year follow-up, in a patient who underwent long-term treatment with a fixed orthodontic appliance and was suspected of metal hypersensitivity, as shown by the considerable increase of nickel and chromium concentrations in a sample of the patient's unstimulated saliva. The CAD/CAM design included a 1.8 mm thick bar on the lingual surface of the lower teeth from canine to canine, with occlusal rests on the mesial side of the first premolars. For better retention, a thin layer of feldspathic ceramic was added to the inner surface of the bar, which was cemented with two dual-cured cement types. The patient's complaint subsided 6 weeks after cementation. Clinical evaluation indicated good functional value: the marginal fit of the digitized CAD/CAM design and the glazed surface offered an enhanced approach to fixed retention. PMID:28819572

  15. Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations

    NASA Astrophysics Data System (ADS)

    Cooperman, Joshua H.

    2018-05-01

    The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. My analyses should facilitate attempts to employ the spectral dimension as a physical observable with which to delineate renormalization group trajectories in the hope of taking a continuum limit of causal dynamical triangulations at a nontrivial ultraviolet fixed point (Ambjørn et al 2016 Phys. Rev. D 93 104032, 2014 Class. Quantum Grav. 31 165003, Cooperman 2016 Gen. Relativ. Gravit. 48 1, Cooperman 2016 arXiv:1604.01798, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151).
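
    For readers unfamiliar with the observable, the spectral dimension is commonly extracted from the return probability P(sigma) of a diffusion process via d_s = -2 dln P / dln sigma. The sketch below applies this estimator to a flat periodic 3-dimensional lattice (where it should plateau near the topological value 3); applying it within causal dynamical triangulations would replace the lattice by ensemble-averaged triangulated geometries. Lattice size, step count and fit window are illustrative choices.

    ```python
    # Hedged sketch: estimate d_s = -2 dlnP/dln(sigma) from the return probability
    # of a lazy random walk on a flat periodic 3-d lattice.
    import numpy as np

    n = 33
    p = np.zeros((n, n, n)); p[0, 0, 0] = 1.0       # walker starts at the origin
    returns = []
    for _ in range(200):                             # diffusion times sigma = 1..200
        hop = sum(np.roll(p, s, axis) for axis in (0, 1, 2) for s in (1, -1)) / 6.0
        p = 0.5 * p + 0.5 * hop                      # lazy step avoids parity zeros
        returns.append(p[0, 0, 0])                   # return probability P(sigma)

    sigma = np.arange(1, 201)
    mask = (sigma > 20) & (sigma < 150)              # avoid lattice and wrap effects
    slope = np.polyfit(np.log(sigma[mask]), np.log(np.array(returns)[mask]), 1)[0]
    print(f"spectral dimension ~ {-2 * slope:.2f}")  # close to the topological value 3
    ```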

  16. Scale invariance and universality in economic phenomena

    NASA Astrophysics Data System (ADS)

    Stanley, H. E.; Amaral, L. A. N.; Gopikrishnan, P.; Plerou, V.; Salinger, M. A.

    2002-03-01

    This paper discusses some of the similarities between work being done by economists and by computational physicists seeking to contribute to economics. We also mention some of the differences in the approaches taken and seek to justify these different approaches by developing the argument that by approaching the same problem from different points of view, new results might emerge. In particular, we review two such new results. Specifically, we discuss the two newly discovered scaling results that appear to be 'universal', in the sense that they hold for widely different economies as well as for different time periods: (i) the fluctuation of price changes of any stock market is characterized by a probability density function, which is a simple power law with exponent -4 extending over 10^2 standard deviations (a factor of 10^8 on the y-axis); this result is analogous to the Gutenberg-Richter power law describing the histogram of earthquakes of a given strength; (ii) for a wide range of economic organizations, the histogram shows that the size of an organization is inversely correlated to fluctuations in size, with an exponent ≈0.2. Neither of these two new empirical laws has a firm theoretical foundation. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behaviour of the response function at the critical point (zero magnetic field) leads to large fluctuations. We discuss a curious 'symmetry breaking' for values of Σ above a certain threshold value Σ_c; here Σ is defined to be the local first moment of the probability distribution of demand Ω, the difference between the number of shares traded in buyer-initiated and seller-initiated trades. This feature is qualitatively identical to the behaviour of the probability density of the magnetization for fixed values of the inverse temperature.
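
    As an illustration of how such a tail exponent is estimated in practice, the snippet below applies the Hill estimator to synthetic heavy-tailed returns; Student-t returns with 3 degrees of freedom are a stand-in whose cumulative tail exponent is 3, i.e. a probability density with exponent -4 as quoted above.

    ```python
    # Hedged sketch: Hill estimator of a power-law tail exponent, applied to
    # synthetic Student-t(3) returns standing in for real price-change data.
    import numpy as np

    rng = np.random.default_rng(5)
    returns = rng.standard_t(df=3, size=200_000)
    x = np.sort(np.abs(returns))[::-1]        # order statistics, largest first
    k = 2000                                  # number of tail observations used
    alpha = k / np.sum(np.log(x[:k] / x[k]))  # Hill estimate of the tail index
    print(f"cumulative exponent ~ {alpha:.2f} (density exponent ~ {alpha + 1:.2f})")
    ```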

  17. Momentum-space resummation for transverse observables and the Higgs p⊥ at N3LL+NNLO

    NASA Astrophysics Data System (ADS)

    Bizoń, Wojciech; Monni, Pier Francesco; Re, Emanuele; Rottoli, Luca; Torrielli, Paolo

    2018-02-01

    We present an approach to the momentum-space resummation of global, recursively infrared and collinear safe observables that can vanish away from the Sudakov region. We focus on the hadro-production of a generic colour singlet, and we consider the class of observables that depend only upon the total transverse momentum of the radiation, prime examples being the transverse momentum of the singlet, and φ* in Drell-Yan pair production. We derive a resummation formula valid up to next-to-next-to-next-to-leading-logarithmic accuracy for the considered class of observables. We use this result to compute state-of-the-art predictions for the Higgs-boson transverse-momentum spectrum at the LHC at next-to-next-to-next-to-leading-logarithmic accuracy matched to fixed next-to-next-to-leading order. Our resummation formula reduces exactly to the customary resummation performed in impact-parameter space in the known cases, and it also predicts the correct power-behaved scaling of the cross section in the limit of small values of the observable. We show how this formalism is efficiently implemented by means of Monte Carlo techniques in a fully exclusive generator that allows one to apply arbitrary cuts on the Born variables for any colour singlet, as well as to automatically match the resummed results to fixed-order calculations.

  18. Optimized protocol for quantitative multiple reaction monitoring-based proteomic analysis of formalin-fixed, paraffin embedded tissues

    PubMed Central

    Kennedy, Jacob J.; Whiteaker, Jeffrey R.; Schoenherr, Regine M.; Yan, Ping; Allison, Kimberly; Shipley, Melissa; Lerch, Melissa; Hoofnagle, Andrew N.; Baird, Geoffrey Stuart; Paulovich, Amanda G.

    2016-01-01

    Despite a clinical, economic, and regulatory imperative to develop companion diagnostics, precious few new biomarkers have been successfully translated into clinical use, due in part to inadequate protein assay technologies to support large-scale testing of hundreds of candidate biomarkers in formalin-fixed paraffin embedded (FFPE) tissues. While the feasibility of using targeted, multiple reaction monitoring-mass spectrometry (MRM-MS) for quantitative analyses of FFPE tissues has been demonstrated, protocols have not been systematically optimized for robust quantification across a large number of analytes, nor has the performance of peptide immuno-MRM been evaluated. To address this gap, we used a test battery approach coupled to MRM-MS with the addition of stable isotope labeled standard peptides (targeting 512 analytes) to quantitatively evaluate the performance of three extraction protocols in combination with three trypsin digestion protocols (i.e. 9 processes). A process based on RapiGest buffer extraction and urea-based digestion was identified to enable similar quantitation results from FFPE and frozen tissues. Using the optimized protocols for MRM-based analysis of FFPE tissues, median precision was 11.4% (across 249 analytes). There was excellent correlation between measurements made on matched FFPE and frozen tissues, both for direct MRM analysis (R2 = 0.94) and immuno-MRM (R2 = 0.89). The optimized process enables highly reproducible, multiplex, standardizable, quantitative MRM in archival tissue specimens. PMID:27462933

  19. Evaluation of the Community Multi-scale Air Quality (CMAQ) ...

    EPA Pesticide Factsheets

    The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transport and fate of numerous air pollutants, including ozone and particulate matter. The Computational Exposure Division (CED) of the U.S. Environmental Protection Agency develops the CMAQ model and periodically releases new versions of the model that include bug fixes and various other improvements to the modeling system. In the fall of 2015, CMAQ version 5.1 was released. This new version contains important bug fixes for several issues that were identified in CMAQv5.0.2 and additionally includes updates to other portions of the code. Several annual, and numerous episodic, CMAQv5.1 simulations were performed to assess the impact of these improvements on the model results. These results will be presented, along with a base evaluation of the performance of the CMAQv5.1 modeling system against available surface and upper-air measurements for the time period simulated. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, proces

  20. Radiofrequency energy deposition and radiofrequency power requirements in parallel transmission with increasing distance from the coil to the sample.

    PubMed

    Deniz, Cem M; Vaidya, Manushka V; Sodickson, Daniel K; Lattanzi, Riccardo

    2016-01-01

    We investigated global specific absorption rate (SAR) and radiofrequency (RF) power requirements in parallel transmission as the distance between the transmit coils and the sample was increased. We calculated ultimate intrinsic SAR (UISAR), which depends on object geometry and electrical properties but not on coil design, and we used it as the reference to compare the performance of various transmit arrays. We investigated the case of fixing coil size and increasing the number of coils while moving the array away from the sample, as well as the case of fixing coil number and scaling coil dimensions. We also investigated RF power requirements as a function of lift-off, and tracked local SAR distributions associated with global SAR optima. In all cases, the target excitation profile was achieved and global SAR (as well as associated maximum local SAR) decreased with lift-off, approaching UISAR, which was constant for all lift-offs. We observed a lift-off value that optimizes the balance between global SAR and power losses in coil conductors. We showed that, using parallel transmission, global SAR can decrease at ultra high fields for finite arrays with a sufficient number of transmit elements. For parallel transmission, the distance between coils and object can be optimized to reduce SAR and minimize RF power requirements associated with homogeneous excitation. © 2015 Wiley Periodicals, Inc.
