Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small- and large-scale thermal features to salmon populations has been challenged both by the difficulty of mapping thermal regimes at sufficient spatial and temporal resolution and by the difficulty of integrating thermal regimes into population models. We attempt to address these challenges by using newly available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing the sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Projection rule for complex-valued associative memory with large constant terms
NASA Astrophysics Data System (ADS)
Kitahara, Michimasa; Kobayashi, Masaki
Complex-valued Associative Memory (CAM) has an inherent property of rotation invariance. Rotation invariance produces many undesirable stable states and reduces the noise robustness of CAM. Constant terms may remove rotation invariance, but if the constant terms are too small, rotation invariance does not vanish. In this paper, we eliminate rotation invariance by introducing large constant terms to the complex-valued neurons. The constant terms must be made sufficiently large to improve noise robustness. We introduce a parameter into the projection rule to control the amplitudes of the constant terms. Computer simulations demonstrate that large constant terms are effective.
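A minimal sketch of a projection-rule CAM with an amplitude-controlled constant term is below. Treating the constant term as an extra neuron clamped to amplitude `a` is an assumption for illustration; it is not necessarily the authors' formulation, and all parameter values are arbitrary.

```python
import numpy as np

K = 4                                        # number of phase states per neuron
a = 5.0                                      # amplitude of the constant term (the control parameter)
phases = np.exp(2j * np.pi * np.arange(K) / K)

def quantize(z):
    # map each complex activation to the nearest of the K phase states
    return phases[np.argmax((z[:, None] * phases.conj()[None, :]).real, axis=1)]

def projection_weights(patterns):
    # patterns: (P, n) array of phase states; append a constant component of amplitude a,
    # then build the projector onto the span of the augmented stored patterns
    X = np.vstack([patterns.T, a * np.ones((1, patterns.shape[0]))])
    return X @ np.linalg.pinv(X)

def recall_step(W, state):
    # the constant (bias) neuron is clamped to a and never updated,
    # which breaks the rotation invariance of the plain projection rule
    z = W @ np.append(state, a)
    return quantize(z[:-1])

# stored patterns are exact fixed points of the recall dynamics
rng = np.random.default_rng(0)
patterns = phases[rng.integers(0, K, size=(2, 20))]
W = projection_weights(patterns)
recalled = recall_step(W, patterns[0])
```

Because `W` projects onto the span of the augmented patterns, each stored pattern is reproduced exactly, while a globally rotated pattern no longer matches the clamped bias component.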
Quantum communication complexity advantage implies violation of a Bell inequality
Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii
2016-01-01
We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600
A predator-prey model with generic birth and death rates for the predator.
Terry, Alan J
2014-02-01
We propose and study a predator-prey model in which the predator has a Holling type II functional response and generic per capita birth and death rates. Given that prey consumption provides the energy for predator activity, and that the predator functional response represents the prey consumption rate per predator, we assume that the per capita birth and death rates for the predator are, respectively, increasing and decreasing functions of the predator functional response. These functions are monotonic, but not necessarily strictly monotonic, for all values of the argument. In particular, we allow the possibility that the predator birth rate is zero for all sufficiently small values of the predator functional response, reflecting the idea that a certain level of energy intake is needed before a predator can reproduce. Our analysis reveals that the model exhibits the behaviours typically found in predator-prey models - extinction of the predator population, convergence to a periodic orbit, or convergence to a co-existence fixed point. For a specific example, in which the predator birth and death rates are constant for all sufficiently small or large values of the predator functional response, we corroborate our analysis with numerical simulations. In the unlikely case where these birth and death rates equal the same constant for all sufficiently large values of the predator functional response, the model is capable of structurally unstable behaviour, with a small change in the initial conditions leading to a more pronounced change in the long-term dynamics. Copyright © 2013 Elsevier Inc. All rights reserved.
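A numerical sketch of such a model is below. The specific birth and death functions and all parameter values are illustrative choices that satisfy the stated monotonicity assumptions (birth rate zero below an intake threshold, death rate decreasing in intake); they are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the paper)
r, K = 1.0, 10.0          # prey growth rate and carrying capacity
c, h = 1.0, 0.5           # attack rate and handling time (Holling type II)
f0 = 0.2                  # energy-intake threshold below which the birth rate is zero
b1, d0 = 0.8, 0.3         # birth-rate slope and baseline death rate

def holling2(x):
    return c * x / (1.0 + c * h * x)     # prey consumed per predator per unit time

def birth(f):
    return b1 * max(f - f0, 0.0)         # zero until intake exceeds the threshold

def death(f):
    return d0 / (1.0 + f)                # decreasing in the intake rate

def rhs(t, z):
    x, y = z
    f = holling2(x)
    return [r * x * (1 - x / K) - f * y,   # logistic prey minus predation
            (birth(f) - death(f)) * y]     # predator per capita growth

sol = solve_ivp(rhs, (0, 200), [5.0, 1.0], rtol=1e-8, atol=1e-10)
```

Varying `f0` and `b1` moves the system among the behaviours described above: predator extinction, a periodic orbit, or a coexistence fixed point.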
Dairy manure biochar as a phosphorus fertilizer
USDA-ARS?s Scientific Manuscript database
Future manure management practices will need to remove large amounts of organic waste as well as harness energy to generate value-added products. Manures can be processed using thermochemical conversion technologies to generate a solid product called biochar. Dairy manure biochars contain sufficient...
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
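The Maxwell-Garnett mixing rule mentioned above can be written in a few lines. This is a standard textbook form (host matrix with spherical inclusions of volume fraction f); the refractive-index values in the usage line are arbitrary examples.

```python
import numpy as np

def maxwell_garnett(m_host, m_incl, f):
    """Effective refractive index of a host containing a volume fraction f of inclusions."""
    e_h, e_i = m_host ** 2, m_incl ** 2      # permittivities from refractive indices
    e_eff = e_h * (e_i + 2 * e_h + 2 * f * (e_i - e_h)) / (e_i + 2 * e_h - f * (e_i - e_h))
    return np.sqrt(e_eff)

# sanity limits: f = 0 recovers the host, f = 1 recovers the inclusion material
m_eff = maxwell_garnett(1.33, 1.55, 0.1)
```

For absorbing materials, complex refractive indices (m = n + ik) can be passed directly.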
Effect of H-wave polarization on laser radar detection of partially convex targets in random media.
El-Ocla, Hosam
2010-07-01
The laser radar cross section (LRCS) of conducting targets with large sizes is investigated numerically in free space and in random media. The LRCS is calculated using a boundary value method with beam wave incidence and H-wave polarization. The elements that contribute to the LRCS problem are considered, including random medium strength, target configuration, and beam width. The effect on the LRCS behavior of the creeping waves stimulated by H-polarization is demonstrated. Target sizes of up to five wavelengths are sufficiently larger than the beam width and large enough to represent fairly complex targets. Scatterers are assumed to have analytical partially convex contours with inflection points.
A Theory of School Achievement: A Quantum View
ERIC Educational Resources Information Center
Phelps, James L.
2012-01-01
In most school achievement research, the relationships between achievement and explanatory variables follow the Newton and Einstein concept/principle and the viewpoint of the macro-observer: Deterministic measures based on the mean value of a sufficiently large number of schools. What if the relationships between achievement and explanatory…
Viscous and Thermal Effects on Hydrodynamic Instability in Liquid-Propellant Combustion
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
2000-01-01
A pulsating form of hydrodynamic instability has recently been shown to arise during the deflagration of liquid propellants in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau, form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that, when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wavenumbers are sufficiently small. In the present work, this analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wavenumber perturbations, the intrinsic pulsating instability for small wavenumbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.
NASA Astrophysics Data System (ADS)
Ng, C. S.; Bhattacharjee, A.
1996-08-01
A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (pxx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (pxxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrates that, for fixed total energy, pxxxx is predominantly positive, with the average value growing with the number of modes.
Calculating p-values and their significances with the Energy Test for large datasets
NASA Astrophysics Data System (ADS)
Barter, W.; Burr, C.; Parkes, C.
2018-04-01
The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
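A sketch of the T-value computation with a Gaussian distance weighting is below. The normalization and the default σ are one common convention (an assumption, not necessarily the paper's); in practice the null distribution of T comes from permutations of the pooled sample or from the scaling method described above.

```python
import numpy as np

def energy_T(s1, s2, sigma=0.3):
    """Energy-test T statistic for two samples (rows are events, columns are dimensions)."""
    def psi(a, b):
        # Gaussian weighting of squared Euclidean distances between all event pairs
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    n1, n2 = len(s1), len(s2)
    # subtract the diagonal (self-pairs, psi = 1) and halve to count each i < j pair once
    t11 = (psi(s1, s1).sum() - n1) / (2.0 * n1 * (n1 - 1))
    t22 = (psi(s2, s2).sum() - n2) / (2.0 * n2 * (n2 - 1))
    t12 = psi(s1, s2).sum() / (n1 * n2)
    return t11 + t22 - t12
```

Two samples drawn from the same population give T near zero; samples from distinct populations give a larger T.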
Gravitational waves and large field inflation
NASA Astrophysics Data System (ADS)
Linde, Andrei
2017-02-01
According to the famous Lyth bound, one can confirm large field inflation by finding tensor modes with sufficiently large tensor-to-scalar ratio r. Here we will try to answer two related questions: is it possible to rule out all large field inflationary models by not finding tensor modes with r above some critical value, and what can we say about the scale of inflation by measuring r? However, in order to answer these questions one should distinguish between two different definitions of the large field inflation and three different definitions of the scale of inflation. We will examine these issues using the theory of cosmological α-attractors as a convenient testing ground.
NASA Technical Reports Server (NTRS)
Righetti, Pier Giorgio; Casale, Elena; Carter, Daniel; Snyder, Robert S.; Wenisch, Elisabeth; Faupel, Michel
1990-01-01
Recombinant-DNA (deoxyribonucleic acid) (r-DNA) proteins, produced in large quantities for human consumption, are now available in sufficient amounts for crystal growth. Crystallographic analysis is the only method now available for defining the atomic arrangements within complex biological molecules and decoding, e.g., the structure of the active site. Growing protein crystals in microgravity has become an important aspect of biology in space, since crystals that are large enough and of sufficient quality to permit complete structure determinations are usually obtained. However, even small amounts of impurities in a protein preparation are anathema to the growth of a regular crystal lattice. A multicompartment electrolyzer with isoelectric Immobiline membranes, able to purify large quantities of r-DNA proteins, is described. The electrolyzer consists of a stack of flow cells delimited by membranes of very precise isoelectric point (pI) and very high buffering power (polyacrylamide supported by glass-fiber filters, containing Immobiline buffers and titrants that uniquely define the pI value), able to titrate all proteins touching or crossing such membranes. By properly selecting the pI values of the two membranes delimiting a flow chamber, a single protein can be kept isoelectric in a single flow chamber and thus be purified to homogeneity (by the most stringent criterion, charge homogeneity).
Kato, Dai; Sumimoto, Michinori; Ueda, Akio; Hirono, Shigeru; Niwa, Osamu
2012-12-18
The electrokinetic parameters of all the DNA bases were evaluated using a sputter-deposited nanocarbon film electrode. It is very difficult to evaluate the electrokinetic parameters of DNA bases with conventional electrodes, particularly those of the pyrimidine bases, owing to their high oxidation potentials. Nanocarbon film formed by an electron cyclotron resonance sputtering method consists of a nanocrystalline sp² and sp³ mixed bond structure that exhibits a sufficient potential window, very low adsorption of DNA molecules, and sufficient electrochemical activity to oxidize all DNA bases. A precise evaluation of the rate constants (k) between all the bases and the electrodes is achieved for the first time by rotating disc electrode measurements with our nanocarbon film electrode. We found that the k value of each DNA base depended dominantly on the surface oxygen-containing groups of the nanocarbon film electrode, which were controlled by electrochemical pretreatment. In fact, the treated electrode exhibited optimum k values for all the mononucleotides, namely, 2.0 × 10⁻², 2.5 × 10⁻¹, 2.6 × 10⁻³, and 5.6 × 10⁻³ cm s⁻¹ for GMP, AMP, TMP, and CMP, respectively. The k value of AMP was enhanced by up to 33 times by electrochemical pretreatment. We also found the k values for the pyrimidine bases to be much lower than those of the purine bases, although there was no large difference between their diffusion coefficients. Moreover, the theoretical oxidation potential values for all the bases coincided with those obtained in electrochemical experiments using our nanocarbon film electrode.
Omega from the anisotropy of the redshift correlation function
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
Peculiar velocities distort the correlation function of galaxies observed in redshift space. In the large-scale, linear regime, the distortion takes a characteristic quadrupole plus hexadecapole form, with the amplitude of the distortion depending on the cosmological density parameter omega. Preliminary measurements are reported here of the harmonics of the correlation function in the CfA, SSRS, and IRAS 2 Jansky redshift surveys. The observed behavior of the harmonics agrees qualitatively with the predictions of linear theory on large scales in every survey. However, real anisotropy in the galaxy distribution induces large fluctuations in samples which do not yet probe a sufficiently fair volume of the Universe. In the CfA 14.5 sample in particular, the Great Wall induces a large negative quadrupole, which taken at face value implies an unrealistically large omega of 20. The IRAS 2 Jy survey, which covers a substantially larger volume than the optical surveys and is less affected by fingers-of-god, yields a more reliable and believable value, omega = 0.5 (+0.5, -0.25).
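In linear theory the quadrupole-to-monopole ratio of the redshift-space distortion is a known function of β ≈ Ω^0.6/b, so a measured ratio can be inverted for Ω. The sketch below uses the standard Kaiser power-spectrum multipole coefficients and assumes unbiased galaxies (b = 1); the input ratio is a synthetic round-trip value, not survey data.

```python
def quad_ratio(beta):
    # linear-theory quadrupole-to-monopole ratio (Kaiser multipoles)
    return (4 * beta / 3 + 4 * beta ** 2 / 7) / (1 + 2 * beta / 3 + beta ** 2 / 5)

def beta_from_ratio(r, lo=0.0, hi=3.0, tol=1e-10):
    # quad_ratio is monotonically increasing on [0, 3], so invert by bisection
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if quad_ratio(mid) < r else (lo, mid)
    return 0.5 * (lo + hi)

measured = quad_ratio(0.55)        # stand-in for a measured quadrupole-to-monopole ratio
beta = beta_from_ratio(measured)
omega = beta ** (1 / 0.6)          # assuming unbiased galaxies, b = 1
```

With omega = 0.5 as quoted above, the predicted ratio is quad_ratio(0.5**0.6), illustrating how the survey harmonics constrain the density parameter.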
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured and simulated values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
The study of natural reproduction on burned forest areas
J. A. Larsen
1928-01-01
It is not necessary herein to quote statistics on the areas and values of timberland destroyed each year in the United States. The losses are sufficiently large to attract attention and to present problems in forest management as well as in forest research. The situation is here and every forester must meet it, be he manager or investigator. This paper is an attempt to...
Complex behavior in chains of nonlinear oscillators.
Alonso, Leandro M
2017-06-01
This article outlines sufficient conditions under which a one-dimensional chain of identical nonlinear oscillators can display complex spatio-temporal behavior. The units are described by phase equations and consist of excitable oscillators. The interactions are local and the network is poised to a critical state by balancing excitation and inhibition locally. The results presented here suggest that in networks composed of many oscillatory units with local interactions, excitability together with balanced interactions is sufficient to give rise to complex emergent features. For values of the parameters where complex behavior occurs, the system also displays a high-dimensional bifurcation where an exponentially large number of equilibria are borne in pairs out of multiple saddle-node bifurcations.
Dynamics of some fictitious satellites of Venus and Mars
NASA Astrophysics Data System (ADS)
Yokoyama, Tadashi
1999-05-01
The dynamics of some fictitious satellites of Venus and Mars are studied considering only solar perturbation and the oblateness of the planet as disturbing forces. Several numerical integrations of the averaged system, taking different values of the obliquity of the ecliptic (ε), show the existence of strong chaotic motion, provided that the semimajor axis is near a critical value. As a consequence, large increases in eccentricity occur, and the satellites may collide with the planet or cross possible internal orbits. Even starting from almost circular and equatorial orbits, most satellites can easily reach prohibitive eccentricities. The extension of the chaotic zone depends clearly on the value of ε, so that previously regular regions may become chaotic, provided ε increases sufficiently.
NASA Technical Reports Server (NTRS)
Hall, Philip; Balakumar, P.
1990-01-01
A class of exact steady and unsteady solutions of the Navier-Stokes equations in cylindrical polar coordinates is given. The flows correspond to the motion induced by an infinite disc rotating with constant angular velocity about the z-axis in a fluid occupying a semi-infinite region which, at large distances from the disc, has a velocity field proportional to (x, -y, 0) with respect to a Cartesian coordinate system. It is shown that when the rate of rotation is large, Karman's exact solution for a disc rotating in an otherwise motionless fluid is recovered. In the limit of zero rotation rate a particular form of Howarth's exact solution for three-dimensional stagnation-point flow is obtained. The unsteady form of the partial differential system describing this class of flow may be generalized to time-periodic equilibrium flows. In addition, the unsteady equations are shown to describe a strongly nonlinear instability of Karman's rotating-disc flow. It is shown that sufficiently large perturbations lead to a finite-time breakdown of that flow, whilst smaller disturbances decay to zero. If the stagnation-point flow at infinity is sufficiently strong, the steady basic states become linearly unstable. In fact there is then a continuous spectrum of unstable eigenvalues of the stability equations but, if the initial value problem is considered, it is found that, at large values of time, the continuous spectrum leads to a velocity field growing exponentially in time with an amplitude decaying algebraically in time.
Serum 25(OH)D seasonality in urologic patients from central Italy.
Calgani, Alessia; Iarlori, Marco; Rizi, Vincenzo; Pace, Gianna; Bologna, Mauro; Vicentini, Carlo; Angelucci, Adriano
2016-09-01
Hypovitaminosis D is increasingly recognized as a cofactor in several diseases. In addition to bone homeostasis, vitamin D status influences the immune system, muscle activity and cell differentiation in different tissues. Vitamin D is produced in the skin upon exposure to UVB rays, and sufficient levels of serum 25(OH)D depend mostly on adequate sun exposure and then on specific physiologic variables, including skin type, age and Body Mass Index (BMI). In contrast with common belief, epidemiologic data demonstrate that hypovitaminosis D must be a clinical concern not only in northern countries. In our study, we investigated vitamin D status in a male population enrolled in a urology clinic of central Italy. In addition, we evaluated the correlation between vitamin D status and UVB irradiance measured in our region. The two principal pathologies in the 95 enrolled patients (mean age 66 years) were benign prostate hypertrophy and prostate carcinoma. More than 50% of patients had serum 25(OH)D values in the deficient range (<20 ng/mL), and only 16% of cases had serum vitamin D concentrations higher than 30 ng/mL (optimal range). The seasonal stratification of vitamin D concentrations revealed an evident trend, with the minimum mean value recorded in April and the maximum mean value in September. UVB irradiance measured by pyranometer in our region (Abruzzo, central Italy) revealed a large difference during the year, with winter months characterized by a UV irradiance about tenfold lower than summer months. We then applied a mathematical model to evaluate the expected vitamin D production according to the standard erythemal dose measured in the different seasons. In winter months, the low available UVB radiation and the small exposed skin area were not sufficient to produce the recommended serum levels of vitamin D.
Although in summer months UVB irradiance was largely in excess of that needed to produce vitamin D in the skin, serum vitamin D was sufficient in September only in those patients who reported an outdoor time of at least 3 h per day in the previous summer. In conclusion, hypovitaminosis D is largely represented in elderly persons in our region. Seasonal fluctuation in serum 25(OH)D was explained by the reduced availability of UVB in winter and by insufficient solar exposure in summer. The relatively high outdoor time that emerged as correlated with sufficient serum 25(OH)D in autumn warrants further studies to identify potential risk co-variables for hypovitaminosis D in elderly men. Copyright © 2016 Elsevier B.V. All rights reserved.
Ren, Tao; Zhang, Chuan; Lin, Lin; Guo, Meiting; Xie, Xionghang
2014-01-01
We address the scheduling problem for a no-wait flow shop to optimize total completion time with release dates. With the tool of asymptotic analysis, we prove that the objective values of two SPTA-based algorithms converge to the optimal value for sufficiently large-sized problems. To further enhance the performance of the SPTA-based algorithms, an improvement scheme based on local search is provided for moderate scale problems. New lower bound is presented for evaluating the asymptotic optimality of the algorithms. Numerical simulations demonstrate the effectiveness of the proposed algorithms. PMID:24764774
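An SPT-style list-scheduling heuristic for this setting can be sketched as follows. The dispatching rule (shortest total processing time among released jobs) and the standard no-wait separation between consecutive jobs are assumptions for illustration; this is not the authors' SPTA algorithms.

```python
import numpy as np

def no_wait_delay(p_prev, p_next):
    # minimal start-time gap between consecutive jobs in a no-wait flow shop:
    # max over machines of (completion offset of prev) - (start offset of next)
    c_prev = np.cumsum(p_prev)
    s_next = np.concatenate([[0.0], np.cumsum(p_next)[:-1]])
    return float(np.max(c_prev - s_next))

def spt_schedule(p, r):
    # p[j] = processing times of job j on each machine, r[j] = release date of job j
    remaining = set(range(len(p)))
    t, s_prev, p_prev, total, order = 0.0, 0.0, None, 0.0, []
    while remaining:
        released = [j for j in remaining if r[j] <= t]
        if not released:
            t = min(r[j] for j in remaining)       # idle until the next release
            continue
        j = min(released, key=lambda k: sum(p[k]))  # shortest total processing time first
        start = max(t, r[j])
        if p_prev is not None:
            start = max(start, s_prev + no_wait_delay(p_prev, p[j]))
        total += start + sum(p[j])                  # accumulate completion times
        order.append(j)
        remaining.remove(j)
        t, s_prev, p_prev = start, start, p[j]
    return total, order
```

On the two-job, two-machine instance in the test, the shorter job goes first and the second job is delayed by one time unit so that it never waits between machines.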
Magnetic Fields Recorded by Chondrules Formed in Nebular Shocks
NASA Astrophysics Data System (ADS)
Mai, Chuhong; Desch, Steven J.; Boley, Aaron C.; Weiss, Benjamin P.
2018-04-01
Recent laboratory efforts have constrained the remanent magnetizations of chondrules and the magnetic field strengths to which the chondrules were exposed as they cooled below their Curie points. An outstanding question is whether the inferred paleofields represent the background magnetic field of the solar nebula or were unique to the chondrule-forming environment. We investigate the amplification of the magnetic field above background values for two proposed chondrule formation mechanisms, large-scale nebular shocks and planetary bow shocks. Behind large-scale shocks, the magnetic field parallel to the shock front is amplified by factors of ∼10–30, regardless of the magnetic diffusivity. Therefore, chondrules melted in these shocks probably recorded an amplified magnetic field. Behind planetary bow shocks, the field amplification is sensitive to the magnetic diffusivity. We compute the gas properties behind a bow shock around a 3000 km radius planetary embryo, with and without atmospheres, using hydrodynamics models. We calculate the ionization state of the hot, shocked gas, including thermionic emission from dust, thermal ionization of gas-phase potassium atoms, and the magnetic diffusivity due to Ohmic dissipation and ambipolar diffusion. We find that the diffusivity is sufficiently large that magnetic fields have already relaxed to background values in the shock downstream where chondrules acquire magnetizations, and that these locations are sufficiently far from the planetary embryos that chondrules should not have recorded a significant putative dynamo field generated on these bodies. We conclude that, if melted in planetary bow shocks, chondrules probably recorded the background nebular field.
Quantum Dynamics of Helium Clusters
1993-03-01
the structure of both these and the HeN clusters in the body-fixed frame by computing principal moments of inertia, thereby avoiding the … of helium clusters, with the modification that we subtract 0.96 K from the computed values so that for sufficiently large clusters we recover the … phonon spectrum of liquid He. To get a picture of these spectra one needs to compute the structure functions … Monte Carlo random walk simulations
Bleustein-Gulyaev wave propagation characteristics in KNbO3 and PKN crystals
NASA Astrophysics Data System (ADS)
Dvoesherstov, M. Y.; Cherednick, V. I.; Chirimanov, A. P.; Petrov, S. G.
1999-09-01
In this paper, a theoretical investigation is presented of the cuts and propagation directions on KNbO3 and PKN substrates in which Bleustein-Gulyaev waves exist. For Y-cut, X-propagating KNbO3 and PKN crystals, the stiffened shear horizontal wave and the pure mechanical Rayleigh wave are both present. In this symmetry orientation the sagittal and transverse particle displacements also uncouple, so that the potential is coupled to the shear horizontal displacements only. The electromechanical coupling coefficient K² has a sufficiently large value of above 53 percent with a phase velocity V = 3.918 km/s for KNbO3 crystals, and K² has a large value of above 23.6 percent with a phase velocity V = 3.054 km/s for PKN crystals.
Minimal sufficient positive-operator valued measure on a separable Hilbert space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuramochi, Yui, E-mail: kuramochi.yui.22c@st.kyoto-u.ac.jp
We introduce the concept of a minimal sufficient positive-operator valued measure (POVM), which is the least redundant POVM among the POVMs that have equivalent information about the measured quantum system. Assuming the system Hilbert space to be separable, we show that for a given POVM a sufficient statistic called a Lehmann-Scheffé-Bahadur statistic induces a minimal sufficient POVM. We also show that every POVM has an equivalent minimal sufficient POVM and that such a minimal sufficient POVM is unique up to relabeling, neglecting null sets. We apply these results to discrete POVMs and to information conservation conditions proposed by the author.
Information content analysis: the potential for methane isotopologue retrieval from GOSAT-2
NASA Astrophysics Data System (ADS)
Malina, Edward; Yoshida, Yukio; Matsunaga, Tsuneo; Muller, Jan-Peter
2018-02-01
Atmospheric methane comprises multiple isotopic molecules, with the most abundant being 12CH4 and 13CH4, making up 98 and 1.1 % of atmospheric methane respectively. It has been shown that it is possible to distinguish between sources of methane (biogenic methane, e.g. marshland, or abiogenic methane, e.g. fracking) via the ratio of these main methane isotopologues, otherwise known as the δ13C value. δ13C values typically range between -10 and -80 ‰, with abiogenic sources closer to zero and biogenic sources showing more negative values. We suggest that a δ13C difference of 10 ‰ is sufficient to differentiate between methane source types; on this basis we derive that a precision of 0.2 ppbv on 13CH4 retrievals may achieve the target δ13C variance. Using an application of the well-established information content analysis (ICA) technique for assumed clear-sky conditions, this paper shows that using a combination of the shortwave infrared (SWIR) bands on the planned Greenhouse gases Observing SATellite (GOSAT-2) mission, 13CH4 can be measured with sufficient information content to a precision of between 0.7 and 1.2 ppbv from a single sounding (assuming a total column average value of 19.14 ppbv), which can then be reduced to the target precision through spatial and temporal averaging techniques. We therefore suggest that GOSAT-2 can be used to differentiate between methane source types. We find that large unconstrained covariance matrices are required in order to achieve sufficient information content, while the solar zenith angle has limited impact on the information content.
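The δ13C bookkeeping behind the stated precision target can be sketched numerically. All inputs below except the 19.14 ppbv 13CH4 column value (which appears in the abstract) are illustrative assumptions, including the VPDB reference ratio and the assumed 12CH4 column value:

```python
R_VPDB = 0.011180          # assumed VPDB 13C/12C reference ratio

def delta13c(x13, x12, r_std=R_VPDB):
    """delta-13C in permil from 13CH4 and 12CH4 mixing ratios."""
    return 1000.0 * ((x13 / x12) / r_std - 1.0)

x13 = 19.14                # ppbv, column-average 13CH4 (value from the abstract)
x12 = 1796.0               # ppbv, assumed column-average 12CH4

d0 = delta13c(x13, x12)
# Effect of a 0.2 ppbv retrieval error in 13CH4 on the inferred delta-13C:
d_err = delta13c(x13 + 0.2, x12) - d0

print(f"delta13C = {d0:.1f} permil; a 0.2 ppbv shift moves it by {d_err:.1f} permil")
```

With these assumed values, a 0.2 ppbv error in 13CH4 maps to roughly a 10 ‰ shift in δ13C, consistent with the precision target quoted above.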
Heavy-flavor parton distributions without heavy-flavor matching prescriptions
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Glazov, Alexandre; Mitov, Alexander; Papanastasiou, Andrew S.; Ubiali, Maria
2018-04-01
We show that the well-known obstacle for working with the zero-mass variable flavor number scheme, namely, the omission of O(1) mass power corrections close to the conventional heavy flavor matching point (HFMP) μ b = m, can be easily overcome. For this it is sufficient to take advantage of the freedom in choosing the position of the HFMP. We demonstrate that by choosing a sufficiently large HFMP, which could be as large as 10 times the mass of the heavy quark, one can achieve the following improvements: 1) above the HFMP the size of missing power corrections O(m) is restricted by the value of μ b and, therefore, the error associated with their omission can be made negligible; 2) additional prescriptions for the definition of cross-sections are not required; 3) the resummation accuracy is maintained and 4) contrary to the common lore we find that the discontinuity of α s and pdfs across thresholds leads to improved continuity in predictions for observables. We have considered a large set of proton-proton and electron-proton collider processes, many through NNLO QCD, that demonstrate the broad applicability of our proposal.
NASA Astrophysics Data System (ADS)
Shimanuki, Masaharu; Aoyagi, Manabu; Tomikawa, Yoshiro
1994-05-01
The present paper deals with the single-resonance longitudinal and torsional vibrator combination-type motor, which is one of the ultrasonic motors with a relatively large torque. To improve the characteristics of this motor, the authors studied the calculation method of the resonance frequencies and designed the motor so that the resonance frequencies of the longitudinal and torsional vibrations were very close to the measured ones, because it was thought that the motor characteristics were largely affected by the degree of approximation of the resonance frequencies. Experimental results have proven that the prototype motor produced large torque with a maximum of 14.0 kgf·cm under a total electrical input power of 30 W; this value was 1.5 times as large as that reported previously. That is, it has been clarified that with sufficient degree of approximation of the resonance frequencies, as mentioned above, the output torque of the motor could be greatly improved; however, its efficiency (maximum of 13.1%) was maintained at almost the same value as before.
Kwuimy, C A Kitio; Nataraj, C; Litak, G
2011-12-01
We consider the problems of chaos and parametric control in nonlinear systems under an asymmetric potential subjected to a multiscale-type excitation. The lower bound line for horseshoe chaos is analyzed using Melnikov's criterion for a transition to permanent or transient nonperiodic motions, complemented by the fractal or regular shape of the basin of attraction. Numerical simulations based on the basins of attraction, bifurcation diagrams, Poincaré sections, Lyapunov exponents, and phase portraits are used to show how stationary dissipative chaos occurs in the system. Our attention is focused on the effects of the asymmetric potential term and the driving frequency. It is shown that the threshold amplitude |γc| of the excitation decreases for small values of the driving frequency ω and increases for large values of ω. This threshold value decreases with the asymmetric parameter α and becomes constant for sufficiently large values of α. |γc| has its maximum value for asymmetric load in comparison with the symmetric load. Finally, we apply the Melnikov theorem to the controlled system to explore the gain control parameter dependencies.
On the Value-Dependence of Value-Driven Attentional Capture
Anderson, Brian A.; Halpern, Madeline
2017-01-01
Findings from an increasingly large number of studies have been used to argue that attentional capture can be dependent on the learned value of a stimulus, or value-driven. However, under certain circumstances attention can be biased to select stimuli that previously served as targets, independent of reward history. Value-driven attentional capture, as studied using the training phase-test phase design introduced by Anderson and colleagues, is widely presumed to reflect the combined influence of learned value and selection history. However, the degree to which attentional capture is at all dependent on value learning in this paradigm has recently been questioned. Support for value-dependence can be provided through one of two means: (1) greater attentional capture by prior targets following rewarded training than following unrewarded training, and (2) greater attentional capture by prior targets previously associated with high compared to low value. Using a variant of the original value-driven attentional capture paradigm, Sha and Jiang (2016) failed to find evidence of either, and raised criticisms regarding the adequacy of evidence provided by prior studies using this particular paradigm. To address this disparity, here we provided a stringent test of the value-dependence hypothesis using the traditional value-driven attentional capture paradigm. With a sufficiently large sample size, value-dependence was observed based on both criteria, with no evidence of attentional capture without rewards during training. Our findings support the validity of the traditional value-driven attentional capture paradigm in measuring what its name purports to measure. PMID:28176215
1991-01-01
School c. Automobiles : Autos are permitted at all schools. d. Personal items: Personal items of great monetary value or large bulk should not be brought...vehicles is encouraged due to our remoteness. A valid driver’s license, proof of ownership, and current and sufficient automobile insurance to meet...attire. 8. MISCELLANEOUS: NSGA Skaggs Island is relatively isolated and having access to an automobile while here is highly desirable. Commercial
NASA Astrophysics Data System (ADS)
Dorofeeva, Olga V.; Suchkova, Taisiya A.
2018-04-01
The gas-phase enthalpies of formation of four molecules with high flexibility, which leads to the existence of a large number of low-energy conformers, were calculated with the G4 method to see whether the lowest energy conformer is sufficient to achieve high accuracy in the computed values. The calculated values were in good agreement with the experiment, whereas adding the correction for conformer distribution makes the agreement worse. The reason for this effect is a large anharmonicity of low-frequency torsional motions, which is ignored in the calculation of ZPVE and thermal enthalpy. It was shown that the approximate correction for anharmonicity estimated using a free rotor model is of very similar magnitude compared with the conformer correction but has the opposite sign, and thus almost fully compensates for it. Therefore, the common practice of adding only the conformer correction is not without problems.
NASA Astrophysics Data System (ADS)
James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2017-06-01
One of the most basic characterizations of the relationship between two random variables, X and Y , is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y ) can be replaced by its minimal sufficient statistic about Y (or X ) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X 's minimal sufficient statistic preserves about Y is exactly the information that Y 's minimal sufficient statistic preserves about X . We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
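The central claim above — that replacing a variable by its minimal sufficient statistic preserves mutual information — can be illustrated on a toy joint distribution. The distribution and the exact-match merging below are illustrative, not the paper's construction:

```python
import math
from collections import defaultdict

def mutual_information(pxy):
    """I(X;Y) in bits from a joint distribution dict {(x, y): p}."""
    px, py = defaultdict(float), defaultdict(float)
    for (x, y), p in pxy.items():
        px[x] += p
        py[y] += p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in pxy.items() if p > 0)

# Toy joint distribution: x in {a, b, c}, y in {0, 1}.
# 'a' and 'b' induce the same conditional p(y|x), so the minimal
# sufficient statistic of X about Y merges them into one state.
pxy = {('a', 0): 0.2, ('a', 1): 0.2,
       ('b', 0): 0.1, ('b', 1): 0.1,
       ('c', 0): 0.35, ('c', 1): 0.05}

def trim(pxy):
    """Merge x-values with identical conditional p(y|x) (exact match here)."""
    px = defaultdict(float)
    for (x, y), p in pxy.items():
        px[x] += p
    cond = {}
    for (x, y), p in pxy.items():
        cond.setdefault(x, {})[y] = round(p / px[x], 12)
    label = {x: tuple(sorted(c.items())) for x, c in cond.items()}
    out = defaultdict(float)
    for (x, y), p in pxy.items():
        out[(label[x], y)] += p
    return dict(out)

i_full = mutual_information(pxy)
i_trim = mutual_information(trim(pxy))
print(i_full, i_trim)   # identical up to floating-point rounding
```

Merging 'a' and 'b' shrinks X from three states to two while leaving I(X;Y) unchanged, which is the "information trimming" idea in miniature.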
Sufficient conditions for uniqueness of the weak value
NASA Astrophysics Data System (ADS)
Dressel, J.; Jordan, A. N.
2012-01-01
We review and clarify the sufficient conditions for uniquely defining the generalized weak value as the weak limit of a conditioned average using the contextual values formalism introduced in Dressel, Agarwal and Jordan (2010 Phys. Rev. Lett. 104 240401). We also respond to criticism of our work by Parrott (arXiv:1105.4188v1) concerning a proposed counter-example to the uniqueness of the definition of the generalized weak value. The counter-example does not satisfy our prescription in the case of an underspecified measurement context. We show that when the contextual values formalism is properly applied to this example, a natural interpretation of the measurement emerges and the unique definition in the weak limit holds. We also prove a theorem regarding the uniqueness of the definition under our sufficient conditions for the general case. Finally, a second proposed counter-example by Parrott (arXiv:1105.4188v6) is shown not to satisfy the sufficiency conditions for the provided theorem.
Big Data Analytics for a Smart Green Infrastructure Strategy
NASA Astrophysics Data System (ADS)
Barrile, Vincenzo; Bonfa, Stefano; Bilotta, Giuliana
2017-08-01
As is well known, "Big Data" is a term for data sets so large or complex that traditional data processing applications are not sufficient to process them. The term often refers to the use of predictive analytics, user behavior analytics, or other advanced data analytics methods that extract value from data, and rarely to a particular size of data set. This is especially true for the huge amount of Earth Observation data that the satellites constantly orbiting the Earth transmit daily.
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
The large sample size fallacy.
Lantz, Björn
2013-06-01
Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
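The fallacy described above is easy to reproduce: hold a trivial effect size fixed and let the sample size grow. A minimal sketch with a two-sample z-test (all numbers illustrative):

```python
import math

def two_sample_p(mean1, mean2, sd, n):
    """Two-sided p-value for a two-sample z-test with equal n and known sd."""
    z = (mean1 - mean2) / (sd * math.sqrt(2.0 / n))
    return math.erfc(abs(z) / math.sqrt(2.0))

sd = 10.0
mean1, mean2 = 100.0, 100.5          # Cohen's d = 0.05: a trivial effect
p_by_n = {n: two_sample_p(mean1, mean2, sd, n) for n in (50, 500, 50_000)}

for n, p in p_by_n.items():
    print(f"n = {n:6d}  p = {p:.2e}")
```

The effect size never changes, yet the p-value slides from clearly nonsignificant at n = 50 to extreme significance at n = 50,000 — statistical significance without practical significance.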
Control Laws for a Dual-Spin Stabilized Platform
NASA Technical Reports Server (NTRS)
Lim, K. B.; Moerder, D. D.
2008-01-01
This paper describes two attitude control laws suitable for atmospheric flight vehicles with a steady angular momentum bias in the vehicle yaw axis. This bias is assumed to be provided by an internal flywheel, and is introduced to enhance roll and pitch stiffness. The first control law is based on Lyapunov stability theory, and stability proofs are given. The second control law, which assumes that the angular momentum bias is large, is based on a classical PID control. It is shown that the large yaw-axis bias requires that the PI feedback component on the roll and pitch angle errors be cross-fed. Both control laws are applied to a vehicle simulation in the presence of disturbances for several values of yaw-axis angular momentum bias. It is seen that both control laws provide a significant improvement in attitude performance when the bias is sufficiently large, but the nonlinear control law is also able to provide improved performance for a small value of bias. This is important because the smaller bias corresponds to a smaller requirement for mass to be dedicated to the flywheel.
Duval, Jérôme F L; Slaveykova, Vera I; Hosse, Monika; Buffle, Jacques; Wilkinson, Kevin J
2006-10-01
The electrostatic, hydrodynamic and conformational properties of aqueous solutions of succinoglycan have been analyzed by fluorescence correlation spectroscopy (FCS), proton titration, and capillary electrophoresis (CE) over a large range of pH values and electrolyte (NaCl) concentrations. Using the theoretical formalism developed previously for the electrokinetic properties of soft, permeable particles, a quantitative analysis for the electro-hydrodynamics of succinoglycan is performed by taking into account, in a self-consistent manner, the measured values of the diffusion coefficients, electric charge densities, and electrophoretic mobilities. For that purpose, two limiting conformations for the polysaccharide in solution are tested, i.e. succinoglycan behaves as (i) a spherical, random coil polymer or (ii) a rodlike particle with charged lateral chains. The results show that satisfactory modeling of the titration data for ionic strengths larger than 50 mM can be accomplished using both geometries over the entire range of pH values. Electrophoretic mobilities measured for sufficiently large pH values (pH > 5-6) are in line with predictions based on either model. The best manner to discriminate between these two conceptual models is briefly discussed. For low pH values (pH < 5), both models indicate aggregation, resulting in an increase of the hydrodynamic permeability and a decrease of the diffusion coefficient.
NASA Astrophysics Data System (ADS)
Gneiser, Martin; Heidemann, Julia; Klier, Mathias; Landherr, Andrea; Probst, Florian
Online social networks have been gaining increasing economic importance in light of the rising number of their users. Numerous recent acquisitions priced at enormous amounts have illustrated this development and revealed the need for adequate business valuation models. The value of an online social network is largely determined by the value of its users, the relationships between these users, and the resulting network effects. Therefore, the interconnectedness of a user within the network has to be considered explicitly to get a reasonable estimate of the economic value. Established standard business valuation models, however, do not sufficiently take these aspects into account. Thus, we propose a measure based on the PageRank algorithm to quantify users' interconnectedness in an online social network. This is a first but indispensable step towards an adequate economic valuation of online social networks.
Status of human chromosome aberrations as a biological radiation dosimeter in the nuclear industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, M.A.
1978-01-01
It seems that the determination of peripheral lymphocyte chromosome aberration levels is now firmly established as a means of biological dosimetry of great value in many phases of the nuclear industry. In the case of large external exposures it can provide valuable quantitative estimates, as well as information on dose distribution and radiation quality. In the case of routine occupational exposures the technique is more qualitative, but is of value particularly in resolving uncertainties as to whether suspected overexposures did in fact occur. Where workers accumulate burdens of internal emitters, aberration analysis provides a valuable, though at present quite qualitative, indicator. In spite of the expense of cytogenetic analyses, they are of sufficient value to justify much more widespread application, particularly in high-risk situations.
Hysteretic transitions in the Kuramoto model with inertia.
Olmi, Simona; Navas, Adrian; Boccaletti, Stefano; Torcini, Alessandro
2014-10-01
We report finite-size numerical investigations and mean-field analysis of a Kuramoto model with inertia for fully coupled and diluted systems. In particular, we examine, for a Gaussian distribution of the frequencies, the transition from incoherence to coherence for increasingly large system size and inertia. For sufficiently large inertia the transition is hysteretic, and within the hysteretic region clusters of locked oscillators of various sizes and different levels of synchronization coexist. A modification of the mean-field theory developed by Tanaka, Lichtenberg, and Oishi [Physica D 100, 279 (1997)] allows us to derive the synchronization curve associated with each of these clusters. We have also investigated numerically the limits of existence of the coherent and of the incoherent solutions. The minimal coupling required to observe the coherent state is largely independent of the system size, and it saturates to a constant value already for moderately large inertia values. The incoherent state is observable up to a critical coupling whose value saturates for large inertia and for finite system sizes, while in the thermodynamic limit this critical value diverges proportionally to the mass. By increasing the inertia the transition becomes more complex, and the synchronization occurs via the emergence of clusters of whirling oscillators. The presence of these groups of coherently drifting oscillators induces oscillations in the order parameter. We have shown that the transition remains hysteretic even for randomly diluted networks up to a level of connectivity corresponding to a few links per oscillator. Finally, an application to the Italian high-voltage power grid is reported, which reveals the emergence of quasiperiodic oscillations in the order parameter due to the simultaneous presence of many competing whirling clusters.
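A minimal, illustrative simulation of the mean-field Kuramoto model with inertia shows the contrast between weak and strong coupling. The parameters, the simple Euler integrator, and the small system size are arbitrary choices for the sketch, far below what the study uses:

```python
import math
import random

random.seed(1)

def simulate(K, m=2.0, n=50, dt=0.02, steps=8000):
    """Euler integration of the mean-field Kuramoto model with inertia:
    m*theta_i'' + theta_i' = omega_i + K*r*sin(psi - theta_i).
    Returns the order parameter r at the final step."""
    omegas = [random.gauss(0.0, 1.0) for _ in range(n)]
    theta = [random.uniform(-math.pi, math.pi) for _ in range(n)]
    vel = [0.0] * n
    r = 0.0
    for _ in range(steps):
        re = sum(math.cos(t) for t in theta) / n
        im = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(re, im), math.atan2(im, re)
        for i in range(n):
            acc = (omegas[i] - vel[i] + K * r * math.sin(psi - theta[i])) / m
            vel[i] += dt * acc
            theta[i] += dt * vel[i]
    return r

r_strong = simulate(K=10.0)   # well above the synchronization threshold
r_weak = simulate(K=0.5)      # well below it
print(f"r(K=10) = {r_strong:.2f}, r(K=0.5) = {r_weak:.2f}")
```

Strong coupling drives the order parameter r toward 1 (coherence), while weak coupling leaves it near the finite-size noise floor (incoherence); sweeping K up and down with inertia is what reveals the hysteresis discussed above.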
NASA Astrophysics Data System (ADS)
Priebe, Elizabeth H.; Neville, C. J.; Rudolph, D. L.
2018-03-01
The spatial coverage of hydraulic conductivity (K) values for large-scale groundwater investigations is often poor because of the high costs associated with hydraulic testing and the large areas under investigation. Domestic water wells are ubiquitous, and their well logs represent an untapped resource of information that includes mandatory specific-capacity tests, from which K can be estimated. These specific-capacity tests are routinely conducted at such low pumping rates that well losses are normally insignificant. In this study, a simple and practical approach to augmenting high-quality K values with reconnaissance-level K values from water-well specific-capacity tests is assessed. The integration of lesser-quality K values from specific-capacity tests with a high-quality K data set is assessed through comparisons at two different scales: study-area-wide (a 600-km2 area in Ontario, Canada) and in a single geological formation within a portion of the broader study area (200 km2). Results of the comparisons demonstrate that reconnaissance-level K estimates from specific-capacity tests approximate the ranges and distributions of the high-quality K values. Sufficient detail about the physical basis and assumptions invoked in the development of the approach is presented here so that it can be applied with confidence by practitioners seeking to enhance their spatial coverage of K values with specific-capacity tests.
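One common way to turn a specific-capacity test (pumping rate Q over stabilized drawdown s) into a K estimate is a Cooper-Jacob-type fixed-point solution for transmissivity; whether this matches the study's exact procedure is an assumption, and all test values below are illustrative:

```python
import math

def transmissivity_from_sc(q, s, t, r, S, T0=1e-4, iters=50):
    """Fixed-point solve of the Cooper-Jacob relation
    T = Q/(4*pi*s) * ln(2.25*T*t / (r^2*S)), all quantities in SI units."""
    T = T0
    for _ in range(iters):
        T = q / (4.0 * math.pi * s) * math.log(2.25 * T * t / (r * r * S))
    return T

# Illustrative (assumed) specific-capacity test values for a domestic well:
q = 1.0e-3    # pumping rate, m^3/s
s = 5.0       # stabilized drawdown, m
t = 3600.0    # test duration, s
r = 0.075     # well radius, m
S = 1.0e-4    # assumed storativity
b = 20.0      # assumed open-interval (aquifer) thickness, m

T = transmissivity_from_sc(q, s, t, r, S)
K = T / b     # reconnaissance-level hydraulic conductivity estimate
print(f"T = {T:.2e} m^2/s, K = {K:.2e} m/s")
```

Because T appears inside the logarithm, a few fixed-point iterations converge quickly; dividing by an assumed open-interval thickness b then yields the reconnaissance-level K value.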
Large, nonsaturating thermopower in a quantizing magnetic field
Fu, Liang
2018-01-01
The thermoelectric effect is the generation of an electrical voltage from a temperature gradient in a solid material due to the diffusion of free charge carriers from hot to cold. Identifying materials with a large thermoelectric response is crucial for the development of novel electric generators and coolers. We theoretically consider the thermopower of Dirac/Weyl semimetals subjected to a quantizing magnetic field. We contrast their thermoelectric properties with those of traditional heavily doped semiconductors and show that, under a sufficiently large magnetic field, the thermopower of Dirac/Weyl semimetals grows linearly with the field without saturation and can reach extremely high values. Our results suggest an immediate pathway for achieving record-high thermopower and thermoelectric figure of merit, and they compare well with a recent experiment on Pb1–xSnxSe. PMID:29806031
Estimation of breeding values using selected pedigree records.
Morton, Richard; Howarth, Jordan M
2005-06-01
Fish bred in tanks or ponds cannot be easily tagged individually. The parentage of any individual may be determined by DNA fingerprinting, but this is sufficiently expensive that large numbers cannot be fingerprinted. The measurement of the objective trait can be made on a much larger sample relatively cheaply. This article deals with experimental designs for selecting individuals to be fingerprinted and for the estimation of the individual and family breeding values. The general setup provides estimates both for genetic effects, regarded as fixed or random, and for fixed effects due to known regressors. The family effects can be well estimated even when very small numbers are fingerprinted, provided that they are the individuals with the most extreme phenotypes.
Concave utility, transaction costs, and risk in measuring discounting of delayed rewards.
Kirby, Kris N; Santiesteban, Mariana
2003-01-01
Research has consistently found that the decline in the present values of delayed rewards as delay increases is better fit by hyperbolic than by exponential delay-discounting functions. However, concave utility, transaction costs, and risk each could produce hyperbolic-looking data, even when the underlying discounting function is exponential. In Experiments 1 (N = 45) and 2 (N = 103), participants placed bids indicating their present values of real future monetary rewards in computer-based 2nd-price auctions. Both experiments suggest that utility is not sufficiently concave to account for the superior fit of hyperbolic functions. Experiment 2 provided no evidence that the effects of transaction costs and risk are large enough to account for the superior fit of hyperbolic functions.
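The model comparison at issue can be sketched by fitting both discounting forms, V = A/(1 + kD) (hyperbolic) and V = A·exp(-kD) (exponential), to present-value data. The synthetic data and the crude grid search below are illustrative, not the experiments' procedure:

```python
import math

# Synthetic present values generated by a hyperbolic discounter
# (illustrative data, not the article's):
A = 100.0                              # delayed reward amount
k_true = 0.02                          # hyperbolic discount rate, per day
delays = [1, 7, 30, 90, 180, 365]      # days
values = [A / (1.0 + k_true * d) for d in delays]

hyperbolic = lambda k, d: A / (1.0 + k * d)
exponential = lambda k, d: A * math.exp(-k * d)

def best_fit(model):
    """Crude grid search over k; returns (sse, k). A sketch, not a real optimizer."""
    def sse(k):
        return sum((v - model(k, d)) ** 2 for v, d in zip(values, delays))
    k_best = min((i * 1e-4 for i in range(1, 2000)), key=sse)
    return sse(k_best), k_best

sse_h, k_h = best_fit(hyperbolic)
sse_e, k_e = best_fit(exponential)
print(f"hyperbolic SSE = {sse_h:.4f}, exponential SSE = {sse_e:.1f}")
```

No single exponential rate can match data produced by a hyperbolic discounter, so its best-fit error stays large; the article's point is that concavity, transaction costs, and risk are too small to explain away this kind of gap.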
Limits of Kirchhoff's Laws in Plasmonics.
Razinskas, Gary; Biagioni, Paolo; Hecht, Bert
2018-01-30
The validity of Kirchhoff's laws in plasmonic nanocircuitry is investigated by studying a junction of plasmonic two-wire transmission lines. We find that Kirchhoff's laws are valid for sufficiently small values of a phenomenological parameter κ relating the geometrical parameters of the transmission line to the effective wavelength of the guided mode. Beyond this regime, for large values of the phenomenological parameter, increasing deviations occur and the equivalent impedance description (Kirchhoff's laws) can only provide rough, but nevertheless useful, guidelines for the design of more complex plasmonic circuitry. As an example we investigate a system composed of a two-wire transmission line with a nanoantenna as the load. By adding a parallel stub designed according to Kirchhoff's laws we achieve maximum signal transfer to the nanoantenna.
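The equivalent-circuit design step described — a parallel stub sized by Kirchhoff's laws to improve transfer to the load — can be sketched with the standard lossless transmission-line formulas. The impedance values are illustrative assumptions, not the paper's:

```python
import math

def zin(z0, zl, beta_l):
    """Input impedance of a lossless line of electrical length beta_l (radians)
    terminated by impedance zl (standard transmission-line formula)."""
    t = math.tan(beta_l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

z0 = 100.0            # characteristic impedance of the two-wire line (assumed)
zl = 73.0 + 42.5j     # nanoantenna load impedance (assumed)

# Short-circuited parallel stub sized to cancel the load susceptance,
# exactly as an equivalent-circuit (Kirchhoff) description prescribes:
b_load = (1.0 / zl).imag
beta_l = math.atan(1.0 / (z0 * b_load)) % math.pi   # stub electrical length
y_total = 1.0 / zl + 1.0 / zin(z0, 0.0, beta_l)     # stub terminated in a short

print(f"combined input impedance = {(1.0 / y_total).real:.1f} ohm, "
      f"residual reactance ~ {(1.0 / y_total).imag:.1e} ohm")
```

The stub's susceptance exactly cancels the load's, leaving a purely resistive input close to the line impedance; in the plasmonic case this lumped-element reasoning holds only while κ stays small.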
Stability analysis of gyroscopic systems with delay via decomposition
NASA Astrophysics Data System (ADS)
Aleksandrov, A. Yu.; Zhabko, A. P.; Chen, Y.
2018-05-01
A mechanical system described by second-order linear differential equations, with a positive parameter multiplying the velocity forces and with time delay in the positional forces, is studied. Using the decomposition method and Lyapunov-Krasovskii functionals, conditions are obtained under which the asymptotic stability of two auxiliary first-order subsystems implies that, for sufficiently large values of the parameter, the original system is also asymptotically stable. Moreover, it is shown that the proposed approach can be applied to the stability investigation of linear gyroscopic systems with switched positional forces.
NASA Astrophysics Data System (ADS)
Kayumov, R. A.; Muhamedova, I. Z.; Tazyukov, B. F.; Shakirzjanov, F. R.
2018-03-01
In this paper, based on the analysis of experimental data, hereditary models of deformation are studied and selected for reinforced polymeric composite materials such as organoplastic, carbon-fiber plastic, and the matrix of a film-fabric composite. On the basis of an analysis of a series of experiments it has been established that organoplastic samples behave like viscoelastic bodies. It is shown that, for sufficiently large load levels, the behavior of the material in question should be described by the relations of the nonlinear theory of heredity: an attempt to describe the deformation process by means of linear hereditary relations leads to large discrepancies between the experimental and calculated deformation values. The use of the theory of accumulation of micro-damage leads to a much better description of the experimental results. With the help of the hierarchical approach, a good approximation of the experimental values was achieved only in the first three loading sections.
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
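The minimum-Sobolev-norm construction above is far more general, but the basic task of recovering a function from values plus derivative data can be sketched with a piecewise-cubic Hermite interpolant (a toy illustration, with sin as a hypothetical target function):

```python
import numpy as np

def hermite_interp(xs, ys, dys, t):
    """Piecewise-cubic Hermite interpolant matching the values ys and first
    derivatives dys prescribed at the sorted nodes xs; evaluated at scalar t."""
    i = int(np.clip(np.searchsorted(xs, t) - 1, 0, len(xs) - 2))
    h = xs[i + 1] - xs[i]
    s = (t - xs[i]) / h
    h00 = (1 + 2 * s) * (1 - s) ** 2      # standard cubic Hermite basis
    h10 = s * (1 - s) ** 2
    h01 = s ** 2 * (3 - 2 * s)
    h11 = s ** 2 * (s - 1)
    return (h00 * ys[i] + h * h10 * dys[i]
            + h01 * ys[i + 1] + h * h11 * dys[i + 1])

# Scattered nodes; the data are values and first derivatives of sin(x)
xs = np.array([0.0, 0.4, 1.1, 1.7, 2.6, 3.2, 4.0, 4.8, 5.5, 6.2])
ys, dys = np.sin(xs), np.cos(xs)
err = max(abs(hermite_interp(xs, ys, dys, t) - np.sin(t))
          for t in np.linspace(0.0, 6.2, 200))
```

As in the abstract, supplying derivative data tightens the fit between the sampling points; here the maximum error stays well below the node spacing to the fourth power over 384 times the fourth-derivative bound.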
Eckermann, Simon; Karnon, Jon; Willan, Andrew R
2010-01-01
Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that the threshold EVPI needs to exceed varies with the cost of the research design, which can range from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate: it can be large while further research is not worthwhile, or small while further research is worthwhile. In contrast, each of questions 1-4 is shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize the expected value of sample information (EVSI) minus expected costs across designs. In comparing the complexity of VOI methods, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions.
More complex VOI methods such as bootstrapping of the expected value of partial EVPI may have potential value in refining overall research design. However, Occam's razor must be seriously considered in application of these VOI methods, given their increased complexity and current limitations in informing decision making, with restriction to EVPI rather than EVSI and not allowing for important decision-making contexts. Initial use of CLT methods to focus these more complex partial VOI methods towards where they may be useful in refining optimal overall trial design is suggested. Integrating CLT methods with such partial VOI methods to allow estimation of partial EVSI is suggested in future research to add value to the current VOI toolkit.
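As a minimal numerical sketch of the quantities discussed above, EVPI with current information can be estimated by Monte Carlo for a two-option decision whose incremental net monetary benefit has a normal prior (all parameter values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Prior on the incremental net monetary benefit (theta) of a new technology
# versus standard care; mu and sigma are purely illustrative.
mu, sigma = 500.0, 2000.0
theta = rng.normal(mu, sigma, 1_000_000)

# Decision with current information: adopt iff the expected net benefit > 0
enb_current = max(theta.mean(), 0.0)

# With perfect information, the better option is chosen in every realization
enb_perfect = np.maximum(theta, 0.0).mean()

evpi = enb_perfect - enb_current  # expected value of perfect information
```

As the abstract argues, this number alone does not settle whether further research is worthwhile; that requires comparing the EVSI of a concrete design against its expected cost.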
NASA Astrophysics Data System (ADS)
Bará, Salvador
2018-01-01
A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag_V arcsec^-2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
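The Nyquist-Shannon argument above can be illustrated in one dimension: a band-limited brightness profile sampled at the Nyquist distance is recoverable between the samples by Whittaker-Shannon (sinc) interpolation. A sketch with a hypothetical two-component profile:

```python
import numpy as np

# Band-limited "brightness profile" with highest spatial frequency below
# f_max = 0.5 cycles/km, so the Nyquist sampling distance is 1/(2*f_max) = 1 km.
def profile(x):
    return 1.2 * np.sin(2 * np.pi * 0.20 * x) + 0.5 * np.cos(2 * np.pi * 0.35 * x)

dx = 1.0                        # one sample per km, as in the rule of thumb
n = np.arange(-200, 201)
samples = profile(n * dx)

def sinc_reconstruct(x):
    """Whittaker-Shannon interpolation from the discrete samples."""
    return np.sum(samples * np.sinc((x - n * dx) / dx))

# Evaluate away from the truncation edges of the (finite) sample grid
err = max(abs(sinc_reconstruct(x) - profile(x))
          for x in np.linspace(-50.0, 50.0, 101))
```

With the finite sample grid, the residual error comes from truncating the sinc series; exact reconstruction would need the full (infinite) set of Nyquist-rate samples, echoing the abstract's closing caveat.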
[Dual process in large number estimation under uncertainty].
Matsumuro, Miki; Miwa, Kazuhisa; Terai, Hitoshi; Yamada, Kento
2016-08-01
According to dual process theory, there are two systems in the mind: an intuitive and automatic System 1 and a logical and effortful System 2. While many previous studies about number estimation have focused on simple heuristics and automatic processes, the deliberative System 2 process has not been sufficiently studied. This study focused on the System 2 process for large number estimation. First, we described an estimation process based on participants’ verbal reports. The task, corresponding to the problem-solving process, consisted of creating subgoals, retrieving values, and applying operations. Second, we investigated the influence of such deliberative process by System 2 on intuitive estimation by System 1, using anchoring effects. The results of the experiment showed that the System 2 process could mitigate anchoring effects.
Jukić, Tomislav; Zimmermann, Michael Bruce; Granić, Roko; Prpić, Marin; Krilić, Drazena; Juresa, Vesna; Katalenić, Marijan; Kusić, Zvonko
2015-12-01
Current methods for assessing iodine intake in a population comprise measurements of urinary iodine concentration (UIC), thyroid volume by ultrasound (US-Tvol), and newborn TSH. Serum or dried blood spot thyroglobulin (DBS-Tg) is a promising new functional biomarker of iodine status in children. In 1996, a new act on universal salt iodination was introduced in Croatia, with 25 mg of potassium iodide per kg of salt. In 2002, Croatia finally reached iodine sufficiency. However, in 2009, the median UIC in 101 schoolchildren from Zagreb, the capital of Croatia, was 288 µg/L, suggesting excessive intake. The aim of the study was to assess iodine intake in schoolchildren from the Zagreb area and to evaluate DBS-Tg in schoolchildren as a new functional biomarker of iodine deficiency (and iodine excess). The study was part of a large international study in 6- to 12-year-old children supported by UNICEF, the Swiss Federal Institute of Technology (ETH Zurich) and the International Council for the Control of Iodine Deficiency Disorders (ICCIDD). According to the international study results, a median Tg < 13 µg/L and/or < 3% of Tg values > 40 µg/L indicates iodine sufficiency. The study included 159 schoolchildren (median age 9.1 ± 1.4 years) from Zagreb and the nearby small town of Jastrebarsko, with measurements of UIC, US-Tvol, DBS-Tg, T4, TSH and the iodine content of salt from the schoolchildren's households (KI/kg of salt). The overall median UIC was 205 µg/L (range 1-505 µg/L). Thyroid volumes in schoolchildren measured by US were within the normal range according to reference values. Median DBS-Tg in schoolchildren was 12.1 µg/L, with 3% of Tg values > 40 µg/L. High Tg values occurred in the UIC ranges < 50 µg/L and > 300 µg/L (a U-shaped curve of Tg plotted against UIC). All children were euthyroid, with a geometric mean TSH of 0.7 ± 0.3 mU/L and an arithmetic mean T4 of 62 ± 12.5 nmol/L. The mean KI content per kg of salt was 24.9 ± 3.1 mg/kg (range 19-36 mg/kg).
Study results indicated iodine sufficiency in schoolchildren from the Zagreb area. Thyroglobulin proved to be a sensitive indicator of both iodine deficiency and iodine excess in children. Iodine content in salt from households of schoolchildren was in good compliance with the Croatian act (20-30 mg KI/kg of salt).
New Insights into Handling Missing Values in Environmental Epidemiological Studies
Roda, Célina; Nicolis, Ioannis; Momas, Isabelle; Guihenneuc, Chantal
2014-01-01
Missing data are unavoidable in environmental epidemiologic surveys. The aim of this study was to compare methods for handling large amounts of missing values: omission of missing values, single and multiple imputation (through linear regression or partial least squares regression), and a fully Bayesian approach. These methods were applied to the PARIS birth cohort, in which indoor domestic pollutant measurements were performed in a random sample of babies' dwellings. A simulation study was conducted to assess the performance of the different approaches with a high proportion of missing values (from 50% to 95%). Different simulation scenarios were carried out, controlling the true value of the association (odds ratio of 1.0, 1.2, and 1.4) and varying the health outcome prevalence. When a large amount of data is missing, omitting these missing data reduced statistical power and inflated standard errors, which affected the significance of the association. Single imputation underestimated the variability, and considerably increased the risk of type I error. All approaches were conservative, except the Bayesian joint model. In the case of a common health outcome, the fully Bayesian approach is the most efficient approach (low root mean square error, reasonable type I error, and high statistical power). Nevertheless, for a less prevalent event, the type I error is increased and the statistical power is reduced. The estimated posterior distribution of the OR is useful to refine the conclusion. Among the methods for handling missing values, no approach is absolutely the best, but when the usual approaches (e.g. single imputation) are not sufficient, jointly modelling the missingness process and the health association is more efficient when large amounts of data are missing. PMID:25226278
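A minimal simulation in the spirit of the comparison above (hypothetical data, not the PARIS cohort): under missingness completely at random (MCAR), complete-case analysis remains unbiased but discards most of the sample, while single mean imputation understates the variability of the imputed variable:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
exposure = rng.normal(0.0, 1.0, n)
outcome = 1.0 + 0.5 * exposure + rng.normal(0.0, 1.0, n)  # true slope 0.5

# 70% of the exposure measurements are missing completely at random
miss = rng.random(n) < 0.7

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

# Complete-case analysis: unbiased under MCAR, but only ~30% of the data remain,
# so standard errors inflate
cc_slope = slope(exposure[~miss], outcome[~miss])

# Single (mean) imputation: the filled-in variable's variance is badly understated,
# which is what drives overconfident inference and inflated type I error
x_imputed = np.where(miss, exposure[~miss].mean(), exposure)
```

The abstract's joint Bayesian model goes further by propagating the imputation uncertainty into the health-association estimate rather than fixing a single filled-in value.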
Absolute instability of the Gaussian wake profile
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.; Aggarwal, Arun K.
1987-01-01
Linear parallel-flow stability theory has been used to investigate the effect of viscosity on the local absolute instability of a family of wake profiles with a Gaussian velocity distribution. The type of local instability, i.e., convective or absolute, is determined by the location of a branch-point singularity with zero group velocity of the complex dispersion relation for the instability waves. The effects of viscosity were found to be weak for values of the wake Reynolds number, based on the center-line velocity defect and the wake half-width, larger than about 400. Absolute instability occurs only for sufficiently large values of the center-line wake defect. The critical value of this parameter increases with decreasing wake Reynolds number, thereby indicating a shrinking region of absolute instability with decreasing wake Reynolds number. If backflow is not allowed, absolute instability does not occur for wake Reynolds numbers smaller than about 38.
Radiative PQ breaking and the Higgs boson mass
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio
2015-06-01
The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ~5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.
Flow over a membrane-covered, fluid-filled cavity.
Thomson, Scott L; Mongeau, Luc; Frankel, Steven H
2007-01-01
The flow-induced response of a membrane covering a fluid-filled cavity located in a section of a rigid-walled channel was explored using finite element analysis. The membrane was initially aligned with the channel wall and separated the channel fluid from the cavity fluid. As fluid flowed over the membrane-covered cavity, a streamwise-dependent transmural pressure gradient caused membrane deformation. This model has application to synthetic models of the vocal fold cover layer used in voice production research. In this paper, the model is introduced and responses of the channel flow, the membrane, and the cavity flow are summarized for a range of flow and membrane parameters. It is shown that for high values of cavity fluid viscosity, the intracavity pressure and the beam deflection both reached steady values. For combinations of low cavity viscosity and sufficiently large upstream pressures, large-amplitude membrane vibrations resulted. Asymmetric conditions were introduced by creating cavities on opposing sides of the channel and assigning different stiffness values to the two membranes. The asymmetry resulted in reduction in or cessation of vibration amplitude, depending on the degree of asymmetry, and in significant skewing of the downstream flow field.
Scale Effects on Magnet Systems of Heliotron-Type Reactors
NASA Astrophysics Data System (ADS)
Imagawa, S.; Sagara, A.
2005-02-01
For power plants, heliotron-type reactors have attractive advantages, such as no current disruptions, no current drive, and wide space between the helical coils for the maintenance of in-vessel components. One disadvantage, however, is that the major radius has to be large enough to obtain a large Q-value or to provide sufficient space for blankets. Although a larger radius is considered to increase the construction cost, its influence has not yet been clearly understood. Scale effects on superconducting magnet systems have been estimated under the conditions of a constant energy confinement time and similar geometrical parameters. Since the necessary magnetic field becomes lower with a larger radius, the weight of the coil support increases with the major radius at a rate less than the square root. The necessary major radius will be determined mainly by the blanket space. The appropriate major radius will be around 13 m for a reactor similar to the Large Helical Device (LHD).
Collisionless microtearing modes in hot tokamaks: Effect of trapped electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swamy, Aditya K.; Ganesh, R., E-mail: ganesh@ipr.res.in; Brunner, S.
2015-07-15
Collisionless microtearing modes have recently been found linearly unstable in sharp temperature gradient regions of large aspect ratio tokamaks. The magnetic drift resonance of passing electrons has been found to be sufficient to destabilise these modes above a threshold plasma β. A global gyrokinetic study, including both passing electrons as well as trapped electrons, shows that the non-adiabatic contribution of the trapped electrons provides a resonant destabilization, especially at large toroidal mode numbers, for a given aspect ratio. The global 2D mode structures show important changes to the destabilising electrostatic potential. The β threshold for the onset of the instability is found to be generally downshifted by the inclusion of trapped electrons. A scan in the aspect ratio of the tokamak configuration, from medium to large but finite values, clearly indicates a significant destabilizing contribution from trapped electrons at small aspect ratio, with a diminishing role at larger aspect ratios.
Robust calibration of an optical-lattice depth based on a phase shift
NASA Astrophysics Data System (ADS)
Cabrera-Gutiérrez, C.; Michon, E.; Brunaud, V.; Kawalec, T.; Fortun, A.; Arnal, M.; Billy, J.; Guéry-Odelin, D.
2018-04-01
We report on a method to calibrate the depth of an optical lattice. It consists of triggering the intrasite dipole mode of the cloud by a sudden phase shift. The corresponding oscillatory motion is directly related to the interband frequencies over a large range of lattice depths. Remarkably, for a moderate displacement, a single frequency dominates the oscillation of the zeroth and first orders of the interference pattern observed after a sufficiently long time of flight. The method is robust against atom-atom interactions and against the exact value of the extra weak external confinement superimposed on the optical lattice.
Scale-invariance underlying the logistic equation and its social applications
NASA Astrophysics Data System (ADS)
Hernando, A.; Plastino, A.
2013-01-01
On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.
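A minimal numerical sketch of the LE discussed above: integrating dN/dt = rN(1 - N/K), checking agreement with the closed-form solution, and verifying the scale invariance N → λN, K → λK (parameter values hypothetical):

```python
import numpy as np

def logistic_euler(n0, r, k, t_end=20.0, dt=1e-3):
    """Forward-Euler integration of the logistic equation dN/dt = r*N*(1 - N/K)."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += dt * r * n * (1 - n / k)
    return n

def logistic_exact(n0, r, k, t):
    """Closed-form solution of the logistic equation."""
    return k / (1 + (k / n0 - 1) * np.exp(-r * t))

n_num = logistic_euler(n0=10.0, r=0.8, k=1000.0)
n_ref = logistic_exact(10.0, 0.8, 1000.0, 20.0)

# Scale invariance: multiplying N0 and K by the same factor rescales the whole
# orbit by that factor, leaving the dynamics otherwise unchanged
n_scaled = logistic_euler(n0=20.0, r=0.8, k=2000.0)
```

The rescaled run tracks exactly twice the original trajectory, which is the invariance the abstract identifies as one of the two conditions characterizing the LE.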
COBE DMR-normalized open inflation cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.
1995-01-01
A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone do not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Omega_0 is approximately 0.3-0.4, and merits further study.
Piezoelectric coefficients of bulk 3R transition metal dichalcogenides
NASA Astrophysics Data System (ADS)
Konabe, Satoru; Yamamoto, Takahiro
2017-09-01
The piezoelectric properties of bulk transition metal dichalcogenides (TMDCs) with a 3R structure were investigated using first-principles calculations based on density functional theory combined with the Berry phase treatment. Values for the elastic constants C_ijkl and the piezoelectric coefficients e_ijk and d_ijk are given for bulk 3R-TMDCs (MoS2, MoSe2, WS2, and WSe2). The piezoelectric coefficients of bulk 3R-TMDCs are shown to be sufficiently large or comparable to those of conventional bulk piezoelectric materials such as α-quartz, wurtzite GaN, and wurtzite AlN.
The effect of viscoelasticity on the stability of a pulmonary airway liquid layer
NASA Astrophysics Data System (ADS)
Halpern, David; Fujioka, Hideki; Grotberg, James B.
2010-01-01
The lungs consist of a network of bifurcating airways that are lined with a thin liquid film. This film is a bilayer consisting of a mucus layer on top of a periciliary fluid layer. Mucus is a non-Newtonian fluid possessing viscoelastic characteristics. Surface tension induces flows within the layer, which may cause the lung's airways to close due to liquid plug formation if the liquid film is sufficiently thick. The stability of the liquid layer is also influenced by the viscoelastic nature of the liquid, which is modeled using the Oldroyd-B constitutive equation or as a Jeffreys fluid. To examine the role of mucus alone, a single layer of a viscoelastic fluid is considered. A system of nonlinear evolution equations is derived using lubrication theory for the film thickness and the film flow rate. A uniform film is initially perturbed and a normal mode analysis is carried out that shows that the growth rate g for a viscoelastic layer is larger than for a Newtonian fluid with the same viscosity. Closure occurs if the minimum core radius, Rmin(t), reaches zero within one breath. Solutions of the nonlinear evolution equations reveal that Rmin normally decreases to zero faster with increasing relaxation time parameter, the Weissenberg number We. For small values of the dimensionless film thickness parameter ɛ, the closure time, tc, increases slightly with We, while for moderate values of ɛ, ranging from 14% to 18% of the tube radius, tc decreases rapidly with We provided the solvent viscosity is sufficiently small. Viscoelasticity was found to have little effect for ɛ >0.18, indicating the strong influence of surface tension. The film thickness parameter ɛ and the Weissenberg number We also have a significant effect on the maximum shear stress on tube wall, max(τw), and thus, potentially, an impact on cell damage. Max(τw) increases with ɛ for fixed We, and it decreases with increasing We for small We provided the solvent viscosity parameter is sufficiently small. 
For large ɛ ≈0.2, there is no significant difference between the Newtonian flow case and the large We cases.
Multistationarity in mass action networks with applications to ERK activation.
Conradi, Carsten; Flockerzi, Dietrich
2012-07-01
Ordinary Differential Equations (ODEs) are an important tool in many areas of Quantitative Biology. For many ODE systems, multistationarity (i.e. the existence of at least two positive steady states) is a desired feature. In general, establishing multistationarity is a difficult task, as realistic biological models are large in terms of states and (unknown) parameters and in most cases poorly parameterized (because of noisy measurement data on few components, a very small number of data points and only a limited number of repetitions). For mass action networks, establishing multistationarity is hence equivalent to establishing the existence of at least two positive solutions of a large polynomial system with unknown coefficients. For mass action networks with certain structural properties, expressed in terms of the stoichiometric matrix and the reaction rate-exponent matrix, we present necessary and sufficient conditions for multistationarity that take the form of linear inequality systems. Solutions of these inequality systems define pairs of steady states and parameter values. We also present a sufficient condition to identify networks where the aforementioned conditions hold. To show the applicability of our results, we analyse an ODE system defined by the mass action network describing the extracellular signal-regulated kinase (ERK) cascade (i.e. ERK activation).
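The linear-inequality conditions of the paper are general; as a concrete toy instance of multistationarity in a mass action network, the classic Schlögl-type scheme A + 2X ⇌ 3X, X ⇌ B (with the concentrations of A and B held constant) yields a cubic rate law with two stable positive steady states for suitable rate constants (values below chosen purely for illustration):

```python
import numpy as np

# Schlögl-type mass action network: A + 2X -> 3X, 3X -> A + 2X, X -> B, B -> X.
# With [A] and [B] fixed, the rate law for x = [X] reduces to
#   dx/dt = k1*x**2 - k2*x**3 - k3*x + k4
# and these rate constants make the cubic factor as -(x - 1)(x - 2)(x - 4).
k1, k2, k3, k4 = 7.0, 1.0, 14.0, 8.0

def rate(x):
    return k1 * x**2 - k2 * x**3 - k3 * x + k4

roots = np.roots([-k2, k1, -k3, k4])          # steady states: x = 1, 2, 4
# A steady state is stable when d(rate)/dx = 2*k1*x - 3*k2*x**2 - k3 < 0 there
stable = [r.real for r in roots
          if abs(r.imag) < 1e-9 and 2 * k1 * r.real - 3 * k2 * r.real**2 - k3 < 0]

def integrate(x0, t_end=50.0, dt=1e-3):
    """Forward-Euler integration of the rate law from initial state x0."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * rate(x)
    return x
```

Trajectories started at x0 = 0.5 and x0 = 3.0 converge to the two distinct stable positive steady states x = 1 and x = 4, the defining signature of multistationarity.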
Stellar evolution of high mass based on the Ledoux criterion for convection
NASA Technical Reports Server (NTRS)
Stothers, R.; Chin, C.
1972-01-01
Theoretical evolutionary sequences of models for stars of 15 and 30 solar masses were computed from the zero-age main sequence to the end of core helium burning. During the earliest stages of core helium depletion, the envelope rapidly expands into the red-supergiant configuration. At 15 solar masses, a blue loop on the H-R diagram ensues if the initial metals abundance, initial helium abundance, or C-12 + alpha particle reaction rate is sufficiently large, or if the 3-alpha reaction rate is sufficiently small. These quantities affect the opacity at the base of the outer convection zone, the mass of the core, and the thermal properties of the core. The blue loop occurs abruptly and fully developed when the critical value of any of these quantities is exceeded, and the effective temperature range and the fraction of the core-helium-burning lifetime spent in the slow phase of the blue loop vary surprisingly little. At 30 solar masses, no blue loop occurs for any reasonable set of input parameters.
Multiple D3-Instantons and Mock Modular Forms II
NASA Astrophysics Data System (ADS)
Alexandrov, Sergei; Banerjee, Sibasish; Manschot, Jan; Pioline, Boris
2018-03-01
We analyze the modular properties of D3-brane instanton corrections to the hypermultiplet moduli space in type IIB string theory compactified on a Calabi-Yau threefold. In Part I, we found a necessary condition for the existence of an isometric action of S-duality on this moduli space: the generating function of DT invariants in the large volume attractor chamber must be a vector-valued mock modular form with specified modular properties. In this work, we prove that this condition is also sufficient at two-instanton order. This is achieved by producing a holomorphic action of SL(2,Z) on the twistor space which preserves the holomorphic contact structure. The key step is to cancel the anomalous modular variation of the Darboux coordinates by a local holomorphic contact transformation, which is generated by a suitable indefinite theta series. For this purpose we introduce a new family of theta series of signature (2, n - 2), find their modular completion, and conjecture sufficient conditions for their convergence, which may be of independent mathematical interest.
Scalar field dark matter with spontaneous symmetry breaking and the 3.5 keV line
NASA Astrophysics Data System (ADS)
Cosme, Catarina; Rosa, João G.; Bertolami, O.
2018-06-01
We show that the present dark matter abundance can be accounted for by an oscillating scalar field that acquires both mass and a non-zero expectation value from interactions with the Higgs field. The dark matter scalar field can be sufficiently heavy during inflation, due to a non-minimal coupling to gravity, so as to avoid the generation of large isocurvature modes in the CMB anisotropies spectrum. The field begins oscillating after reheating, behaving as radiation until the electroweak phase transition and afterwards as non-relativistic matter. The scalar field becomes unstable, although sufficiently long-lived to account for dark matter, due to mass mixing with the Higgs boson, decaying mainly into photon pairs for masses below the MeV scale. In particular, for a mass of ∼7 keV, which is effectively the only free parameter, the model predicts a dark matter lifetime compatible with the recent galactic and extragalactic observations of a 3.5 keV X-ray line.
Deu, Edgar; Yang, Zhimou; Wang, Flora; Klemba, Michael; Bogyo, Matthew
2010-01-01
Background: High throughput screening (HTS) is one of the primary tools used to identify novel enzyme inhibitors. However, its applicability is generally restricted to targets that can either be expressed recombinantly or purified in large quantities. Methodology and Principal Findings: Here, we describe a method that uses activity-based probes (ABPs) to identify substrates that are sufficiently selective to allow HTS in complex biological samples. Because ABPs label their target enzymes through the formation of a permanent covalent bond, we can correlate labeling of target enzymes in a complex mixture with inhibition of turnover of a substrate in that same mixture. Thus, substrate specificity can be determined, and substrates with sufficiently high selectivity for HTS can be identified. In this study, we demonstrate this method by using an ABP for dipeptidyl aminopeptidases to identify (Pro-Arg)2-Rhodamine as a specific substrate for DPAP1 in Plasmodium falciparum lysates and Cathepsin C in rat liver extracts. We then used this substrate to develop highly sensitive HTS assays (Z' > 0.8) that are suitable for use in screening large collections of small molecules (i.e. >300,000) for inhibitors of these proteases. Finally, we demonstrate that it is possible to use broad-spectrum ABPs to identify target-specific substrates. Conclusions: We believe that this approach will have value for many enzymatic systems where access to large amounts of active enzyme is problematic. PMID:20700487
Intergenerational Transmission of Work Values: A Meta-Analytic Review.
Cemalcilar, Zeynep; Secinti, Ekin; Sumer, Nebi
2018-05-09
Work values act as guiding principles for individuals' work-related behavior. Economic self-sufficiency is an important predictor of psychological well-being in adulthood. Longitudinal research has demonstrated work values to be an important predictor of economic behavior, and consequently of self-sufficiency. Socialization theories assign parents an important role in the socialization of their children to cultural values. Yet, the extant literature is limited in demonstrating the role families play in how youth develop agentic pathways and seek self-sufficiency in the transition to adulthood. This study presents a meta-analytic review investigating the intergenerational transmission of work values, which is frequently assessed in terms of parent-child value similarities. Thirty studies from 11 countries (N = 19,987; median child age = 18.15) were included in the analyses. The results revealed a significant effect of parents on their children's work values. Both mothers' and fathers' work values, and their parenting behavior, were significantly associated with their children's work values. Yet, the similarity of father-child work values decreased as child age increased. Our findings indicate a moderate effect, suggesting the influence of the general socio-cultural context, such as generational differences and peer influences, in addition to that of parents, on youth's value acquisition. Our systematic review also revealed that, despite its theoretical and practical importance, the social science literature is scarce in comprehensive and comparative empirical studies investigating parent-child work value similarity. We discuss the implications of our findings for the labor market and policy makers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peters, Thomas; Girichidis, Philipp; Gatto, Andrea
2015-11-10
The halo of the Milky Way contains a hot plasma with a surface brightness in soft X-rays of the order of 10^-12 erg cm^-2 s^-1 deg^-2. The origin of this gas is unclear, but so far numerical models of galactic star formation have failed to reproduce such a large surface brightness by several orders of magnitude. In this paper, we analyze simulations of the turbulent, magnetized, multi-phase interstellar medium including thermal feedback by supernova explosions as well as cosmic-ray feedback. We include a time-dependent chemical network, self-shielding by gas and dust, and self-gravity. Pure thermal feedback alone is sufficient to produce the observed surface brightness, although it is very sensitive to the supernova rate. Cosmic rays suppress this sensitivity and reduce the surface brightness because they drive cooler outflows. Self-gravity has by far the largest effect because it accumulates the diffuse gas in the disk in dense clumps and filaments, so that supernovae exploding in voids can eject a large amount of hot gas into the halo. This can boost the surface brightness by several orders of magnitude. Although our simulations do not reach a steady state, all simulations produce surface brightness values of the same order of magnitude as the observations, with the exact value depending sensitively on the simulation parameters. We conclude that star formation feedback alone is sufficient to explain the origin of the hot halo gas, but measurements of the surface brightness alone do not provide useful diagnostics for the study of galactic star formation.
Stable amplitude chimera states in a network of locally coupled Stuart-Landau oscillators
NASA Astrophysics Data System (ADS)
Premalatha, K.; Chandrasekar, V. K.; Senthilvelan, M.; Lakshmanan, M.
2018-03-01
We investigate the occurrence of collective dynamical states such as transient amplitude chimera, stable amplitude chimera, and imperfect breathing chimera states in a locally coupled network of Stuart-Landau oscillators. In an imperfect breathing chimera state, the synchronized group of oscillators exhibits oscillations with large amplitudes, while the desynchronized group of oscillators oscillates with small amplitudes, and this behavior of coexistence of synchronized and desynchronized oscillations fluctuates with time. Then, we analyze the stability of the amplitude chimera states under various circumstances, including variations in system parameters and coupling strength, and perturbations in the initial states of the oscillators. For an increase in the value of the system parameter, namely, the nonisochronicity parameter, the transient chimera state becomes a stable chimera state for a sufficiently large value of coupling strength. In addition, we also analyze the stability of these states by perturbing the initial states of the oscillators. We find that while a small perturbation allows one to perturb a large number of oscillators resulting in a stable amplitude chimera state, a large perturbation allows one to perturb a small number of oscillators to get a stable amplitude chimera state. We also find the stability of the transient and stable amplitude chimera states and traveling wave states for an appropriate number of oscillators using Floquet theory. In addition, we also find the stability of the incoherent oscillation death states.
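A minimal numerical sketch of a locally coupled Stuart-Landau ring of the kind studied above takes only a few lines. The specific equation form below (nonisochronicity entering through the cubic term, zero linear frequency) and all parameter values are illustrative assumptions, not the authors' exact model; inspecting |z_j| across the ring is how one would spot coexisting large- and small-amplitude groups.

```python
import numpy as np

def stuart_landau_ring(n=100, c=3.0, eps=0.1, dt=0.01, steps=5000, seed=0):
    """Euler-integrate a ring of nearest-neighbour-coupled Stuart-Landau
    oscillators (illustrative form):
        dz_j/dt = z_j - (1 + i*c)*|z_j|^2*z_j + eps*(z_{j-1} + z_{j+1} - 2*z_j)
    where c plays the role of a nonisochronicity-like parameter.
    Returns the final complex state; np.abs of it gives the amplitude profile.
    """
    rng = np.random.default_rng(seed)
    z = 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    for _ in range(steps):
        lap = np.roll(z, 1) + np.roll(z, -1) - 2 * z      # discrete ring Laplacian
        z = z + dt * (z - (1 + 1j * c) * np.abs(z) ** 2 * z + eps * lap)
    return z
```

Uncoupled (eps = 0) and isochronous (c = 0), every oscillator relaxes to the unit limit cycle, which is a convenient sanity check before switching the coupling on.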
Autonomy, religious values, and refusal of lifesaving medical treatment.
Wreen, M J
1991-09-01
The principal question of this paper is: Why are religious values special in refusal of lifesaving medical treatment? This question is approached through a critical examination of a common kind of refusal of treatment case, one involving a rational adult. The central value cited in defence of honouring such a patient's refusal is autonomy. Once autonomy is isolated from other justificatory factors, however, possible cases can be imagined which cast doubt on the great valuational weight assigned it by strong anti-paternalists. This weight is sufficient, in their estimation, to justify honouring the patient's refusal. There is thus a tension between the strong anti-paternalist's commitment to the sufficiency of autonomy and our intuitions respecting such cases. Attempts can be made to relieve this tension, such as arguing that patients aren't really rational in the circumstances envisaged, or that other values, such as privacy or bodily integrity, if added to autonomy, are sufficient to justify an anti-paternalistic stance. All such attempts fail, however. But what does not fail is the addition of religious freedom, freedom respecting a patient's religious beliefs and values. Why religious freedom reduces the tension is then explained, and the specialness of religious beliefs and values examined.
Sound synchronization of bubble trains in a viscous fluid: experiment and modeling.
Pereira, Felipe Augusto Cardoso; Baptista, Murilo da Silva; Sartorelli, José Carlos
2014-10-01
We investigate the dynamics of formation of air bubbles expelled from a nozzle immersed in a viscous fluid under the influence of sound waves. We have obtained bifurcation diagrams by measuring the time between successive bubbles, using the air flow (Q) as a control parameter for many values of the sound wave amplitude (A), the height (H) of the solution above the top of the nozzle, and three values of the sound frequency (fs). Our parameter spaces (Q, A) revealed a scenario for the onset of synchronization dominated by Arnold tongues (frequency locking), which gives way to chaotic phase synchronization for sufficiently large A. The experimental results were accurately reproduced by numerical simulations of a model combining a simple bubble growth model for the bubble train and a coupling term with the sound wave added to the equilibrium pressure.
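The Arnold-tongue structure seen in the (Q, A) parameter space is the generic signature of frequency locking, whose canonical minimal model is the sine-circle map (not the authors' bubble-growth model; this sketch only illustrates the locking phenomenon). Rational winding numbers that persist over open parameter intervals are the tongues.

```python
import math

def winding_number(omega, k, n_transient=500, n_iter=2000):
    """Winding number of the sine-circle map
        theta_{n+1} = theta_n + omega - (k / (2*pi)) * sin(2*pi*theta_n).
    Parameter regions where the winding number locks onto a rational value
    are the Arnold tongues (frequency locking)."""
    theta = 0.0
    for _ in range(n_transient):                 # discard transient
        theta = theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n_iter):
        theta = theta + omega - (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - start) / n_iter
```

For k = 0 the winding number equals omega; for k > 0 a whole interval of omega around 0 locks to winding number 0 (the 0:1 tongue), mirroring how finite forcing amplitude locks the bubble train to the sound frequency.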
Stability of Nonlinear Systems with Unknown Time-varying Feedback Delay
NASA Astrophysics Data System (ADS)
Chunodkar, Apurva A.; Akella, Maruthi R.
2013-12-01
This paper considers the problem of stabilizing a class of nonlinear systems with unknown bounded delayed feedback wherein the time-varying delay is (1) piecewise constant or (2) continuous with a bounded rate. We also consider the application of these results to the stabilization of rigid-body attitude dynamics. In the first case, the time-delay in the feedback is modeled as a switch among an arbitrarily large set of unknown constant values with a known strict upper bound. The feedback is a linear function of the delayed states. For linear systems with switched delay feedback, a new average-dwell-time sufficiency condition is presented using a complete-type Lyapunov-Krasovskii (L-K) functional approach. Further, the corresponding switched system with nonlinear perturbations is proven to be exponentially stable inside a well-characterized region of attraction for an appropriately chosen average dwell time. In the second case, the concept of the complete-type L-K functional is extended to a class of nonlinear time-delay systems with unknown time-varying delay. This extension ensures stability robustness to time-delay in the control design for all values of time-delay less than the known upper bound. Model transformation is used to partition the nonlinear system into a nominal linear part that is exponentially stable with a bounded perturbation. We obtain sufficient conditions which ensure exponential stability inside a region-of-attraction estimate. A constructive method to evaluate the sufficient conditions is presented, together with a comparison with the corresponding constant and piecewise-constant delay cases. Numerical simulations are performed to illustrate the theoretical results of this paper.
ASSESSING THE IMPORTANCE OF THERMAL REFUGE ...
Salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. The importance of cold water refuges for migrating adult salmon and steelhead may seem intuitive, and refuges are clearly used by fish during warm water episodes. But quantifying the value of both small- and large-scale thermal features to salmon populations has been challenging due to the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We discuss the challenges and opportunities in simulating fish behaviors and linking exposures to migratory and reproductive fitness. In this talk and companion poster, we describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in the Columbia River. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effects of warm waters include impacts to salmon and steelhead populations that may already be stressed by habitat alteration, disease, predation, and fishing pressures. Much effort is being expended to improve conditions for salmon and steelhead.
NASA Technical Reports Server (NTRS)
Burris, John; McGee, Thomas; Hoegy, Walt; Newman, Paul; Lait, Leslie; Twigg, Laurence; Sumnicht, Grant; Heaps, William; Hostetler, Chris; Neuber, Roland;
2001-01-01
NASA Goddard Space Flight Center's Airborne Raman Ozone, Temperature and Aerosol Lidar (AROTEL) measured extremely cold temperatures during all three deployments (December 1-16, 1999, January 14-29, 2000 and February 27-March 15, 2000) of the Sage III Ozone Loss and Validation Experiment (SOLVE). Temperatures were significantly below values observed in previous years with large regions regularly below 191 K and frequent temperature retrievals yielding values at or below 187 K. Temperatures well below the saturation point of type I polar stratospheric clouds (PSCs) were regularly encountered but their presence was not well correlated with PSCs observed by the NASA Langley Research Center's Aerosol Lidar co-located with AROTEL. Temperature measurements by meteorological sondes launched within areas traversed by the DC-8 showed minimum temperatures consistent in time and vertical extent with those derived from AROTEL data. Calculations to establish whether PSCs could exist at measured AROTEL temperatures and observed mixing ratios of nitric acid and water vapor showed large regions favorable to PSC formation. On several occasions measured AROTEL temperatures up to 10 K below the NAT saturation temperature were insufficient to produce PSCs even though measured values of nitric acid and water were sufficient for their formation.
NASA Astrophysics Data System (ADS)
Zheng, Gong-Ping; Li, Pin; Li, Ting; Xue, Ya-Jie
2018-02-01
Motivated by the recent experiments realized in a flat-bottomed optical trap (Navon et al., 2015; Chomaz et al., 2015), we study the ground state of the polar-core spin vortex of a quasi-2D ferromagnetic spin-1 condensate in a finite-size homogeneous trap with a weak magnetic field. The exact spatial distribution of the local spin is obtained with a variational method. Unlike the fully-magnetized planar spin texture with a zero-spin core, which was schematically demonstrated in previous studies for the ideal polar-core spin vortex in a homogeneous trap with an infinitely large boundary, some plateaus and a two-core structure emerge in the distribution curves of spin magnitude in the polar-core spin vortex we obtained for larger effective spin-dependent interaction. More importantly, the spin values of the plateaus are not 1 as expected in the fully-magnetized spin texture, except for sufficiently large spin-dependent interaction and the weak-magnetic-field limit. We attribute the decrease of the spin value to the finite size of the system. The spin values of the plateaus can be controlled by the quadratic Zeeman energy q of the weak magnetic field, and they decrease as q increases.
The multiple facets of Peto's paradox: a life-history model for the evolution of cancer suppression
Brown, Joel S.; Cunningham, Jessica J.; Gatenby, Robert A.
2015-01-01
Large animals should have higher lifetime probabilities of cancer than small animals because each cell division carries an attendant risk of mutating towards a tumour lineage. However, this is not observed—a (Peto's) paradox that suggests large and/or long-lived species have evolved effective cancer suppression mechanisms. Using the Euler–Lotka population model, we demonstrate the evolutionary value of cancer suppression as determined by the 'cost' (decreased fecundity) of suppression versus the 'cost' of cancer (reduced survivorship). Body size per se will not select for sufficient cancer suppression to explain the paradox. Rather, cancer suppression should be most extreme when the probability of non-cancer death decreases with age (e.g. alligators), maturation is delayed, fecundity rates are low and fecundity increases with age. Thus, the value of cancer suppression is predicted to be lowest in the vole (short lifespan, high fecundity) and highest in the naked mole rat (long lived with late female sexual maturity). The life history of pre-industrial humans likely selected for quite low levels of cancer suppression. In modern humans that live much longer, this level results in unusually high lifetime cancer risks. The model predicts a lifetime risk of 49% compared with the current empirical value of 43%.
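The Euler-Lotka equation at the heart of the model above can be solved numerically for the intrinsic growth rate r, which makes the fecundity-versus-survivorship trade-off concrete. Only the equation itself comes from the abstract; the life-history numbers below are made up for illustration.

```python
import math

def euler_lotka_r(l, m, lo=-2.0, hi=2.0, iters=200):
    """Bisect for r in the Euler-Lotka equation  sum_x e^{-r x} l_x m_x = 1.
    l[x-1]: survivorship to age x; m[x-1]: fecundity at age x.
    The left-hand side decreases monotonically in r, so bisection works."""
    def lhs(r):
        return sum(math.exp(-r * x) * lx * mx
                   for x, (lx, mx) in enumerate(zip(l, m), start=1))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if lhs(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative (made-up) trade-off: cancer suppression cuts fecundity by 10%
# but raises late-life survivorship; its evolutionary value is the change in r.
baseline = euler_lotka_r([0.9, 0.5, 0.2], [0.0, 1.2, 1.2])
suppressed = euler_lotka_r([0.9, 0.6, 0.4], [0.0, 1.08, 1.08])
```

In this toy example the suppressed phenotype has the higher growth rate, i.e. suppression would be favoured despite its fecundity cost.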
A mechanism for value-sensitive decision-making.
Pais, Darren; Hogan, Patrick M; Schlegel, Thomas; Franks, Nigel R; Leonard, Naomi E; Marshall, James A R
2013-01-01
We present a dynamical systems analysis of a decision-making mechanism inspired by collective choice in house-hunting honeybee swarms, revealing the crucial role of cross-inhibitory 'stop-signalling' in improving the decision-making capabilities. We show that strength of cross-inhibition is a decision-parameter influencing how decisions depend both on the difference in value and on the mean value of the alternatives; this is in contrast to many previous mechanistic models of decision-making, which are typically sensitive to decision accuracy rather than the value of the option chosen. The strength of cross-inhibition determines when deadlock over similarly valued alternatives is maintained or broken, as a function of the mean value; thus, changes in cross-inhibition strength allow adaptive time-dependent decision-making strategies. Cross-inhibition also tunes the minimum difference between alternatives required for reliable discrimination, in a manner similar to Weber's law of just-noticeable difference. Finally, cross-inhibition tunes the speed-accuracy trade-off realised when differences in the values of the alternatives are sufficiently large to matter. We propose that the model, and the significant role of the values of the alternatives, may describe other decision-making systems, including intracellular regulatory circuits, and simple neural circuits, and may provide guidance in the design of decision-making algorithms for artificial systems, particularly those functioning without centralised control.
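The cross-inhibition ("stop-signalling") mechanism can be sketched with the standard two-option mean-field ODEs of this literature. The specific rate parameterization below (discovery and recruitment proportional to option value, abandonment inversely proportional to it) is one common illustrative choice, not necessarily the exact system analysed in the paper.

```python
def stop_signal_model(v1, v2, sigma, t_end=200.0, dt=0.01):
    """Two-choice value-sensitive decision model: committed fractions y1, y2,
    uncommitted yu = 1 - y1 - y2, cross-inhibition strength sigma:
        dy_i/dt = v_i*yu - y_i/v_i + v_i*y_i*yu - sigma*y1*y2
    (illustrative parameterization). Returns (y1, y2) at t_end, integrated
    with explicit Euler steps."""
    y1 = y2 = 0.0
    for _ in range(int(t_end / dt)):
        yu = 1.0 - y1 - y2
        d1 = v1 * yu - y1 / v1 + v1 * y1 * yu - sigma * y1 * y2
        d2 = v2 * yu - y2 / v2 + v2 * y2 * yu - sigma * y1 * y2
        y1 += dt * d1
        y2 += dt * d2
    return y1, y2
```

With unequal values the higher-valued option wins; with exactly equal values the deterministic symmetric dynamics preserve the deadlock, which is what the stochastic stop-signal mechanism is there to break.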
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
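The Bayesian alternative described above can be sketched as a grid posterior over h² for the variance-components model y ~ N(0, s²(h²K + (1−h²)I)); profiling out the scale s² at its conditional MLE keeps the problem one-dimensional. This is an illustrative sketch under a flat prior, not the authors' efficient implementation.

```python
import numpy as np

def h2_posterior(y, K, grid=np.linspace(0.01, 0.99, 99)):
    """Grid posterior over narrow-sense heritability h2, assuming
    y ~ N(0, s2*(h2*K + (1-h2)*I)) with a flat prior on h2 and the total
    variance s2 profiled at its conditional MLE (illustrative sketch)."""
    n = len(y)
    vals, vecs = np.linalg.eigh(K)          # K = V diag(vals) V^T
    yt = vecs.T @ y                         # rotate once; covariance is diagonal there
    logpost = []
    for h2 in grid:
        d = h2 * vals + (1.0 - h2)          # eigenvalues of h2*K + (1-h2)*I
        s2 = np.mean(yt ** 2 / d)           # conditional MLE of the scale
        logpost.append(-0.5 * (np.sum(np.log(d)) + n * np.log(s2) + n))
    logpost = np.array(logpost)
    post = np.exp(logpost - logpost.max())
    return grid, post / post.sum()
```

For a well-conditioned kinship matrix and a large sample, the posterior mode lands near the frequentist maximum-likelihood estimate; the full posterior additionally quantifies the uncertainty directly.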
Spatiotemporal evolution in a (2+1)-dimensional chemotaxis model
NASA Astrophysics Data System (ADS)
Banerjee, Santo; Misra, Amar P.; Rondoni, L.
2012-01-01
Simulations are performed to investigate the nonlinear dynamics of a (2+1)-dimensional chemotaxis model of Keller-Segel (KS) type, with a logistic growth term. Because of its ability to display auto-aggregation, the KS model has been widely used to simulate self-organization in many biological systems. We show that the corresponding dynamics may lead to steady states, to divergences in finite time, as well as to the formation of spatiotemporal irregular patterns. The latter, in particular, appear to be chaotic in part of the range of bounded solutions, as demonstrated by the analysis of wavelet power spectra. Steady states are achieved with sufficiently large values of the chemotactic coefficient (χ) and/or with growth rates r below a critical value r_c. For r > r_c, the solutions of the differential equations of the model diverge in finite time. We also report on the pattern-formation regime, for different values of χ, r and of the diffusion coefficient D.
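A one-dimensional finite-difference sketch of a Keller-Segel model with logistic growth (the paper itself works in 2+1 dimensions) shows the ingredients: cell diffusion, chemotactic drift with coefficient χ, and logistic growth at rate r. The parameter values and the explicit Euler scheme are illustrative choices, not the authors' setup.

```python
import numpy as np

def keller_segel_1d(chi=2.0, r=0.5, D=0.2, n=128, L=20.0, dt=1e-3, steps=20000):
    """1D Keller-Segel sketch with logistic growth, periodic boundaries:
        u_t = D*u_xx - chi*(u*c_x)_x + r*u*(1 - u)    (cell density u)
        c_t = c_xx + u - c                            (chemoattractant c)
    Explicit Euler in time, central differences in space."""
    dx = L / n
    rng = np.random.default_rng(0)
    u = 1.0 + 0.01 * rng.standard_normal(n)   # perturbed homogeneous state
    c = np.ones(n)

    def lap(f):
        return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx ** 2

    def ddx(f):
        return (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

    for _ in range(steps):
        u, c = (u + dt * (D * lap(u) - chi * ddx(u * ddx(c)) + r * u * (1.0 - u)),
                c + dt * (lap(c) + u - c))
    return u, c
```

For χ above the linear-instability threshold of the homogeneous state, the small initial perturbation grows into a spatial pattern while the logistic term keeps the mean density bounded, consistent with the bounded-solution regime described above.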
Critical gravitational collapse with angular momentum. II. Soft equations of state
NASA Astrophysics Data System (ADS)
Gundlach, Carsten; Baumgarte, Thomas W.
2018-03-01
We study critical phenomena in the collapse of rotating ultrarelativistic perfect fluids, in which the pressure P is related to the total energy density ρ by P = κρ, where κ is a constant. We generalize earlier results for radiation fluids with κ = 1/3 to other values of κ, focusing on κ < 1/9. For 1/9 < κ ≲ 0.49, the critical solution has only one unstable, growing mode, which is spherically symmetric. For supercritical data it controls the black-hole mass, while for subcritical data it controls the maximum density. For κ < 1/9, an additional axial l = 1 mode becomes unstable. This controls either the black-hole angular momentum, or the maximum angular velocity. In theory, the additional unstable l = 1 mode changes the nature of the black-hole threshold completely: at sufficiently large initial rotation rates Ω and sufficient fine-tuning of the initial data to the black-hole threshold we expect to observe nontrivial universal scaling functions (familiar from critical phase transitions in thermodynamics) governing the black-hole mass and angular momentum, and, with further fine-tuning, eventually a finite black-hole mass almost everywhere on the threshold. In practice, however, the second unstable mode grows so slowly that we do not observe this breakdown of scaling at the level of fine-tuning we can achieve, nor systematic deviations from the leading-order power-law scalings of the black-hole mass. We do see systematic effects in the black-hole angular momentum, but it is not clear yet if these are due to the predicted nontrivial scaling functions, or to nonlinear effects at sufficiently large initial angular momentum (which we do not account for in our theoretical model).
VARIANCE ANISOTROPY IN KINETIC PLASMAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, Tulasi N.; Matthaeus, William H.; Oughton, Sean
Solar wind fluctuations admit well-documented anisotropies of the variance matrix, or polarization, related to the mean magnetic field direction. Typically, one finds a ratio of perpendicular variance to parallel variance of the order of 9:1 for the magnetic field. Here we study the question of whether a kinetic plasma spontaneously generates and sustains parallel variances when initiated with only perpendicular variance. We find that parallel variance grows and saturates at about 5% of the perpendicular variance in a few nonlinear times irrespective of the Reynolds number. For sufficiently large systems (Reynolds numbers) the variance approaches values consistent with the solar wind observations.
Cherenkov-like emission of Z bosons
NASA Astrophysics Data System (ADS)
Colladay, D.; Noordmans, J. P.; Potting, R.
2017-07-01
We study CPT and Lorentz violation in the electroweak gauge sector of the Standard Model in the context of the Standard-Model Extension (SME). In particular, we show that any non-zero value of a certain relevant Lorentz violation parameter that is thus far unbounded by experiment would imply that for sufficiently large energies one of the helicity modes of the Z boson should propagate with spacelike four-momentum and become stable against decay in vacuum. In this scenario, Cherenkov-like radiation of Z bosons by ultra-high-energy cosmic-ray protons becomes possible. We deduce a bound on the Lorentz violation parameter from the observational data on ultra-high energy cosmic rays.
High-performance etching of multilevel phase-type Fresnel zone plates with large apertures
NASA Astrophysics Data System (ADS)
Guo, Chengli; Zhang, Zhiyu; Xue, Donglin; Li, Longxiang; Wang, Ruoqiu; Zhou, Xiaoguang; Zhang, Feng; Zhang, Xuejun
2018-01-01
To ensure the etching depth uniformity of large-aperture Fresnel zone plates (FZPs) with controllable depths, a combination of a point source ion beam with a dwell-time algorithm has been proposed. According to the obtained distribution of the removal function, the latter can be used to optimize the etching time matrix by minimizing the root-mean-square error between the simulation results and the design value. Owing to the convolution operation in the utilized algorithm, the etching depth error is insensitive to the etching rate fluctuations of the ion beam, thereby reducing the requirement for the etching stability of the ion system. As a result, a 4-level FZP with a circular aperture of 300 mm was fabricated. The obtained results showed that the etching depth uniformity of the full aperture could be reduced to below 1%, which was sufficiently accurate for meeting the use requirements of FZPs. The proposed etching method may serve as an alternative way of etching high-precision diffractive optical elements with large apertures.
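The dwell-time step can be sketched as a non-negative least-squares deconvolution: the achieved depth is the convolution of the beam's removal function with the dwell-time map, and the dwell times are optimized to minimize the RMS error against the design depth. The 1D periodic toy below uses projected gradient descent; the real system is 2D and the paper's algorithmic details differ.

```python
import numpy as np

def solve_dwell_times(target, removal, iters=2000):
    """Least-squares dwell-time map for ion-beam etching (1D periodic toy).
    Achieved depth = circular convolution of the removal function (depth per
    unit dwell time) with the dwell-time distribution; minimize
    ||conv(removal, t) - target||^2 by projected gradient descent while
    keeping dwell times non-negative. `target` and `removal` share one grid."""
    R = np.fft.fft(removal)
    step = 1.0 / np.max(np.abs(R)) ** 2        # gradient Lipschitz bound
    t = np.zeros(len(target))
    for _ in range(iters):
        resid = np.real(np.fft.ifft(R * np.fft.fft(t))) - target
        grad = np.real(np.fft.ifft(np.conj(R) * np.fft.fft(resid)))
        t = np.maximum(0.0, t - step * grad)   # project onto t >= 0
    return t
```

Because the objective is evaluated through the convolution, a smooth error in the beam's removal rate largely cancels in the optimized depth map, which mirrors the insensitivity to etching-rate fluctuations noted above.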
Synchronization stability of memristor-based complex-valued neural networks with time delays.
Liu, Dan; Zhu, Song; Ye, Er
2017-12-01
This paper focuses on the dynamical property of a class of memristor-based complex-valued neural networks (MCVNNs) with time delays. By constructing the appropriate Lyapunov functional and utilizing the inequality technique, sufficient conditions are proposed to guarantee exponential synchronization of the coupled systems based on drive-response concept. The proposed results are very easy to verify, and they also extend some previous related works on memristor-based real-valued neural networks. Meanwhile, the obtained sufficient conditions of this paper may be conducive to qualitative analysis of some complex-valued nonlinear delayed systems. A numerical example is given to demonstrate the effectiveness of our theoretical results.
Pulsating Hydrodynamic Instability in a Dynamic Model of Liquid-Propellant Combustion
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
1999-01-01
Hydrodynamic (Landau) instability in combustion is typically associated with the onset of wrinkling of a flame surface, corresponding to the formation of steady cellular structures as the stability threshold is crossed. In the context of liquid-propellant combustion, such instability has recently been shown to occur for critical values of the pressure sensitivity of the burning rate and the disturbance wavenumber, significantly generalizing previous classical results for this problem that assumed a constant normal burning rate. Additionally, however, a pulsating form of hydrodynamic instability has been shown to occur as well, corresponding to the onset of temporal oscillations in the location of the liquid/gas interface. In the present work, we consider the realistic influence of a nonzero temperature sensitivity in the local burning rate on both types of stability thresholds. It is found that for sufficiently small values of this parameter, there exists a stable range of pressure sensitivities for steady, planar burning such that the classical cellular form of hydrodynamic instability and the more recent pulsating form of hydrodynamic instability can each occur as the corresponding stability threshold is crossed. For larger thermal sensitivities, however, the pulsating stability boundary evolves into a C-shaped curve in the disturbance-wavenumber/pressure-sensitivity plane, indicating loss of stability to pulsating perturbations for all sufficiently large disturbance wavelengths. It is thus concluded, based on characteristic parameter values, that an equally likely form of hydrodynamic instability in liquid-propellant combustion is of a nonsteady, long-wave nature, distinct from the steady, cellular form originally predicted by Landau.
A numerical study of Coulomb interaction effects on 2D hopping transport.
Kinkhabwala, Yusuf A; Sverdlov, Viktor A; Likharev, Konstantin K
2006-02-15
We have extended our supercomputer-enabled Monte Carlo simulations of hopping transport in completely disordered 2D conductors to the case of substantial electron-electron Coulomb interaction. Such interaction may not only suppress the average value of the hopping current, but also affect its fluctuations rather substantially. In particular, the spectral density S_I(f) of current fluctuations exhibits, at sufficiently low frequencies, a 1/f-like increase which approximately follows the Hooge scaling, even at vanishing temperature. At higher f, there is a crossover to a broad range of frequencies in which S_I(f) is nearly constant, hence allowing characterization of the current noise by the effective Fano factor F. For sufficiently large conductor samples and low temperatures, the Fano factor is suppressed below the Schottky value (F = 1), scaling with the length L of the conductor as F = (L_c/L)^α. The exponent α is significantly affected by the Coulomb interaction effects, changing from α = 0.76 ± 0.08 when such effects are negligible to virtually unity when they are substantial. The scaling parameter L_c, interpreted as the average percolation cluster length along the electric field direction, scales as [Formula: see text] when Coulomb interaction effects are negligible and [Formula: see text] when such effects are substantial, in good agreement with estimates based on the theory of directed percolation.
Feldberg, Stephen W
2010-06-15
For an outer-sphere heterogeneous electron transfer, Ox + e = Red, between an electrode and a redox couple, the Butler-Volmer formalism predicts that the operative heterogeneous rate constant k_red (cm s^-1) for reduction (or k_ox for oxidation) increases without limit as an exponential function of -α(E - E0) for reduction (or (1 - α)(E - E0) for oxidation), where E is the applied electrode potential, α (≈1/2) is the transfer coefficient and E0 is the formal potential. The Marcus-Hush formalism, as exposited by Chidsey (Chidsey, C. E. D. Science 1991, 215, 919), predicts that the value of k_red or k_ox limits at sufficiently large values of -(E - E0) or (E - E0). The steady-state currents at an inlaid disk electrode obtained for a redox species in solution were computed using both formalisms with the Oldham-Zoski approximation (Oldham, K. B.; Zoski, C. G. J. Electroanal. Chem. 1988, 256, 11). Significant differences are noted for the two formalisms. When k0r0/D is sufficiently small (k0 is the standard rate constant, r0 is the radius of the disk electrode, and D is the diffusion coefficient of the redox species), the Marcus-Hush formalism effects a limiting current that can be significantly smaller than the mass-transport-limited current. This is easily explained in terms of the limiting values of k_red and k_ox predicted by the Marcus-Hush formalism. The experimental conditions that must be met to effect significant differences in behavior are discussed; experimental conditions that effect virtually identical behavior are also discussed. As a caveat for experimentalists, applications of the Butler-Volmer formalism to systems that are more properly described using the Marcus-Hush formalism are shown to yield incorrect values of k0 and meaningless values of α, which serves only as a fitting parameter.
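The contrast between the two formalisms is easy to reproduce numerically: the Butler-Volmer rate grows exponentially with overpotential, while the Marcus-Hush(-Chidsey) rate, an integral of the Marcus Gaussian over Fermi-weighted electrode states, saturates. The dimensionless units (energies in kT) and prefactor-free normalization below are illustrative conventions, not the paper's.

```python
import numpy as np

def k_mhc(eta, lam, half_width=300.0, npts=20001):
    """Marcus-Hush-Chidsey reduction rate (arbitrary prefactor).
    eta = F*(E - E0)/(R*T): dimensionless overpotential; lam: reorganization
    energy in kT. Integrates the Marcus Gaussian over Fermi-weighted
    electrode states; the rate saturates at large cathodic overpotential."""
    x = np.linspace(-half_width, half_width, npts)
    fermi = 0.5 * (1.0 - np.tanh(x / 2.0))          # 1/(1 + exp(x)), overflow-safe
    integrand = np.exp(-(lam + eta - x) ** 2 / (4.0 * lam)) * fermi
    return integrand.sum() * (x[1] - x[0])          # simple quadrature

def k_bv(eta, alpha=0.5):
    """Butler-Volmer reduction rate k_red/k0 = exp(-alpha*eta):
    grows without limit as eta becomes more negative."""
    return np.exp(-alpha * eta)
```

Near equilibrium both formalisms give an effective transfer coefficient near 1/2, but far past -2λ the MHC rate is essentially flat while the BV rate keeps multiplying, which is what produces the smaller-than-mass-transport limiting currents discussed above.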
Running Out of Time: Why Elephants Don't Gallop
NASA Astrophysics Data System (ADS)
Noble, Julian V.
2001-11-01
The physics of high-speed running implies that galloping becomes impossible for sufficiently large animals. Some authors have suggested that this is because the strength/weight ratio decreases with size and eventually renders large animals excessively liable to injury when they attempt to gallop. This paper suggests instead that large animals cannot move their limbs rapidly enough to take advantage of leaving the ground, and hence are restricted to walking gaits. From this point of view the relatively low strength/weight ratio of elephants follows from their inability to gallop, rather than causing it.
Two New PRP Conjugate Gradient Algorithms for Minimization Optimization Models.
Yuan, Gonglin; Duan, Xiabin; Liu, Wenjie; Wang, Xiaoliang; Cui, Zengru; Sheng, Zhou
2015-01-01
Two new PRP conjugate gradient algorithms are proposed in this paper based on two modified PRP conjugate gradient methods: the first algorithm is proposed for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method uses two kinds of information: function values and gradient values. Both methods possess some good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search method; (3) the search direction has the sufficient descent property without the use of any line search method. Under some suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate our algorithms. The numerical results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second algorithm is effective for solving large-scale nonlinear equations.
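A PRP-type conjugate gradient iteration with the β_k ≥ 0 property can be sketched as follows. This uses the classic PRP+ truncation β_k = max(β_PRP, 0) with an Armijo backtracking line search and a steepest-descent restart; it is an illustration of the family of methods, not the authors' exact modified formulas, and the quadratic test objective is invented:

```python
import math

def f(x):
    # Convex quadratic test objective: f = x0^2 + 2*x1^2 + x0*x1 - 4*x0
    return x[0] ** 2 + 2 * x[1] ** 2 + x[0] * x[1] - 4 * x[0]

def grad(x):
    return [2 * x[0] + x[1] - 4, x[0] + 4 * x[1]]

def prp_plus_cg(x, tol=1e-8, max_iter=500):
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(max_iter):
        norm2 = sum(gi * gi for gi in g)
        if math.sqrt(norm2) < tol:
            break
        gd = sum(gi * di for gi, di in zip(g, d))
        if gd > -1e-12:              # safeguard: restart with steepest descent
            d = [-gi for gi in g]
            gd = -norm2
        t, fx = 1.0, f(x)            # Armijo backtracking line search
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * gd:
            t *= 0.5
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # Polak-Ribiere-Polyak beta, truncated so that beta_k >= 0
        beta = max(0.0, sum(gn * (gn - go) for gn, go in zip(g_new, g)) / norm2)
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

xmin = prp_plus_cg([0.0, 0.0])       # exact minimizer is (16/7, -4/7)
```

The truncation at zero is what guarantees β_k ≥ 0, the first of the three properties listed in the abstract.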
Relaxing the cosmological constant: a proof of concept
NASA Astrophysics Data System (ADS)
Alberte, Lasma; Creminelli, Paolo; Khmelnitsky, Andrei; Pirtskhalava, David; Trincherini, Enrico
2016-12-01
We propose a technically natural scenario whereby an initially large cosmological constant (c.c.) is relaxed down to the observed value due to the dynamics of a scalar evolving on a very shallow potential. The model crucially relies on a sector that violates the null energy condition (NEC) and gets activated only when the Hubble rate becomes sufficiently small — of the order of the present one. As a result of NEC violation, this low-energy universe evolves into inflation, followed by reheating and the standard Big Bang cosmology. The symmetries of the theory force the c.c. to be the same before and after the NEC-violating phase, so that a late-time observer sees an effective c.c. of the correct magnitude. Importantly, our model allows neither for eternal inflation nor for a set of possible values of dark energy, the latter fixed by the parameters of the theory.
NASA Astrophysics Data System (ADS)
Shepherd, D.; Burgess, D.; Jickells, T.; Andrews, J.; Cave, R.; Turner, R. K.; Aldridge, J.; Parker, E. R.; Young, E.
2007-07-01
A hydrodynamic model is developed for the Blackwater estuary (UK) and used to estimate nitrate removal by denitrification. Using the model, sediment analysis and estimates of sedimentation rates, we estimate changes in estuarine denitrification and intertidal carbon and nutrient storage and the associated value of habitat created under a scenario of extensive managed realignment. We then use this information, together with engineering and land costs, to conduct a cost-benefit analysis of the managed realignment. This demonstrates that over a 50-100 year timescale the value of the habitat created and carbon buried is sufficient to make the large-scale managed realignment cost-effective. The analysis reveals that carbon and nutrient storage plus habitat creation represent major and quantifiable benefits of realignment. The methodology described here can be readily transferred to other coastal systems.
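The cost-benefit logic over a multi-decade horizon can be sketched as a simple net-present-value calculation. All monetary figures and the discount rate below are hypothetical placeholders, not values from the study:

```python
def npv(annual_benefit, upfront_cost, years, rate):
    """Net present value: discounted stream of equal annual benefits
    minus the upfront engineering and land cost."""
    pv = sum(annual_benefit / (1.0 + rate) ** t for t in range(1, years + 1))
    return pv - upfront_cost

# Hypothetical figures, not from the study: 0.4M GBP/yr of habitat,
# carbon and nutrient benefits against 10M GBP of upfront costs.
npv_50yr = npv(0.4e6, 10e6, 50, 0.03)   # positive over a 50-year horizon
npv_20yr = npv(0.4e6, 10e6, 20, 0.03)   # negative over a shorter horizon
```

This illustrates why the timescale matters: slowly accruing ecosystem benefits can only offset a large upfront cost when the evaluation horizon is long enough.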
Gene and translation initiation site prediction in metagenomic sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyatt, Philip Douglas; LoCascio, Philip F; Hauser, Loren John
2012-01-01
Gene prediction in metagenomic sequences remains a difficult problem. Current sequencing technologies do not achieve sufficient coverage to assemble the individual genomes in a typical sample; consequently, sequencing runs produce a large number of short sequences whose exact origin is unknown. Since these sequences are usually smaller than the average length of a gene, algorithms must make predictions based on very little data. We present MetaProdigal, a metagenomic version of the gene prediction program Prodigal, which can identify genes in short, anonymous coding sequences with a high degree of accuracy. The novel value of the method consists of enhanced translation initiation site identification, the ability to identify sequences that use alternate genetic codes, and confidence values for each gene call. We compare the results of MetaProdigal with other methods and conclude with a discussion of future improvements.
Comment on "Universal relation between skewness and kurtosis in complex dynamics"
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2015-12-01
In a recent paper [M. Cristelli, A. Zaccaria, and L. Pietronero, Phys. Rev. E 85, 066108 (2012), 10.1103/PhysRevE.85.066108], the authors analyzed the relation between skewness and kurtosis for complex dynamical systems and identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with 4/3. They concluded that the observed relation is a universal fact in complex dynamical systems. In this Comment, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic data, and we show that in fact it is not universal and originates only from the small number of data points in the datasets considered. The proposed relation is tested using a family of non-Gaussian distributions known as q-Gaussians. We show that this relation disappears for sufficiently large datasets provided that the fourth moment of the distribution is finite. We find that the kurtosis saturates to a single value, which is of course different from the Gaussian case (K = 3), as the number of data points is increased; this indicates that the kurtosis converges to a finite single value as long as all moments of the distribution up to the fourth are finite. The converged kurtosis value for finite-fourth-moment distributions, and the number of data points needed to reach it, depend on the deviation of the original distribution from the Gaussian case.
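The convergence of kurtosis for finite-fourth-moment distributions can be illustrated by sampling from Student's t, which for ν degrees of freedom coincides with a q-Gaussian with q = (ν+3)/(ν+1). This is a generic sketch under those standard identities, not the authors' datasets:

```python
import math
import random

def student_t(df, rng):
    """One draw from Student's t via a normal over the root of a
    scaled chi-square (a q-Gaussian with q = (df+3)/(df+1))."""
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def kurtosis(xs):
    """Plain (Pearson) sample kurtosis m4 / m2^2; equals 3 for a Gaussian."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2

rng = random.Random(1)
df = 10                        # fourth moment is finite for df > 4
sample = [student_t(df, rng) for _ in range(200_000)]
k = kurtosis(sample)           # converges near 3 + 6/(df - 4) = 4 for large n
```

With df ≤ 4 the fourth moment diverges and the sample kurtosis never settles, which is exactly the distinction the Comment draws.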
Marquart, Hans; Warren, Nicholas D; Laitinen, Juha; van Hemmen, Joop J
2006-07-01
Dermal exposure needs to be addressed in regulatory risk assessment of chemicals. The models used so far are based on very limited data. The EU project RISKOFDERM has gathered a large number of new measurements on dermal exposure to industrial chemicals in various work situations, together with information on possible determinants of exposure. These data and information, together with some non-RISKOFDERM data were used to derive default values for potential dermal exposure of the hands for so-called 'TGD exposure scenarios'. TGD exposure scenarios have similar values for some very important determinant(s) of dermal exposure, such as amount of substance used. They form narrower bands within the so-called 'RISKOFDERM scenarios', which cluster exposure situations according to the same purpose of use of the products. The RISKOFDERM scenarios in turn are narrower bands within the so-called Dermal Exposure Operation units (DEO units) that were defined in the RISKOFDERM project to cluster situations with similar exposure processes and exposure routes. Default values for both reasonable worst case situations and typical situations were derived, both for single datasets and, where possible, for combined datasets that fit the same TGD exposure scenario. 
The following reasonable worst case potential hand exposures were derived from combined datasets: (i) loading and filling of large containers (or mixers) with large amounts (many litres) of liquids: 11,500 mg per scenario (14 mg cm^-2 per scenario with the surface of the hands assumed to be 820 cm^2); (ii) careful mixing of small quantities (tens of grams in <1 l): 4.1 mg per scenario (0.005 mg cm^-2 per scenario); (iii) spreading of (viscous) liquids with a comb on a large surface area: 130 mg per scenario (0.16 mg cm^-2 per scenario); (iv) brushing and rolling of (relatively viscous) liquid products on surfaces: 6500 mg per scenario (8 mg cm^-2 per scenario) and (v) spraying large amounts of liquids (paints, cleaning products) on large areas: 12,000 mg per scenario (14 mg cm^-2 per scenario). These default values are considered useful for estimating exposure for similar substances in similar situations with low uncertainty. Several other default values based on single datasets can also be used, but lead to estimates with a higher uncertainty, due to their more limited basis. Sufficient analogy in all described parameters of the scenario, including duration, is needed to enable proper use of the default values. The default values lead to similar estimates as the RISKOFDERM dermal exposure model that was based on the same datasets, but uses very different parameters. Both approaches are preferred over older general models, such as EASE, that are not based on data from actual dermal exposure situations.
Tuning the presence of dynamical phase transitions in a generalized XY spin chain.
Divakaran, Uma; Sharma, Shraddha; Dutta, Amit
2016-05-01
We study an integrable spin chain with three-spin interactions and a staggered field (λ), where the latter is quenched either slowly [in a linear fashion in time t as t/τ, where t goes from a large negative value to a large positive value and τ is the inverse rate of quenching] or suddenly. In the process, the system crosses quantum critical points and gapless phases. We address the question of whether there exist nonanalyticities [known as dynamical phase transitions (DPTs)] in the subsequent real-time evolution of the state (reached following the quench) governed by the final time-independent Hamiltonian. In the case of sufficiently slow quenching (when τ exceeds a critical value τ_1), we show that DPTs, of a form similar to those occurring for quenching across an isolated critical point, can occur even when the system is slowly driven across more than one critical point and gapless phases. More interestingly, in the anisotropic situation we show that DPTs can completely disappear for some values of the anisotropy term (γ) and τ, thereby establishing the existence of boundaries in the (γ-τ) plane between the DPT and no-DPT regions in both isotropic and anisotropic cases. Our study therefore leads to a unique situation in which DPTs may not occur even when an integrable model is slowly ramped across a QCP. On the other hand, considering sudden quenches from an initial value λ_i to a final value λ_f, we show that the condition for the presence of DPTs is governed by relations involving λ_i, λ_f, and γ, and that the spin chain must be swept across λ = 0 for DPTs to occur.
Security solutions: strategy and architecture
NASA Astrophysics Data System (ADS)
Seto, Myron W. L.
2002-04-01
Producers of banknotes, other documents of value and brand-name goods are constantly presented with new challenges due to the ever-increasing sophistication of easily accessible desktop publishing and color copying machines, which can be used for counterfeiting. Large crime syndicates have also shown that they have the means and the willingness to invest large sums of money to mimic security features. To ensure sufficient and appropriate protection, a coherent security strategy has to be put into place. The feature has to be appropriately geared to fight against the different types of attacks and attackers, and to have the right degree of sophistication or ease of authentication depending on who performs a check and where it is made. Furthermore, the degree of protection can be considerably increased by taking a multi-layered approach and using an open platform architecture. Features can be stratified to encompass overt, semi-covert, covert and forensic features.
Startup analysis for a high temperature gas loaded heat pipe
NASA Technical Reports Server (NTRS)
Sockol, P. M.
1973-01-01
A model for the rapid startup of a high-temperature gas-loaded heat pipe is presented. A two-dimensional diffusion analysis is used to determine the rate of energy transport by the vapor between the hot and cold zones of the pipe. The vapor transport rate is then incorporated in a simple thermal model of the startup of a radiation-cooled heat pipe. Numerical results for an argon-lithium system show that radial diffusion to the cold wall can produce large vapor flow rates during a rapid startup. The results also show that startup is not initiated until the vapor pressure p_v in the hot zone reaches a precise value proportional to the initial gas pressure p_i. Through proper choice of p_i, startup can be delayed until p_v is large enough to support a heat-transfer rate sufficient to overcome a thermal load on the heat pipe.
Fixation of slightly beneficial mutations: effects of life history.
Vindenes, Yngvild; Lee, Aline Magdalena; Engen, Steinar; Saether, Bernt-Erik
2010-04-01
Recent studies of rates of evolution have revealed large systematic differences among organisms with different life histories, both within and among taxa. Here, we consider how life history may affect the rate of evolution via its influence on the fixation probability of slightly beneficial mutations. Our approach is based on diffusion modeling for a finite, stage-structured population with stochastic population dynamics. The results, which are verified by computer simulations, demonstrate that even with complex population structure just two demographic parameters are sufficient to give an accurate approximation of the fixation probability of a slightly beneficial mutation. These are the reproductive value of the stage in which the mutation first occurs and the demographic variance of the population. The demographic variance also determines what influence population size has on the fixation probability. This model represents a substantial generalization of earlier models, covering a large range of life histories.
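For background, the classic unstructured diffusion result for the fixation probability of a beneficial mutation (Kimura's formula) can be sketched as follows. The paper's stage-structured generalization replaces the roles of N and s with the reproductive value of the mutant's stage and the demographic variance; this sketch does not implement that generalization, only the baseline it extends:

```python
import math

def fixation_prob(N, s, p):
    """Kimura's diffusion approximation for the fixation probability of
    an allele with selective advantage s at initial frequency p in a
    diploid population of constant size N (no stage structure)."""
    if s == 0:
        return p          # neutral case: fixation probability equals p
    return (1 - math.exp(-4 * N * s * p)) / (1 - math.exp(-4 * N * s))

u = fixation_prob(1000, 0.005, 1 / 2000)   # a single new copy among 2N alleles
haldane = 2 * 0.005                        # Haldane's classic 2s approximation
```

For a slightly beneficial mutation in a large population the formula reduces to roughly 2s, which is the regime the abstract's stage-structured approximation refines.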
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, P.; Powers, W.; Hritzay, D.
1959-06-01
The development of an arc wind tunnel capable of stagnation pressures in excess of twenty atmospheres and using as much as fifteen megawatts of electrical power is described. The calibration of this facility shows that it is capable of reproducing the aerodynamic environment encountered by vehicles flying at velocities as great as satellite velocity. Its use as a missile re-entry material test facility is described. The large power capacity of this facility allows one to make material tests on specimens of a size sufficient to be useful for material development, yet at realistic energy and Reynolds number values. By the addition of a high-capacity vacuum system, this facility can be used to produce the low-density, high-Mach-number environment needed for simulating satellite re-entry, as well as hypersonic flight at extreme altitudes. (auth)
Quantifying the uncertainty in heritability
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-01-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270
Hough, S.E.; Avni, R.
2009-01-01
In combination with the historical record, paleoseismic investigations have provided a record of large earthquakes in the Dead Sea Rift that extends back over 1500 years. Analysis of macroseismic effects can help refine magnitude estimates for large historical events. In this study we consider the detailed intensity distributions for two large events, in 1170 CE and 1202 CE, as determined from careful reinterpretation of available historical accounts, using the 1927 Jericho earthquake as a guide in their interpretation. In the absence of an intensity attenuation relationship for the Dead Sea region, we use the 1927 Jericho earthquake to develop a preliminary relationship based on a modification of the relationships developed in other regions. Using this relation, we estimate M7.6 for the 1202 earthquake and M6.6 for the 1170 earthquake. The uncertainties for both estimates are large and difficult to quantify with precision. The large uncertainties illustrate the critical need to develop a regional intensity attenuation relation. We further consider the distribution of magnitudes in the historic record and show that it is consistent with a b-value distribution with b = 1. Considering the entire Dead Sea Rift zone, we show that the seismic moment release rate over the past 1500 years is sufficient, within the uncertainties of the data, to account for the plate tectonic strain rate along the plate boundary. The results reveal that an earthquake of M7.8 is expected within the zone on average every 1000 years. © 2011 Science From Israel/LPPLtd.
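The moment-rate bookkeeping behind such statements can be sketched with the Hanks-Kanamori moment-magnitude relation and a Gutenberg-Richter magnitude distribution with b = 1. The a-value below is pinned to the stated recurrence (one M ≥ 7.8 event per 1000 years on average), while the lower magnitude cutoff and bin width are arbitrary assumptions for illustration:

```python
import math

def seismic_moment(mw):
    """Scalar seismic moment (N*m) from moment magnitude
    (Hanks-Kanamori: log10 M0 = 1.5*Mw + 9.1)."""
    return 10 ** (1.5 * mw + 9.1)

def annual_moment_rate(a, b, m_min, m_max, dm=0.1):
    """Total annual moment release of a Gutenberg-Richter population:
    10**(a - b*m) is the annual rate of events with magnitude >= m,
    so each bin's event rate times its central moment is summed."""
    total = 0.0
    n_bins = round((m_max - m_min) / dm)
    for i in range(n_bins):
        m = m_min + i * dm
        n_bin = 10 ** (a - b * m) - 10 ** (a - b * (m + dm))
        total += n_bin * seismic_moment(m + dm / 2)
    return total

# b = 1 and one M >= 7.8 event per 1000 years fixes the a-value:
a = math.log10(1e-3) + 7.8          # so 10**(a - 1.0*7.8) = 1/1000 per year
rate = annual_moment_rate(a, 1.0, 4.0, 7.8)
```

Because moment grows as 10^(1.5m) while event counts fall as 10^(-m), the release rate is dominated by the largest events, which is why the rare M7.8 recurrence controls the budget.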
Normative values of isometric elbow strength in healthy adults: a systematic review.
Kotte, Shamala H P; Viveen, Jetske; Koenraadt, Koen L M; The, Bertram; Eygendaal, Denise
2018-07-01
Post-traumatic deformities such as biceps tendon rupture or (peri-)articular fractures of the elbow are often related to a decrease in muscle strength. Postoperative evaluation of these deformities requires normative values of elbow strength. The purpose of this systematic review was to determine these normative values of isometric elbow strength in healthy adults resulting from studies evaluating this strength (i.e. flexion, extension, pronation and supination strength). The databases of PubMed, EMBASE and Web of Sciences were searched and screened for studies involving the isometric elbow strength as measured in asymptomatic volunteers. The quality of the studies was assessed and studies of low quality were excluded. Nineteen studies met the inclusion criteria and were of sufficiently high quality to be included in the present review. In these studies, elbow strength was measured in a total of 1880 healthy volunteers. The experimental set-up and devices used to measure elbow strength varied between studies. Using some assumptions, a normative values table was assembled. Large standard deviations of normative values in combination with different measurement devices used, as well as the different measurement positions of the subjects, demonstrated that there is no consensus about measuring the isometric elbow strength and therefore the normative values have to be interpreted with caution.
Lasers with intra-cavity phase elements
NASA Astrophysics Data System (ADS)
Gulses, A. Alkan; Kurtz, Russell; Islas, Gabriel; Anisimov, Igor
2018-02-01
Conventional laser resonators yield multimodal output, especially at high powers and short cavity lengths. Since high-order modes exhibit large divergence, it is desirable to suppress them to improve laser quality. Traditionally, such modal discrimination can be achieved by simple apertures that impose absorptive loss on large-diameter modes, while allowing the lower orders, such as the fundamental Gaussian, to pass through. However, aperture-based discrimination may not be sufficient for short-cavity lasers, resulting in multimodal operation as well as power loss and overheating in the absorptive part of the aperture. In research to improve laser mode control with minimal energy loss, systematic experiments have been executed using phase-only elements. These were composed of an intra-cavity step function and a diffractive out-coupler made of a computer-generated hologram. The platform was a 15-cm-long solid-state laser employing a neodymium-doped yttrium orthovanadate crystal rod, producing 1064-nm multimodal laser output. The intra-cavity phase elements (PEs) were shown to be highly effective in obtaining beams with reduced M-squared values and increased output powers, yielding improved values of radiance. The utilization of more sophisticated diffractive elements is promising for more difficult laser systems.
Harnessing the Bethe free energy
Bapst, Victor
2016-01-01
A wide class of problems in combinatorics, computer science and physics can be described along the following lines. There are a large number of variables ranging over a finite domain that interact through constraints that each bind a few variables and either encourage or discourage certain value combinations. Examples include the k-SAT problem or the Ising model. Such models naturally induce a Gibbs measure on the set of assignments, which is characterised by its partition function. The present paper deals with the partition function of problems where the interactions between variables and constraints are induced by a sparse random (hyper)graph. According to physics predictions, a generic recipe called the "replica symmetric cavity method" yields the correct value of the partition function if the underlying model enjoys certain properties [Krzakala et al., PNAS (2007) 10318–10323]. Guided by this conjecture, we prove general sufficient conditions for the success of the cavity method. The proofs are based on a "regularity lemma" for probability measures on sets of the form Ω^n for a finite Ω and a large n that may be of independent interest. © 2016 Wiley Periodicals, Inc. Random Struct. Alg., 49, 694–741, 2016 PMID:28035178
Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter
2014-01-13
Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone will not be sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
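The screening idea, interpolating a pre-computed normal distribution for the sequence composition and converting an observed MFE into a one-sided P-value, can be sketched as follows. The parameter table and its GC-only indexing are hypothetical simplifications for illustration (the actual method conditions on the full nucleotide composition):

```python
import math

def mfe_p_value(mfe, mean_random, sd_random):
    """One-sided P-value for observing an MFE this low or lower under a
    normal model of MFEs of shuffled same-composition sequences."""
    z = (mfe - mean_random) / sd_random
    return 0.5 * math.erfc(-z / math.sqrt(2.0))   # standard normal CDF at z

def interpolate_params(gc, table):
    """Linear interpolation of pre-computed (mean, sd) by GC fraction.
    'table' maps GC fraction -> (mean, sd); gc must lie within its range."""
    keys = sorted(table)
    lo = max(k for k in keys if k <= gc)
    hi = min(k for k in keys if k >= gc)
    if lo == hi:
        return table[lo]
    w = (gc - lo) / (hi - lo)
    return tuple((1 - w) * a + w * b for a, b in zip(table[lo], table[hi]))

# Hypothetical pre-computed distributions (kcal/mol) for two GC contents:
table = {0.40: (-25.0, 4.0), 0.60: (-35.0, 5.0)}
mean, sd = interpolate_params(0.50, table)       # (-30.0, 4.5)
p = mfe_p_value(-42.0, mean, sd)                 # low P: pre-miRNA candidate
```

Only the table lookup and a closed-form CDF evaluation are done per candidate, which is where the genome-wide speedup comes from.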
Improving Butanol Fermentation To Enter the Advanced Biofuel Market
Tracy, Bryan P.
2012-01-01
1-Butanol is a large-volume, intermediate chemical with favorable physical and chemical properties for blending with or directly substituting for gasoline. The per-volume value of butanol, as a chemical, is sufficient to justify investing in the recommercialization of the classical acetone-butanol-ethanol (ABE) fermentation process (E. M. Green, Curr. Opin. Biotechnol. 22:337–343, 2011). Furthermore, with modest improvements in three areas of the ABE process, operating costs can be sufficiently decreased to make butanol an economically viable advanced biofuel. The three areas of greatest interest are (i) maximizing yields of butanol on any particular substrate, (ii) expanding the substrate utilization capabilities of the host microorganism, and (iii) reducing the energy consumption of the overall production process, in particular the separation and purification operations. In their study in the September/October 2012 issue of mBio, Jang et al. [mBio 3(5):e00314-12, 2012] describe a comprehensive study on driving glucose metabolism in Clostridium acetobutylicum to the production of butanol. Moreover, they execute a metabolic engineering strategy to achieve the highest yet reported yields of butanol on glucose. PMID:23232720
Neighborhood scale quantification of ecosystem goods and ...
Ecosystem goods and services are those ecological structures and functions that humans can directly relate to their state of well-being. Ecosystem goods and services include, but are not limited to, a sufficient fresh water supply, fertile lands to produce agricultural products, shading, air and water of sufficient quality for designated uses, flood water retention, and places to recreate. The US Environmental Protection Agency (USEPA) Office of Research and Development's Tampa Bay Ecosystem Services Demonstration Project (TBESDP) modeling efforts organized existing literature values for biophysical attributes and processes related to EGS. The goal was to develop a database for informing map-based EGS assessments for current and future land cover/use scenarios at multiple scales. This report serves as a demonstration of applying an EGS assessment approach at the large neighborhood scale (~1,000 acres of residential parcels plus common areas). Here, we present mapped inventories of ecosystem goods and services production at a neighborhood scale within the Tampa Bay, FL region. Comparisons of the inventory between two alternative neighborhood designs are presented as an example of how one might apply EGS concepts at this scale.
Detecting Thermal Cloaks via Transient Effects
Sklan, Sophia R.; Bai, Xue; Li, Baowen; Zhang, Xiang
2016-01-01
Recent research on the development of a thermal cloak has concentrated on engineering an inhomogeneous thermal conductivity and an approximate, homogeneous volumetric heat capacity. While the perfect cloak of inhomogeneous κ and inhomogeneous ρc_p is known to be exact (no signal scattering, and only mean values penetrating to the cloak's interior), the sensitivity of diffusive cloaks to defects and approximations has not been analyzed. We analytically demonstrate that these approximate cloaks are detectable. Although they work as perfect cloaks in the steady state, their transient (time-dependent) response is imperfect and a small amount of heat is scattered. This is sufficient to determine the presence of a cloak and any heat source it contains, but the material composition hidden within the cloak is not detectable in practice. To demonstrate the feasibility of this technique, we constructed a cloak with a similar approximation and directly detected its presence using these transient temperature deviations outside the cloak. Due to limitations in the range of experimentally accessible volumetric specific heats, our detection scheme should allow us to find any realizable cloak, assuming a sufficiently large temperature difference. PMID:27605153
Heimes, F.J.; Moore, G.K.; Steele, T.D.
1978-01-01
Expanded energy- and recreation-related activities in the Yampa River basin, Colorado and Wyoming, have caused a rapid increase in economic development which will result in increased demand and competition for natural resources. In planning for efficient allocation of the basin's natural resources, Landsat images and small-scale color and color-infrared photographs were used for selected geologic, hydrologic and land-use applications within the Yampa River basin. Applications of Landsat data included: (1) regional land-use classification and mapping, (2) lineament mapping, and (3) areal snow-cover mapping. Results from the Landsat investigations indicated that: (1) Landsat land-use classification maps, at a regional level, compared favorably with areal land-use patterns that were defined from available ground information, (2) lineaments were mapped in sufficient detail using recently developed techniques for interpreting aerial photographs, (3) snow cover generally could be mapped for large areas, with the exception of some densely forested areas of the basin and areas having a large percentage of winter-season cloud cover. Aerial photographs were used for estimation of turbidity at eight stream locations in the basin. Spectral reflectance values obtained by digitizing photographs were compared with measured turbidity values. Results showed strong correlations (explained variance greater than 90 percent) between spectral reflectance obtained from color photographs and measured turbidity values. (Woodard-USGS)
Design and Construction of an Urban Runoff Research Facility
Wherley, Benjamin G.; White, Richard H.; McInnes, Kevin J.; Fontanier, Charles H.; Thomas, James C.; Aitkenhead-Peterson, Jacqueline A.; Kelly, Steven T.
2014-01-01
As the urban population increases, so does the area of irrigated urban landscape. Summer water use in urban areas can be 2-3x winter baseline water use due to increased demand for landscape irrigation. Improper irrigation practices and large rainfall events can result in runoff from urban landscapes which has the potential to carry nutrients and sediments into local streams and lakes, where they may contribute to eutrophication. A 1,000 m^2 facility was constructed which consists of 24 individual 33.6 m^2 field plots, each equipped for measuring total runoff volumes with time and collection of runoff subsamples at selected intervals for quantification of chemical constituents in the runoff water from simulated urban landscapes. Runoff volumes from the first and second trials had coefficient of variability (CV) values of 38.2 and 28.7%, respectively. CV values for runoff pH, EC, and Na concentration for both trials were all under 10%. Concentrations of DOC, TDN, DON, PO4-P, K+, Mg2+, and Ca2+ had CV values less than 50% in both trials. Overall, the results of testing performed after sod installation at the facility indicated good uniformity between plots for runoff volumes and chemical constituents. The large plot size is sufficient to include much of the natural variability and therefore provides better simulation of urban landscape ecosystems. PMID:25146420
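The quoted CV statistics follow the usual definition (standard deviation over mean, as a percentage). A minimal sketch with hypothetical replicate-plot runoff volumes, not data from the facility:

```python
import math

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical runoff volumes (liters) from six replicate plots:
runoff = [120.0, 95.0, 150.0, 110.0, 130.0, 88.0]
cv = coefficient_of_variation(runoff)   # roughly 20% for these values
```

Being dimensionless, the CV lets runoff volumes, pH and ion concentrations be compared on a single uniformity scale, as the abstract does.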
The multiple facets of Peto's paradox: a life-history model for the evolution of cancer suppression.
Brown, Joel S; Cunningham, Jessica J; Gatenby, Robert A
2015-07-19
Large animals should have higher lifetime probabilities of cancer than small animals because each cell division carries an attendant risk of mutating towards a tumour lineage. However, this is not observed--a (Peto's) paradox that suggests large and/or long-lived species have evolved effective cancer suppression mechanisms. Using the Euler-Lotka population model, we demonstrate the evolutionary value of cancer suppression as determined by the 'cost' (decreased fecundity) of suppression versus the 'cost' of cancer (reduced survivorship). Body size per se will not select for sufficient cancer suppression to explain the paradox. Rather, cancer suppression should be most extreme when the probability of non-cancer death decreases with age (e.g. alligators), maturation is delayed, fecundity rates are low and fecundity increases with age. Thus, the value of cancer suppression is predicted to be lowest in the vole (short lifespan, high fecundity) and highest in the naked mole rat (long lived with late female sexual maturity). The life history of pre-industrial humans likely selected for quite low levels of cancer suppression. In modern humans that live much longer, this level results in unusually high lifetime cancer risks. The model predicts a lifetime risk of 49% compared with the current empirical value of 43%. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
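The Euler-Lotka framework the authors build on can be sketched numerically. The life table below is hypothetical, purely to show the mechanics: the discrete Euler-Lotka equation sum_x exp(-r x) l(x) m(x) = 1 is solved for the growth rate r, and anything that lowers survivorship l(x) (e.g. cancer) lowers r, which is what suppression 'buys' back at the price of fecundity m(x).

```python
import math

def euler_lotka_r(l, m, lo=-1.0, hi=1.0):
    """Solve the discrete Euler-Lotka equation
    sum_x exp(-r*x) * l[x] * m[x] = 1 for r by bisection.
    l[x]: survivorship to age x; m[x]: fecundity at age x."""
    def f(r):
        return sum(math.exp(-r * x) * lx * mx
                   for x, (lx, mx) in enumerate(zip(l, m))) - 1.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) > 0:   # f is strictly decreasing in r
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical life table: reproduction starts at age 2.
l = [1.0, 0.9, 0.8, 0.7, 0.6]   # survivorship
m = [0.0, 0.0, 1.0, 1.0, 1.0]   # fecundity
r = euler_lotka_r(l, m)
```

Rerunning with survivorship degraded at late ages (a crude stand-in for cancer mortality) gives a smaller r, quantifying the selective value of suppression.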
NASA Astrophysics Data System (ADS)
Wan, Ling; Wang, Tao
2017-06-01
We consider the Navier-Stokes equations for compressible heat-conducting ideal polytropic gases in a bounded annular domain when the viscosity and thermal conductivity coefficients are general smooth functions of temperature. A global-in-time, spherically or cylindrically symmetric, classical solution to the initial boundary value problem is shown to exist uniquely and converge exponentially to the constant state as the time tends to infinity under certain assumptions on the initial data and the adiabatic exponent γ. The initial data can be large if γ is sufficiently close to 1. These results are of Nishida-Smoller type and extend the work (Liu et al. (2014) [16]) restricted to the one-dimensional flows.
Interior volume of (1 + D)-dimensional Schwarzschild black hole
NASA Astrophysics Data System (ADS)
Bhaumik, Nilanjandev; Majhi, Bibhas Ranjan
2018-01-01
We calculate the maximum interior volume, enclosed by the event horizon, of a (1 + D)-dimensional Schwarzschild black hole. Taking into account the mass change due to Hawking radiation, we show that the volume increases towards the end of the evaporation. This fact is not new, as it has been observed earlier for the four-dimensional case. The interesting point we observe is that this increase rate decreases towards higher values of the space dimension D; i.e. it is a decelerated expansion of volume with increasing spatial dimension. This implies that for a sufficiently large D, the maximum interior volume does not change. The possible implications of these results are also discussed.
Pierre, Th
2013-01-01
In a new toroidal laboratory plasma device including a poloidal magnetic field created by an internal circular conductor, the confinement efficiency of the magnetized plasma and the turbulence level are studied in different situations. The plasma density is greatly enhanced when a sufficiently large poloidal magnetic field is established. Moreover, the instabilities and the turbulence usually found in toroidal devices without sheared magnetic field lines are suppressed by the finite rotational transform. The particle confinement time is estimated from the measurement of the plasma decay time. It is compared to the Bohm diffusion time and to the value predicted by different diffusion models, in particular neoclassical diffusion involving trapped particles.
NASA Technical Reports Server (NTRS)
Mcdonnell, J. A. M.; Evans, G. C.; Evans, S. T.; Alexander, W. M.; Burton, W. M.; Firth, J. G.; Bussoletti, E.; Grard, R. J. L.; Hanner, M. S.; Sekanina, Z.
1987-01-01
Analyses are presented of Giotto's Dust Impact Detection System experiment measurements of dust grains incident on the Giotto dust shield along its trajectory through the coma of comet P/Halley on March 13 and 14, 1986. Ground-based CCD imagery of the inner coma dust continuum at the time of the encounter is used to derive the area of grains intercepted by Giotto. Data obtained at large masses show clear evidence of a decrease in the mass distribution index at these masses within the coma; it is shown that such a value of the mass index can furnish sufficient mass for consistency with an observed deceleration.
A study of fracture phenomena in fiber composite laminates. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Konish, H. J., Jr.
1973-01-01
The extension of linear elastic fracture mechanics from ostensibly homogeneous isotropic metallic alloys to heterogeneous anisotropic advanced fiber composites is considered. It is analytically demonstrated that the effects of material anisotropy do not alter the principal characteristics exhibited by a crack in an isotropic material. The heterogeneity of fiber composites is experimentally shown to have a negligible effect on the behavior of a sufficiently long crack. A method is proposed for predicting the fracture strengths of a large class of composite laminates; the values predicted by this method show good agreement with limited experimental data. The limits imposed by material heterogeneity are briefly discussed, and areas for further study are recommended.
A Conceptual Approach to Absolute Value Equations and Inequalities
ERIC Educational Resources Information Center
Ellis, Mark W.; Bryson, Janet L.
2011-01-01
The absolute value learning objective in high school mathematics requires students to solve far more complex absolute value equations and inequalities. When absolute value problems become more complex, students often do not have sufficient conceptual understanding to make any sense of what is happening mathematically. The authors suggest that the…
A study of atmospheric aerosol optical properties over Alexandria city- Egypt
NASA Astrophysics Data System (ADS)
E Kohil, E.; Saleh, I. H.; Ghatass, Z. F.
2017-02-01
Aerosols are minute particles suspended in the atmosphere. When these particles are sufficiently large, we notice their presence as they scatter and absorb sunlight. They scatter and absorb optical radiation depending upon their size distribution, refractive index and total atmospheric loading. Aerosol optical depth (AOD) was measured at Alexandria city (31° 16′ N, 30° 01′ E and 21 m above sea level) using a hand-held microprocessor-based sun photometer "MICROTOPS II". AOD is studied at five different wavelengths from 380 to 1020 nm during the period from Aug-2015 to Feb-2016. Precipitable water column (PWC) is estimated from the measurements of solar intensity at 936 and 1020 nm. Diurnal, monthly and seasonal variation of AOD and water vapor content was studied during the study period. The seasonal variation of AOD has a high value (0.416) in summer and a low value (0.176) in winter at a wavelength of 380 nm. The changes in the PWC have been found to be correlated with changes in AOD. This is supported by the observed increase of AOD with relative humidity (RH) values.
Values Education: Why the Teaching of Values in Schools Is Necessary, but Not Sufficient
ERIC Educational Resources Information Center
Etherington, Matthew
2013-01-01
In recent years, a growing demand by educators, governments, and the community for the teaching of values in public schools has led to the implementation of values education. As acknowledged by the 2010 Living Skills Values Education Program, values education is an essential part of schooling. In the public school system, there have been attempts…
Behavioral Interventions to Advance Self-Sufficiency
ERIC Educational Resources Information Center
MDRC, 2017
2017-01-01
As the first major effort to use a behavioral economics lens to examine human services programs that serve poor and vulnerable families in the United States, the Behavioral Interventions to Advance Self-Sufficiency (BIAS) project demonstrated the value of applying behavioral insights to improve the efficacy of human services programs. The BIAS…
Hannah, J.L.; Stein, H.J.
1986-01-01
Quartz phenocrysts from 31 granitoid stocks in the Colorado Mineral Belt yield δ18O values less than 10.4‰, with most values between 9.3 and 10.4‰. An average magmatic value of about 8.5‰ is suggested. The stocks resemble A-type granites; these data support magma genesis by partial melting of previously depleted, fluorine-enriched, lower crustal granulites, followed by extreme differentiation and volatile evolution in the upper crust. Subsolidus interaction of isotopically light water with stocks has reduced most feldspar and whole rock δ18O values. Unaltered samples from Climax-type molybdenum-bearing granites, however, show no greater isotopic disturbance than samples from unmineralized stocks. Although meteoric water certainly played a role in post-mineralization alteration, particularly in feldspars, it is not required during high-temperature mineralization processes. We suggest that slightly low δ18O values in some vein and replacement minerals associated with molybdenum mineralization may have resulted from equilibration with isotopically light magmatic water and/or heavy isotope depletion of the ore fluid by precipitation of earlier phases. Accumulation of sufficient quantities of isotopically light magmatic water to produce measured depletions of 18O requires extreme chemical stratification in a large magma reservoir. Upward migration of a highly fractionated, volatile-rich magma into a small apical Climax-type diapir, including large scale transport of silica, alkalis, molybdenum, and other vapor soluble elements, may occur with depression of the solidus temperature and reduction of magma viscosity by fluorine. Climax-type granites may provide examples of 18O depletion in magmatic systems without meteoric water influx. © 1986 Springer-Verlag.
Steady finite-Reynolds-number flows in three-dimensional collapsible tubes
NASA Astrophysics Data System (ADS)
Hazel, Andrew L.; Heil, Matthias
2003-07-01
A fully coupled finite-element method is used to investigate the steady flow of a viscous fluid through a thin-walled elastic tube mounted between two rigid tubes. The steady three-dimensional Navier-Stokes equations are solved simultaneously with the equations of geometrically nonlinear Kirchhoff-Love shell theory. If the transmural (internal minus external) pressure acting on the tube is sufficiently negative then the tube buckles non-axisymmetrically and the subsequent large deformations lead to a strong interaction between the fluid and solid mechanics. The main effect of fluid inertia on the macroscopic behaviour of the system is due to the Bernoulli effect, which induces an additional local pressure drop when the tube buckles and its cross-sectional area is reduced. Thus, the tube collapses more strongly than it would in the absence of fluid inertia. Typical tube shapes and flow fields are presented. In strongly collapsed tubes, at finite values of the Reynolds number, two 'jets' develop downstream of the region of strongest collapse and persist for considerable axial distances. For sufficiently high values of the Reynolds number, these jets impact upon the sidewalls and spread azimuthally. The consequent azimuthal transport of momentum dramatically changes the axial velocity profiles, which become approximately Θ-shaped when the flow enters the rigid downstream pipe. Further convection of momentum causes the development of a ring-shaped velocity profile before the ultimate return to a parabolic profile far downstream.
Oblique nonlinear whistler wave
NASA Astrophysics Data System (ADS)
Yoon, Peter H.; Pandey, Vinay S.; Lee, Dong-Hun
2014-03-01
Motivated by satellite observation of large-amplitude whistler waves propagating in oblique directions with respect to the ambient magnetic field, a recent letter discusses the physics of large-amplitude whistler waves and relativistic electron acceleration. One of the conclusions of that letter is that oblique whistler waves will eventually undergo nonlinear steepening regardless of the amplitude. The present paper reexamines this claim and finds that the steepening associated with the density perturbation almost never occurs, unless whistler waves have sufficiently high amplitude and propagate sufficiently close to the resonance cone angle.
Effects of SO(10)-inspired scalar non-universality on the MSSM parameter space at large tanβ
NASA Astrophysics Data System (ADS)
Ramage, M. R.
2005-08-01
We analyze the parameter space of the (μ > 0, A0 = 0) CMSSM at large tanβ with a small degree of non-universality originating from D-terms and Higgs-sfermion splitting inspired by SO(10) GUT models. The effects of such non-universalities on the sparticle spectrum and on observables such as (g-2)μ, B(b→Xγ), the SUSY threshold corrections to the bottom mass and the relic density Ωχh2 are examined in detail, and the consequences for the allowed parameter space of the model are investigated. We find that even small deviations from universality can result in large qualitative differences compared to the universal case; for certain values of the parameters, we find, even at low m0 and m1/2, that radiative electroweak symmetry breaking fails as a consequence of either μ2 < 0 or mA2 < 0. We find particularly large departures from the mSugra case for the neutralino relic density, which is sensitive to significant changes in the position and shape of the A resonance and a substantial increase in the Higgsino component of the LSP. However, we find that the corrections to the bottom mass are not sufficient to allow for Yukawa unification.
Close the High Seas to Fishing?
White, Crow; Costello, Christopher
2014-01-01
The world's oceans are governed as a system of over 150 sovereign exclusive economic zones (EEZs, ∼42% of the ocean) and one large high seas (HS) commons (∼58% of ocean) with essentially open access. Many high-valued fish species such as tuna, billfish, and shark migrate around these large oceanic regions, which as a consequence of competition across EEZs and a global race-to-fish on the HS, have been over-exploited and now return far less than their economic potential. We address this global challenge by analyzing with a spatial bioeconomic model the effects of completely closing the HS to fishing. This policy both induces cooperation among countries in the exploitation of migratory stocks and provides a refuge sufficiently large to recover and maintain these stocks at levels close to those that would maximize fisheries returns. We find that completely closing the HS to fishing would simultaneously give rise to large gains in fisheries profit (>100%), fisheries yields (>30%), and fish stock conservation (>150%). We also find that changing EEZ size may benefit some fisheries; nonetheless, a complete closure of the HS still returns larger fishery and conservation outcomes than does a HS open to fishing. PMID:24667759
Conformal window 2.0: The large Nf safe story
NASA Astrophysics Data System (ADS)
Antipin, Oleg; Sannino, Francesco
2018-06-01
We extend the phase diagram of SU(N) gauge-fermion theories as a function of the number of flavors and colors to the region in which asymptotic freedom is lost. We argue, using large Nf results, for the existence of an ultraviolet interacting fixed point at a sufficiently large number of flavors opening up to a second ultraviolet conformal window in the number of flavors vs colors phase diagram. We first review the state-of-the-art for the large Nf beta function and then estimate the lower boundary of the ultraviolet window. The theories belonging to this new region are examples of safe non-Abelian quantum electrodynamics, termed here safe QCD. Therefore, according to Wilson, they are fundamental. An important critical quantity is the fermion mass anomalous dimension at the ultraviolet fixed point that we determine at leading order in 1/Nf. We discover that its value is comfortably below the bootstrap bound. We also investigate the Abelian case and find that at the potential ultraviolet fixed point the related fermion mass anomalous dimension has a singular behavior suggesting that a more careful investigation of its ultimate fate is needed.
Free Surface Wave Interaction with a Horizontal Cylinder
NASA Astrophysics Data System (ADS)
Oshkai, P.; Rockwell, D.
1999-10-01
Classes of vortex formation from a horizontal cylinder adjacent to an undulating free-surface wave are characterized using high-image-density particle image velocimetry. Instantaneous representations of the velocity field, streamline topology and vorticity patterns yield insight into the origin of unsteady loading of the cylinder. For sufficiently deep submergence of the cylinder, the orbital nature of the wave motion results in multiple sites of vortex development, i.e., onset of vorticity concentrations, along the surface of the cylinder, followed by distinctive types of shedding from the cylinder. All of these concentrations of vorticity then exhibit orbital motion about the cylinder. Their contributions to the instantaneous values of the force coefficients are assessed by calculating moments of vorticity. It is shown that large contributions to the moments and their rate of change with time can occur for those vorticity concentrations having relatively small amplitude orbital trajectories. In a limiting case, collision with the surface of the cylinder can occur. Such vortex-cylinder interactions exhibit abrupt changes in the streamline topology during the wave cycle, including abrupt switching of the location of saddle points in the wave. The effect of nominal depth of submergence of the cylinder is characterized in terms of the time history of patterns of vorticity generated from the cylinder and the free surface. Generally speaking, generic types of vorticity concentrations are formed from the cylinder during the cycle of the wave motion for all values of submergence. The proximity of the free surface, however, can exert a remarkable influence on the initial formation, the eventual strength, and the subsequent motion of concentrations of vorticity. For sufficiently shallow submergence, large-scale vortex formation from the upper surface of the cylinder is inhibited and, in contrast, that from the lower surface of the cylinder is intensified. 
Moreover, decreasing the depth of submergence retards the orbital migration of previously shed concentrations of vorticity about the cylinder.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
An Algorithm for the Calculation of Exact Term Discrimination Values.
ERIC Educational Resources Information Center
Willett, Peter
1985-01-01
Reports algorithm for calculation of term discrimination values that is sufficiently fast in operation to permit use of exact values. Evidence is presented to show that relationship between term discrimination and term frequency is crucially dependent upon type of inter-document similarity measure used for calculation of discrimination values. (13…
ERIC Educational Resources Information Center
McNamee, Mike
1990-01-01
Charities have an obligation to give donors "accurate and sufficient information concerning the deductibility of contributions." Donors must subtract any benefit of "substantial value" from their gifts. The value of a benefit is based on its fair market value, not on its cost to the charity. (MLW)
Measuring Teaching Quality and Student Engagement in South Korea and The Netherlands
ERIC Educational Resources Information Center
van de Grift, Wim J. C. M.; Chun, Seyeoung; Maulana, Ridwan; Lee, Okhwa; Helms-Lorenz, Michelle
2017-01-01
Six observation scales for measuring the skills of teachers and 1 scale for measuring student engagement, assessed in South Korea and The Netherlands, are sufficiently reliable and offer sufficient predictive value for student engagement. A multigroup confirmatory factor analysis shows that the factor loadings and intercepts of the scales are the…
15 CFR 2007.8 - Other reviews of article eligibilities.
Code of Federal Regulations, 2010 CFR
2010-01-01
... “sufficiently competitive” to warrant a reduced competitive need limit. Those articles determined to be “sufficiently competitive” will be subject to a new lower competitive need limit set at 25 percent of the value... articles will continue to be subject to the original competitive need limits of 50 percent or $25 million...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hosking, Jonathan R. M.; Natarajan, Ramesh
The computer creates a utility demand forecast model for weather parameters by receiving a plurality of utility parameter values, wherein each received utility parameter value corresponds to a weather parameter value; determining that a range of weather parameter values lacks a sufficient amount of corresponding received utility parameter values; determining one or more utility parameter values that correspond to that range of weather parameter values; and creating a model which correlates the received and the determined utility parameter values with the corresponding weather parameter values.
7 CFR 766.113 - Buyout of loan at current market value.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Conservation Contract, if requested; (5) The present value of the restructured loans is less than the net... have non-essential assets for which the net recovery value is sufficient to pay the account current; (4... 7 Agriculture 7 2011-01-01 2011-01-01 false Buyout of loan at current market value. 766.113...
7 CFR 766.113 - Buyout of loan at current market value.
Code of Federal Regulations, 2014 CFR
2014-01-01
... Conservation Contract, if requested; (5) The present value of the restructured loans is less than the net... have non-essential assets for which the net recovery value is sufficient to pay the account current; (4... 7 Agriculture 7 2014-01-01 2014-01-01 false Buyout of loan at current market value. 766.113...
7 CFR 766.113 - Buyout of loan at current market value.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Conservation Contract, if requested; (5) The present value of the restructured loans is less than the net... have non-essential assets for which the net recovery value is sufficient to pay the account current; (4... 7 Agriculture 7 2013-01-01 2013-01-01 false Buyout of loan at current market value. 766.113...
7 CFR 766.113 - Buyout of loan at current market value.
Code of Federal Regulations, 2012 CFR
2012-01-01
... Conservation Contract, if requested; (5) The present value of the restructured loans is less than the net... have non-essential assets for which the net recovery value is sufficient to pay the account current; (4... 7 Agriculture 7 2012-01-01 2012-01-01 false Buyout of loan at current market value. 766.113...
Electrohydrodynamically driven large-area liquid ion sources
Pregenzer, Arian L.
1988-01-01
A large-area liquid ion source comprises means for generating, over a large area of the surface of a liquid, an electric field of a strength sufficient to induce emission of ions from a large area of said liquid. Large areas in this context are those distinct from emitting areas in unidimensional emitters.
An Integrative Account of Constraints on Cross-Situational Learning
Yurovsky, Daniel; Frank, Michael C.
2015-01-01
Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
Analyzing capture zone distributions (CZD) in growth: Theory and applications
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.; Pimpinelli, Alberto; Luis González, Diego
2014-09-01
We have argued that the capture-zone distribution (CZD) in submonolayer growth can be well described by the generalized Wigner distribution (GWD) P(s) = a s^β exp(-b s^2), where s is the CZ area divided by its average value. This approach offers arguably the most robust (least sensitive to mass transport) method to find the critical nucleus size i, since β ≈ i + 2. Various analytical and numerical investigations, which we discuss, show that although the simple GWD expression is inadequate in the tails of the distribution, it does account well for the central regime 0.5 < s < 2, where the data are sufficiently plentiful to be reliably accessible experimentally. We summarize and catalog the many experiments in which this method has been applied.
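A minimal sketch of the GWD, assuming the standard constraints that fix the constants: a and b are determined by normalization and unit mean ⟨s⟩ = 1 (so the distribution has no free parameters besides β):

```python
import math

def gwd(s, beta):
    """Generalized Wigner distribution P(s) = a * s**beta * exp(-b * s**2),
    with a, b fixed by normalization and unit mean <s> = 1:
      b = [Gamma((beta+2)/2) / Gamma((beta+1)/2)]**2
      a = 2 * b**((beta+1)/2) / Gamma((beta+1)/2)"""
    b = (math.gamma((beta + 2) / 2) / math.gamma((beta + 1) / 2)) ** 2
    a = 2 * b ** ((beta + 1) / 2) / math.gamma((beta + 1) / 2)
    return a * s ** beta * math.exp(-b * s ** 2)

# Numerical check of normalization and mean for beta = 4 (i.e. i = 2):
ds = 1e-4
grid = [k * ds for k in range(1, 100000)]   # s from ~0 to 10
norm = sum(gwd(s, 4.0) for s in grid) * ds
mean = sum(s * gwd(s, 4.0) for s in grid) * ds
```

With β estimated by fitting the central regime 0.5 < s < 2 of measured CZDs, the critical nucleus size follows as i ≈ β - 2.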
Deformation of fluctuating chiral ribbons
NASA Astrophysics Data System (ADS)
Panyukov, Sergey
2003-03-01
We find an analytical solution of the model of a fluctuating filament with a spontaneously twisted noncircular cross section in the presence of external force and torque. We show that when such a ribbon is subjected to a sufficiently strong extensional force, it exhibits an asymmetric response to large degrees of overwinding and unwinding. We construct the stability diagram that describes the buckling transition of such ribbons under the opposing action of force and torque and show that all the observed behaviors can be understood in terms of continuous transformations between straight and spiral states of the ribbon. The relation between our results and experimental observations on DNA is discussed and a new reentrant spiral to rod transition is predicted at intermediate values of twist rigidity and applied force.
Knot probability of polygons subjected to a force: a Monte Carlo study
NASA Astrophysics Data System (ADS)
Janse van Rensburg, E. J.; Orlandini, E.; Tesi, M. C.; Whittington, S. G.
2008-01-01
We use Monte Carlo methods to study the knot probability of lattice polygons on the cubic lattice in the presence of an external force f. The force is coupled to the span of the polygons along a lattice direction, say the z-direction. If the force is negative polygons are squeezed (the compressive regime), while positive forces tend to stretch the polygons along the z-direction (the tensile regime). For sufficiently large positive forces we verify that the Pincus scaling law in the force-extension curve holds. At a fixed number of edges n the knot probability is a decreasing function of the force. For a fixed force the knot probability approaches unity as 1 - exp(-α0(f)n + o(n)), where α0(f) is positive and a decreasing function of f. We also examine the average of the absolute value of the writhe and we verify the square root growth law (known for f = 0) for all values of f.
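The quoted large-n law implies a simple knotting-probability curve; the α0 values below are hypothetical, chosen only to illustrate the monotonicity the abstract reports (stretching, i.e. larger f, gives smaller α0 and hence a lower knot probability at fixed n):

```python
import math

def knot_probability(n, alpha0):
    """Knotting probability of an n-edge polygon from the scaling law
    P_knot(n) ~ 1 - exp(-alpha0 * n), dropping the o(n) correction.
    alpha0 > 0 depends on the force f and decreases as f grows."""
    return 1.0 - math.exp(-alpha0 * n)

# Hypothetical alpha0 values for a compressed vs a stretched polygon:
p_compressed = knot_probability(10000, 5e-5)  # f < 0, larger alpha0
p_stretched = knot_probability(10000, 1e-5)   # f > 0, smaller alpha0
```

In either regime the knot probability still tends to 1 as n grows, consistent with the result that knotting becomes certain for sufficiently long polygons.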
Geological implications of a permeability-depth curve for the continental crust
Ingebritsen, S.E.; Manning, C.E.
1999-01-01
The decrease in permeability (k) of the continental crust with depth (z), as constrained by geothermal data and calculated fluid flux during metamorphism, is given by log k = -14 - 3.2 log z, where k is in meters squared and z is in kilometers. At moderate to great crustal depths (>~5 km), this curve is defined mainly by data from prograde metamorphic systems, and is thus applicable to orogenic belts where the crust is being thickened and/or heated; lower permeabilities may occur in stable cratonic regions. This k-z relation implies that typical metamorphic fluid flux values of ~10^-11 m/s are consistent with fluid pressures significantly above hydrostatic values. The k-z curve also predicts that metamorphic CO2 flux from large orogens may be sufficient to cause significant climatic effects, if retrograde carbonation reactions are minimal, and suggests a significant capacity for diffuse degassing of Earth (10^15-10^16 g/yr) in tectonically active regions.
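Evaluating the fitted curve is a one-liner; this sketch simply plugs depths into log k = -14 - 3.2 log z:

```python
import math

def permeability_m2(depth_km):
    """Crustal permeability (m^2) from the fitted curve
    log10(k) = -14 - 3.2 * log10(z), with z in kilometers."""
    return 10 ** (-14 - 3.2 * math.log10(depth_km))

# At 1 km depth the curve gives k = 1e-14 m^2;
# each decade of depth costs a factor of 10**3.2 in permeability.
k1 = permeability_m2(1.0)
k10 = permeability_m2(10.0)
```

Note the curve is a fit to orogenic settings; as the abstract cautions, stable cratons may sit below it.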
Ardham, Vikram Reddy; Leroy, Frédéric
2017-10-21
Coarse-grained models have increasingly been used in large-scale particle-based simulations. However, due to their lack of degrees of freedom, it is a priori unlikely that they straightforwardly represent thermal properties with the same accuracy as their atomistic counterparts. We take a first step in addressing the impact of liquid coarse-graining on interfacial heat conduction by showing that an atomistic and a coarse-grained model of water may yield similar values of the Kapitza conductance on few-layer graphene with interactions ranging from hydrophobic to mildly hydrophilic. By design the water models employed yield similar liquid layer structures on the graphene surfaces. Moreover, they share common vibration properties close to the surfaces and thus couple with the vibrations of graphene in a similar way. These common properties explain why they yield similar Kapitza conductance values despite their bulk thermal conductivity differing by more than a factor of two.
Holography and thermalization in optical pump-probe spectroscopy
NASA Astrophysics Data System (ADS)
Bagrov, A.; Craps, B.; Galli, F.; Keränen, V.; Keski-Vakkuri, E.; Zaanen, J.
2018-04-01
Using holography, we model experiments in which a 2+1-D strange metal is pumped by a laser pulse into a highly excited state, after which the time evolution of the optical conductivity is probed. We consider a finite-density state with mildly broken translation invariance and excite it by oscillating electric field pulses. At zero density, the optical conductivity would assume its thermalized value immediately after the pumping has ended. At finite density, pulses with significant dc components give rise to slow exponential relaxation, governed by a vector quasinormal mode. In contrast, for high-frequency pulses the amplitude of the quasinormal mode is strongly suppressed, so that the optical conductivity assumes its thermalized value effectively instantaneously. This surprising prediction may provide a stimulus for taking up the challenge to realize these experiments in the laboratory. Such experiments would test a crucial open question faced by applied holography: are its predictions artifacts of the large N limit or do they enjoy sufficient UV independence to hold at least qualitatively in real-world systems?
Alternative Attitude Commanding and Control for Precise Spacecraft Landing
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2004-01-01
A report proposes an alternative method of control for precision landing on a remote planet. In the traditional method, the attitude of a spacecraft is required to track a commanded translational acceleration vector, which is generated at each time step by solving a two-point boundary value problem. No requirement of continuity is imposed on the acceleration. The translational acceleration does not necessarily vary smoothly. Tracking of a non-smooth acceleration causes the vehicle attitude to exhibit undesirable transients and poor pointing stability behavior. In the alternative method, the two-point boundary value problem is not solved at each time step. A smooth reference position profile is computed. The profile is recomputed only when the control errors get sufficiently large. The nominal attitude is still required to track the smooth reference acceleration command. A steering logic is proposed that controls the position and velocity errors about the reference profile by perturbing the attitude slightly about the nominal attitude. The overall pointing behavior is therefore smooth, greatly reducing the degree of pointing instability.
Giant current fluctuations in an overheated single-electron transistor
NASA Astrophysics Data System (ADS)
Laakso, M. A.; Heikkilä, T. T.; Nazarov, Yuli V.
2010-11-01
Interplay of cotunneling and single-electron tunneling in a thermally isolated single-electron transistor leads to peculiar overheating effects. In particular, there is an interesting crossover interval where the competition between cotunneling and single-electron tunneling changes to the dominance of the latter. In this interval, the current exhibits anomalous sensitivity to the effective electron temperature of the transistor island and its fluctuations. We present a detailed study of the current and temperature fluctuations at this interesting point. The methods implemented allow for a complete characterization of the distribution of the fluctuating quantities, well beyond the Gaussian approximation. We reveal and explore the parameter range where, for sufficiently small transistor islands, the current fluctuations become gigantic. In this regime, the optimal value of the current, its expectation value, and its standard deviation differ from each other by parametrically large factors. This situation is unique for transport in nanostructures and for electron transport in general. The origin of this spectacular effect is the exponential sensitivity of the current to the fluctuating effective temperature.
Tremor activity inhibited by well-drained conditions above a megathrust
Nakajima, Junichi; Hasegawa, Akira
2016-01-01
Tremor occurs on megathrusts under conditions of near-lithostatic pore-fluid pressures and extremely weakened shear strengths. Although metamorphic reactions in the slab liberate large amounts of fluids, the mechanism for enhancing pore-fluid pressures along the megathrust to near-lithostatic values remains poorly understood. Here we show anti-correlation between low-frequency earthquake (LFE) activity and properties that are markers of the degree of metamorphism above the megathrust, whereby LFEs occur beneath the unmetamorphosed overlying plate but are rare or limited below portions that are metamorphosed. The extent of metamorphism in the overlying plate is likely controlled by along-strike contrasts in permeability. Undrained conditions are required for pore-fluid pressures to be enhanced to near-lithostatic values and for shear strength to reduce sufficiently for LFE generation, whereas well-drained conditions reduce pore-fluid pressures at the megathrust and LFEs no longer occur at the somewhat strengthened megathrust. Our observations suggest that undrained conditions are a key factor for the genesis of LFEs. PMID:27991588
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anghileri, Daniela; Voisin, Nathalie; Castelletti, Andrea F.
In this study, we develop a forecast-based adaptive control framework for Oroville reservoir, California, to assess the value of seasonal and inter-annual forecasts for reservoir operation. We use an Ensemble Streamflow Prediction (ESP) approach to generate retrospective, one-year-long streamflow forecasts based on the Variable Infiltration Capacity hydrology model. The optimal sequence of daily release decisions from the reservoir is then determined by Model Predictive Control, a flexible and adaptive optimization scheme. We assess the forecast value by comparing system performance based on the ESP forecasts with that based on climatology and a perfect forecast. In addition, we evaluate system performance based on a synthetic forecast, which is designed to isolate the contribution of seasonal and inter-annual forecast skill to the overall value of the ESP forecasts. Using the same ESP forecasts, we generalize our results by evaluating forecast value as a function of forecast skill, reservoir features, and demand. Our results show that perfect forecasts are valuable when the water demand is high and the reservoir is sufficiently large to allow for annual carry-over. Conversely, ESP forecast value is highest when the reservoir can shift water on a seasonal basis. On average, for the system evaluated here, the overall ESP value is 35% less than the perfect forecast value. The inter-annual component of the ESP forecast contributes 20-60% of the total forecast value. Improvements in the seasonal component of the ESP forecast would increase the overall ESP forecast value between 15 and 20%.
Hydrophobic properties of a wavy rough substrate.
Carbone, G; Mangialardi, L
2005-01-01
The wetting/non-wetting properties of a liquid drop in contact with a chemically hydrophobic rough surface (thermodynamic contact angle θ_e > π/2) are studied for the case of an extremely idealized rough profile: the liquid drop is considered to lie on a simple sinusoidal profile. Depending on surface geometry and pressure values, it is found that the Cassie and Wenzel states can coexist. But if the amplitude h of the substrate is sufficiently large, the only possible stable state is the Cassie one, whereas if h is below a certain critical value h_cr a transition to the Wenzel state occurs. Since in many potential applications of such super-hydrophobic surfaces liquid drops often collide with the substrate (e.g. vehicle windscreens), the critical drop pressure p_W is calculated at which the Cassie state is no longer stable and the liquid jumps into full contact with the substrate (Wenzel state). By analyzing the asymptotic behavior of the system in the limiting case of large substrate corrugation, a simple criterion is also proposed to calculate the minimum asperity height h necessary to prevent the Wenzel state from forming, to preserve the super-hydrophobic properties of the substrate, and, hence, to design a robust super-hydrophobic surface.
2014-01-01
Background Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Results Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation. This allows on-the-fly calculation of the normal distribution for any candidate sequence composition. Conclusion The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this particular property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification. PMID:24418292
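For a single candidate, the P-value described in this abstract reduces to evaluating the candidate's MFE against the pre-computed normal distribution for its sequence composition. A minimal sketch, with hypothetical distribution parameters (the real mean and standard deviation would come from the interpolated pre-calculated distributions):

```python
from statistics import NormalDist

def mfe_p_value(mfe: float, mu: float, sigma: float) -> float:
    """P-value of a candidate's minimal free energy (MFE) under the
    assumed normal distribution of MFEs for randomized sequences of
    the same nucleotide composition; genuine pre-miRNAs tend to have
    an MFE far below the mean, hence a small P-value."""
    return NormalDist(mu=mu, sigma=sigma).cdf(mfe)

# hypothetical candidate: MFE three standard deviations below the mean
print(mfe_p_value(-45.0, mu=-30.0, sigma=5.0))  # about 0.00135
```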
NASA Astrophysics Data System (ADS)
Konrad, C. P.; Olden, J.
2013-12-01
Dams impose a host of impacts on freshwater and estuary ecosystems. In recent decades, dam releases for ecological outcomes have been increasingly implemented to mitigate these impacts and are gaining global scope. Many are designed and conducted using an experimental framework. A recent review of large-scale flow experiments (FE) evaluates their effectiveness and identifies ways to enhance their scientific and management value. At least 113 large-scale flow experiments affecting 98 river systems globally have been documented over the last 50 years. These experiments span a range of flow manipulations from single pulse events to comprehensive changes in flow regime across all seasons and different water year types. Clear articulation of experimental objectives, while not universally practiced, was crucial for achieving management outcomes and changing dam operating policies. We found a strong disparity between the recognized ecological importance of multi-faceted flow regimes and the discrete flow events that characterized 80% of FEs. Over three quarters of FEs documented both abiotic and biotic outcomes, but only one third examined multiple trophic groups, thus limiting how this information informs future dam management. Large-scale flow experiments represent a unique opportunity for integrated biophysical investigations for advancing ecosystem science. Nonetheless, they must remain responsive to site-specific issues regarding water management, evolving societal values and changing environmental conditions and, in particular, can characterize the incremental benefits from and necessary conditions for changing dam operations to improve ecological outcomes. This type of information is essential for understanding the full context of value-based trade-offs in benefits and costs from different dam operations that can serve as an empirical basis for societal decisions regarding water and ecosystem management.
FE may be the best approach available to managers for resolving critical uncertainties that impede decision making in adaptive settings, for example, when we lack sufficient understanding to model biophysical responses to alternative operations. Integrated long-term monitoring of biotic and abiotic responses and defining clear management-based objectives highlight ways for improving the efficiency and value of FEs.
Samuel A. Cushman; Erin L. Landguth; Curtis H. Flather
2012-01-01
Aim: The goal of this study was to evaluate the sufficiency of the network of protected lands in the U.S. northern Rocky Mountains in providing protection for habitat connectivity for 105 hypothetical organisms. A large proportion of the landscape...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...
Numerical study of heterogeneous mean temperature and shock wave in a resonator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yano, Takeru
2015-10-28
When the frequency of gas oscillation in an acoustic resonator is sufficiently close to one of the resonant frequencies of the resonator, the amplitude of the gas oscillation becomes large and hence the nonlinear effect manifests itself. Then, if the dissipation effects due to viscosity and thermal conductivity of the gas are sufficiently small, the gas oscillation may evolve into an acoustic shock wave, in the so-called consonant resonators. At the shock front, the kinetic energy of gas oscillation is converted into heat by the dissipation process inside the shock layer, and therefore the temperature of the gas in the resonator rises. Since the acoustic shock wave travels in the resonator repeatedly, the temperature rise becomes noticeable in due course even if the shock wave is weak. We numerically study the gas oscillation with a shock wave in a resonator of square cross section by solving the initial and boundary value problem for the system of three-dimensional Navier-Stokes equations with a finite difference method. In this case, the heat conduction across the boundary layer on the wall of the resonator causes a spatially heterogeneous distribution of mean (time-averaged) gas temperature.
Concept for maritime near-surface surveillance using water Raman scattering
Shokair, Isaac R.; Johnson, Mark S.; Schmitt, Randal L.; ...
2018-06-08
Here, we discuss a maritime surveillance and detection concept based on Raman scattering of water molecules. Using a range-gated scanning lidar that detects Raman scattered photons from water, the absence or change of signal indicates the presence of a non-water object. With sufficient spatial resolution, a two-dimensional outline of the object can be generated by the scanning lidar. Because Raman scattering is an inelastic process with a relatively large wavelength shift for water, this concept avoids the often problematic elastic scattering for objects at or very close to the water surface or from the bottom surface for shallow waters. The maximum detection depth for this concept is limited by the attenuation of the excitation and return Raman light in water. If excitation in the UV is used, fluorescence can be used for discrimination between organic and non-organic objects. In this paper, we present a lidar model for this concept and discuss results of proof-of-concept measurements. Using published cross section values, the model and measurements are in reasonable agreement and show that a sufficient number of Raman photons can be generated for modest lidar parameters to make this concept useful for near-surface detection.
Plasma Source Development for LAPD
NASA Astrophysics Data System (ADS)
Pribyl, P.; Gekelman, W.; Drandell, M.; Grunspen, S.; Nakamoto, M.; McBarron, A.
2003-10-01
The Large Plasma Device (LAPD) relies on an indirectly heated Barium Oxide (BaO) cathode to generate an extremely repeatable low-noise plasma. However, this system has two drawbacks: first, the cathode is subject to oxygen poisoning in the event of accidental air leaks, requiring a lengthy recoating and regeneration process; second, the indirect radiative heating is only about 50% efficient, leading to a series of reliability issues. Alternate plasma sources are being investigated, including two types of directly heated BaO cathode and several configurations of inductively coupled RF plasmas. Direct heating for a cathode can be achieved either by embedding heaters within the nickel substrate, or by using inductive heating techniques to drive currents within the nickel itself. In both cases, the BaO coating still serves to emit the electrons and thus generate the plasma arc. An improved system would generate the plasma without the use of a "cathode", e.g. by inductively coupling energy directly into the plasma discharge. This technique is being investigated from the point of view of whether a) the bulk of the plasma column can be made sufficiently low-noise to be of experimental value and b) sufficiently dense plasmas can be formed.
Fragmentation of protoplanetary discs around M-dwarfs
NASA Astrophysics Data System (ADS)
Backus, Isaac; Quinn, Thomas
2016-12-01
We investigate the conditions required for planet formation via gravitational instability (GI) and protoplanetary disc (PPD) fragmentation around M-dwarfs. Using a suite of 64 SPH simulations with 10⁶ particles, the parameter space of disc mass, temperature, and radius is explored, bracketing reasonable values based on theory and observation. Our model consists of an equilibrium, gaseous, and locally isothermal disc orbiting a central star of mass M* = M⊙/3. Discs with a minimum Toomre Q of Q_min ≲ 0.9 will fragment and form gravitationally bound clumps. Some previous literature has found Q_min < 1.3-1.5 to be sufficient for fragmentation. Increasing disc height tends to stabilize discs, and when incorporated into Q as Q_eff ∝ Q(H/R)^α for α = 0.18, Q_eff is sufficient to predict fragmentation. Some discrepancies in the literature regarding Q_crit may be due to different methods of generating initial conditions (ICs). A series of 15 simulations demonstrates that perturbing ICs slightly out of equilibrium can cause discs to fragment for higher Q. Our method for generating ICs is presented in detail. We argue that GI likely plays a role in PPDs around M-dwarfs and that disc fragmentation at large radii is a plausible outcome for these discs.
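The stability criteria quoted in this abstract can be sketched as follows; the function names and the unit proportionality constant in Q_eff are assumptions, while the threshold of about 0.9 and α = 0.18 are taken from the abstract:

```python
def will_fragment(q_min: float) -> bool:
    """Fragmentation criterion reported in the abstract: discs with a
    minimum Toomre Q below about 0.9 fragment into bound clumps."""
    return q_min < 0.9

def effective_toomre_q(q: float, h_over_r: float, alpha: float = 0.18) -> float:
    """Height-corrected stability parameter Q_eff, proportional to
    Q * (H/R)**alpha; the unit proportionality constant here is a
    hypothetical normalization."""
    return q * h_over_r ** alpha
```

Thicker discs (larger H/R) get a larger Q_eff, reflecting the stabilizing effect of disc height described in the abstract.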
Simulation of FRET dyes allows quantitative comparison against experimental data
NASA Astrophysics Data System (ADS)
Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander
2018-03-01
Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies which quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other leading to new insights into biomolecular dynamics and function.
Effect of normalized plasma frequency on electron phase-space orbits in a free-electron laser
NASA Astrophysics Data System (ADS)
Ji, Yu-Pin; Wang, Shi-Jian; Xu, Jing-Yue; Xu, Yong-Gen; Liu, Xiao-Xu; Lu, Hong; Huang, Xiao-Li; Zhang, Shi-Chang
2014-02-01
Irregular phase-space orbits of the electrons are harmful to the electron-beam transport quality and hence deteriorate the performance of a free-electron laser (FEL). In previous literature, it was demonstrated that the irregularity of the electron phase-space orbits could be caused in several ways, such as varying the wiggler amplitude and inducing sidebands. Based on a Hamiltonian model with a set of self-consistent differential equations, it is shown in this paper that the electron-beam normalized plasma frequency functions not only couple the electron motion with the FEL wave, which results in the evolution of the FEL wave field and a possible power saturation at a large beam current, but also cause the irregularity of the electron phase-space orbits when the normalized plasma frequency has a sufficiently large value, even if the initial energy of the electron is equal to the synchronous energy or the FEL wave does not reach power saturation.
Gautier, D.L.
1981-01-01
In the northern Great Plains, large quantities of biogenic methane are contained at shallow depths in Cretaceous marine mudstones. The Gammon Shale and equivalents of the Milk River Formation in Canada are typical. At Little Missouri field, Gammon reservoirs consist of discontinuous lenses and laminae of siltstone, enclosed by silty clay shale. Large amounts of allogenic clay, including highly expansible mixed-layer illite-smectite, cause great water sensitivity and high water-saturation values. Studies show that the Gammon has not undergone thermal conditions sufficient for oil or thermal gas generation. The scarcity of authigenic silicates suggests that diagenesis has been inhibited by the presence of free methane. Shale layers are practically impermeable, whereas siltstone microlenses are porous (30-40%) and have permeabilities on the order of 3-30 md. Organic matter in the low-permeability reservoirs served as the source of biogenic methane, and capillary forces acted as the trapping mechanism for gas accumulation. Much of the Gammon interval is potentially economic.
Risk and Protective Factors Influencing Life Skills among Youths in Long-Term Foster Care.
ERIC Educational Resources Information Center
Nollan, K. A.; Pecora, P. J.; Nurius, P. N.; Whittaker, J. K.
2002-01-01
Examined through mail surveys of youth, parents, and social workers the predictive value of selected risk and protective factors in explaining self-sufficiency skills of 219 ethnically diverse 12- to 15-year-olds in foster care. Found that protective factors related to greater self-sufficiency skills, and risk factors were negatively associated.…
ERIC Educational Resources Information Center
Nord, Derek; Luecking, Richard; Mank, David; Kiernan, William; Wray, Christina
2013-01-01
Employment, career advancement, and financial independence are highly valued in the United States. As expectations, they are often instilled at a young age and incentivized throughout adulthood. Despite their importance, employment and economic sufficiency continue to be out of reach for most people with intellectual and developmental disabilities…
Hand coverage by alcohol-based handrub varies: Volume and hand size matter.
Zingg, Walter; Haidegger, Tamas; Pittet, Didier
2016-12-01
Visitors of an infection prevention and control conference performed hand hygiene with 1, 2, or 3 mL ultraviolet light-traced alcohol-based handrub. Coverage of palms, dorsums, and fingertips were measured by digital images. Palms of all hand sizes were sufficiently covered when 2 mL was applied, dorsums of medium and large hands were never sufficiently covered. Palmar fingertips were sufficiently covered when 2 or 3 mL was applied, and dorsal fingertips were never sufficiently covered. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Veneziano, D.; Langousis, A.; Lepore, C.
2009-12-01
The annual maximum of the average rainfall intensity in a period of duration d, I_year(d), is typically assumed to have a generalized extreme value (GEV) distribution. The shape parameter k of that distribution is especially difficult to estimate from either at-site or regional data, making it important to constrain k using theoretical arguments. In the context of multifractal representations of rainfall, we observe that standard theoretical estimates of k from extreme value (EV) and extreme excess (EE) theories do not apply, while estimates from large deviation (LD) theory hold only for very small d. We then propose a new theoretical estimator based on fitting GEV models to the numerically calculated distribution of I_year(d). A standard result from EV and EE theories is that k depends on the tail behavior of the average rainfall in d, I(d). This result holds if I_year(d) is the maximum of a sufficiently large number n of variables, all distributed like I(d); therefore its applicability hinges on whether n = 1 yr/d is large enough and the tail of I(d) is sufficiently well known. One typically assumes that at least for small d the former condition is met, but poor knowledge of the upper tail of I(d) remains an obstacle for all d. In fact, in the case of multifractal rainfall, the first condition is also not met because, irrespective of d, 1 yr/d is too small (Veneziano et al., 2009, WRR, in press). Applying large deviation (LD) theory to this multifractal case, we find that, as d → 0, I_year(d) approaches a GEV distribution whose shape parameter k_LD depends on a region of the distribution of I(d) well below the upper tail, is always positive (in the EV2 range), is much larger than the value predicted by EV and EE theories, and can be readily found from the scaling properties of I(d). The scaling properties of rainfall can be inferred also from short records, but the limitation remains that the result holds as d → 0, not for finite d.
Therefore, for different reasons, none of the above asymptotic theories applies to I_year(d). In practice, one is interested in the distribution of I_year(d) over a finite range of averaging durations d and return periods T. Using multifractal representations of rainfall, we have numerically calculated the distribution of I_year(d) and found that, although not GEV, the distribution can be accurately approximated by a GEV model. The best-fitting parameter k depends on d, but is insensitive to the scaling properties of rainfall and the range of return periods T used for fitting. We have obtained a default expression for k(d) and compared it with estimates from historical rainfall records. The theoretical function tracks well the empirical dependence on d, although it generally overestimates the empirical k values, possibly due to deviations of rainfall from perfect scaling. This issue is under investigation.
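The GEV fitting discussed in this abstract rests on the standard GEV distribution function. A minimal sketch for shape k > 0 (the EV2 range the authors report for rainfall maxima), with the return-level formula obtained by inverting the CDF; the parameter values used below are hypothetical:

```python
import math

def gev_cdf(x: float, mu: float, sigma: float, k: float) -> float:
    """CDF of the GEV distribution for shape k > 0 (heavy-tailed
    EV2/Frechet range): F(x) = exp(-(1 + k*(x-mu)/sigma)**(-1/k))."""
    t = 1.0 + k * (x - mu) / sigma
    if t <= 0.0:
        return 0.0  # below the lower support bound when k > 0
    return math.exp(-t ** (-1.0 / k))

def gev_return_level(T_years: float, mu: float, sigma: float, k: float) -> float:
    """Intensity exceeded on average once every T years, obtained by
    solving F(x) = 1 - 1/T for x."""
    y = -math.log(1.0 - 1.0 / T_years)
    return mu + (sigma / k) * (y ** (-k) - 1.0)

# hypothetical parameters: the 100-year intensity for mu=10, sigma=5, k=0.1
x100 = gev_return_level(100.0, 10.0, 5.0, 0.1)
```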
OXIDATION OF TRANSURANIC ELEMENTS
Moore, R.L.
1959-02-17
A method is reported for oxidizing neptunium or plutonium in the presence of cerous values without also oxidizing the cerous values. The method consists in treating an aqueous 1N nitric acid solution, containing such cerous values together with the trivalent transuranic elements, with a quantity of hydrogen peroxide stoichiometrically sufficient to oxidize the transuranic values to the hexavalent state, and digesting the solution at room temperature.
NASA Astrophysics Data System (ADS)
Nampally, Subhadra; Padhy, Simanchal; Dimri, Vijay P.
2018-01-01
The nature of the spatial distribution of heterogeneities in the source area of the 2015 Nepal earthquake is characterized based on the seismic b-value and fractal analysis of its aftershocks. The earthquake size distribution of aftershocks gives a b-value of 1.11 ± 0.08, possibly representing the highly heterogeneous and low stress state of the region. The aftershocks exhibit a fractal structure characterized by a spectrum of generalized dimensions, Dq, varying from D2 = 1.66 to D22 = 0.11. The existence of a fractal structure suggests that the spatial distribution of aftershocks is not a random phenomenon, but self-organizes into a critical state, exhibiting a scale-independent structure governed by a power-law scaling, where a small perturbation in stress is sufficient to trigger aftershocks. In order to obtain the bias in fractal dimensions resulting from finite data size, we compared the multifractal spectrum for the real data and random simulations. On comparison, we found that the lower limit of bias in D2 is 0.44. The similarity in their multifractal spectra suggests a lack of long-range correlation in the data, with the data only weakly multifractal or a monofractal with a single correlation dimension D2. The minimum number of events required for a multifractal process with an acceptable error is discussed. We also tested for a possible correlation between changes in D2 and the energy released during the earthquakes. The values of D2 rise during the two largest earthquakes (M > 7.0) in the sequence. The b- and D2-values are related by D2 = 1.45b, which corresponds to intermediate to large earthquakes. Our results provide useful constraints on the spatial distribution of b- and D2-values, which are useful for seismic hazard assessment in the aftershock area of a large earthquake.
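The b-value and the reported D2 = 1.45b relation can be sketched as follows. The maximum-likelihood estimator shown is the standard Aki (1965) formula and is an assumption here, since the abstract does not state which estimation method was used:

```python
import math

def aki_b_value(mags, mc):
    """Maximum-likelihood b-value (Aki, 1965):
    b = log10(e) / (mean(M) - Mc) for magnitudes M >= Mc.
    An assumed estimator, not necessarily the one used in the study."""
    above = [m for m in mags if m >= mc]
    return math.log10(math.e) / (sum(above) / len(above) - mc)

def d2_from_b(b):
    """Empirical relation D2 = 1.45 * b reported in the abstract for
    intermediate to large earthquakes."""
    return 1.45 * b

# b = 1.11 for the Nepal aftershocks gives D2 of about 1.61
print(d2_from_b(1.11))
```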
Clinical applications of three-dimensional tortuosity metrics
NASA Astrophysics Data System (ADS)
Dougherty, Geoff; Johnson, Michael J.
2007-03-01
The measurement of abnormal vascular tortuosity is important in the diagnosis of many diseases. Metrics based on three-dimensional (3-D) curvature, using approximate polynomial spline-fitting to "data balls" centered along the mid-line of the vessel, minimize digitization errors and give tortuosity values largely independent of the resolution of the imaging system. To establish their clinical validity we applied them to a number of clinical vascular systems, using both 2-D datasets (standard angiograms and retinal images) and 3-D datasets (from computed tomography angiography (CTA) and magnetic resonance angiography (MRA)). Using the abdominal aortograms, we found that the metrics correlated well with the ranking of an expert panel of three vascular surgeons. Both the mean curvature and the root-mean-square curvature provided good discrimination between vessels of different tortuosity, and using a data-ball size of one-quarter of the local vessel radius in the spline fitting gave consistent results. Tortuous retinal vessels resulting from retinitis or diabetes, but not from vasculitis, could be distinguished from normal vessels. Metrics computed from 3-D datasets gave higher values than their 2-D projections, and could easily be implemented in automated measurement. They produced values sufficiently discriminating to assess the relative utility of arteries for endoluminal repair of aneurysms.
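The curvature-based tortuosity idea can be sketched with finite differences on a sampled 3-D curve; this is a minimal numpy illustration, not the authors' data-ball spline implementation, and the helix test curve is an assumption chosen because its curvature is known analytically:

```python
import numpy as np

def curvature_3d(points, t):
    """Pointwise curvature of a 3-D curve sampled at parameter values t:
    kappa = |r' x r''| / |r'|**3, with derivatives taken numerically."""
    d1 = np.gradient(points, t, axis=0)
    d2 = np.gradient(d1, t, axis=0)
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3

def tortuosity_metrics(points, t):
    """Mean curvature and root-mean-square curvature along the curve."""
    k = curvature_3d(points, t)
    return k.mean(), np.sqrt((k ** 2).mean())

# Helix with radius a and pitch parameter `pitch`:
# analytic curvature is a / (a**2 + pitch**2)
a, pitch = 1.0, 0.5
t = np.linspace(0, 4 * np.pi, 2000)
pts = np.column_stack([a * np.cos(t), a * np.sin(t), pitch * t])
mean_k, rms_k = tortuosity_metrics(pts, t)
print(round(mean_k, 3))   # analytic value: 1 / (1 + 0.25) = 0.8
```

For a constant-curvature curve the mean and RMS values coincide; for a real vessel the gap between them is itself a tortuosity indicator.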
Controllability of switched singular mix-valued logical control networks with constraints
NASA Astrophysics Data System (ADS)
Deng, Lei; Gong, Mengmeng; Zhu, Peiyong
2018-03-01
The present paper investigates the controllability problem of switched singular mix-valued logical control networks (SSMLCNs) with constraints on states and controls. First, using the semi-tensor product (STP) of matrices, the SSMLCN is expressed in an algebraic form, based on which a necessary and sufficient condition is given for the uniqueness of the solution of an SSMLCN. Second, a necessary and sufficient criterion is derived for the controllability of constrained SSMLCNs, by converting a constrained SSMLCN into a parallel constrained switched mix-valued logical control network. Third, an algorithm is presented to design a proper switching sequence and a control scheme that steer an initial state to a given reachable state. Finally, a numerical example is given to demonstrate the efficiency of the results obtained in this paper.
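The semi-tensor product used to derive the algebraic form can be sketched in a few lines; the Boolean AND example below is purely illustrative (the paper's networks are mix-valued and switched), but it shows how the STP turns a logical update into a matrix product:

```python
import numpy as np

def stp(A, B):
    """Semi-tensor product of matrices: with n = cols(A), p = rows(B) and
    t = lcm(n, p), A |x| B = (A kron I_{t/n}) @ (B kron I_{t/p}).
    Reduces to the ordinary product when n == p."""
    n, p = A.shape[1], B.shape[0]
    t = int(np.lcm(n, p))
    return np.kron(A, np.eye(t // n, dtype=A.dtype)) @ \
           np.kron(B, np.eye(t // p, dtype=B.dtype))

# Logical values as vectors: True = [1,0]^T, False = [0,1]^T
T = np.array([[1], [0]])
F = np.array([[0], [1]])
# Structure matrix of AND: columns give the result for (T,T),(T,F),(F,T),(F,F)
M_and = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1]])
out = stp(M_and, stp(T, F))   # AND(True, False)
print(out.ravel())            # [0 1], the vector encoding of False
```

In the paper this same mechanism, applied to mix-valued state vectors, converts the whole network dynamics into one linear-algebraic recursion on which controllability conditions can be checked.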
Are Young Muslims Adopting Australian Values?
ERIC Educational Resources Information Center
Kabir, Nahid Afrose
2008-01-01
Recently politicians in Australia have raised concerns that some Muslims are not adopting Australian values to a sufficient extent. In this paper I explore the notion of Australian values with respect to immigrant youth. By analysing interviews with 32 Muslim students who are 15-18 years of age and of diverse backgrounds in two state schools in…
NASA Astrophysics Data System (ADS)
Lokko, Mae-ling Jovenes
As global quantities of waste by-products from food production, as well as the range of their applications, increase, researchers are realizing critical opportunities to transform the burden of underutilized wastes into ecological profits. Within the tropical hot-humid region, where half the world's current and projected future population growth is concentrated, there is a dire demand for building materials to meet ambitious development schemes and rising housing deficits. However, the building sector has largely overlooked the potential of local agricultural wastes to serve as alternatives to energy-intensive, imported building technologies. Industrial ecologists have recently investigated the use of agrowaste biocomposites to replace conventional wood products that use harmful urea-formaldehyde, phenolic and isocyanate resins. Furthermore, developments in the performance of building material systems with respect to cost, energy, air quality management and construction innovation have evolved metrics about what constitutes material 'upcycling' within the building life cycle. While these developments have largely been focused on technical and cost performance, much less attention has been paid to addressing deeply seated social and cultural barriers to adoption that have sedimented over decades of importation. This dissertation evaluates the development of coconut agricultural building material systems in four phases: (i) non-toxic, low-energy production of medium-to-high-density boards (500-1200 kg/m3) from coconut fibers and emerging biobinders; (ii) characterization and evaluation of the hygrothermal performance of coconut agricultural building materials; (iii) scaled-up design development of coconut modular building material systems; and (iv) development of a value translation framework for the bottom-up distribution of value to stakeholders within the upcycling framework.
This integrated design methodological approach is significant to develop ecological thinking around agrowaste building materials, influence social and cultural acceptability and create value translation frameworks that sufficiently characterize the composite value proposition of upcycled building systems.
Hino, Kimihiro; Lee, Jung Su; Asami, Yasushi
2017-12-01
People's year-round interpersonal step count variations according to meteorological conditions are not fully understood, because complete year-round data from a sufficient sample of the general population are difficult to acquire. This study examined the associations between meteorological conditions and objectively measured step counts using year-round data collected from a large cohort (N = 24,625) in Yokohama, Japan from April 2015 to March 2016. Two-piece linear regression analysis was used to examine the associations between the monthly median daily step count and three meteorological indices (mean values of temperature, temperature-humidity index (THI), and net effective temperature (NET)). The number of steps per day peaked at temperatures between 19.4 and 20.7 °C. At lower temperatures, the increase in steps per day was between 46.4 and 52.5 steps per 1 °C increase. At temperatures higher than those at which step counts peaked, the decrease in steps per day was between 98.0 and 187.9 per 1 °C increase. Furthermore, these effects were more obvious in elderly than in non-elderly persons of both sexes. A similar tendency was seen when using THI and NET instead of temperature. Among the three meteorological indices, the highest R² value with step counts was observed with THI in all four groups. Both high and low meteorological indices discourage people from walking, and higher values of the indices adversely affect step count more than lower values, particularly among the elderly. Among the three indices assessed, THI best explains the seasonal fluctuations in step counts.
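A two-piece (segmented) linear regression of the kind used above can be sketched as a grid search over candidate breakpoints; the synthetic step-count data below are illustrative assumptions, not the Yokohama cohort:

```python
import numpy as np

def two_piece_fit(x, y, candidates):
    """Continuous two-segment linear fit: for each candidate breakpoint c,
    regress y on [1, x, max(x - c, 0)] and keep the least-SSE fit."""
    best = None
    for c in candidates:
        X = np.column_stack([np.ones_like(x), x, np.maximum(x - c, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = ((y - X @ coef) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, c, coef)
    sse, c, (b0, b1, b2) = best
    return c, b1, b1 + b2   # breakpoint, slope below it, slope above it

# Synthetic daily step counts: a gentle rise below ~20 C and a steeper
# decline above it, mimicking the pattern reported in the abstract
rng = np.random.default_rng(1)
temp = rng.uniform(0, 35, 400)
steps = 6000 + 50 * temp - 150 * np.maximum(temp - 20, 0) + rng.normal(0, 50, 400)
bp, slope_lo, slope_hi = two_piece_fit(temp, steps, np.arange(5, 30, 0.5))
print(round(bp, 1))          # near the planted breakpoint of 20
```

The asymmetry the study reports (a steeper decline above the peak than the rise below it) shows up here as |slope above| exceeding |slope below|.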
Cytosolic androgen receptor in regenerating rat levator ani muscle.
Max, S R; Mufti, S; Carlson, B M
1981-01-01
The development of the cytosolic androgen receptor was studied after degeneration and regeneration of the rat levator ani muscle following a crush lesion. Muscle regeneration appears to recapitulate myogenesis in many respects; it therefore provides a model tissue in sufficiently large quantity for investigating the ontogenesis of the androgen receptor. The receptor in the cytosol of the normal levator ani muscle has binding characteristics similar to those of the cytosolic receptor in other androgen-sensitive tissues. By day 3 after a crush lesion of the levator ani muscle, androgen binding decreased to 25% of control values. This decrease was followed by a 4-5-fold increase in hormone binding, which attained control values by day 7 after crush. Androgen binding remained stable at the control value up to day 60 after crushing. These results were correlated with the morphological development of the regenerating muscle after crushing. It is concluded that there is little, if any, androgen receptor present in the early myoblastic stages of regeneration; rather, synthesis of the receptor may occur after the fusion of myoblasts and during the differentiation of myotubes into cross-striated muscle fibres. PMID:6977357
Graphene-based composite materials.
Stankovich, Sasha; Dikin, Dmitriy A; Dommett, Geoffrey H B; Kohlhaas, Kevin M; Zimney, Eric J; Stach, Eric A; Piner, Richard D; Nguyen, SonBinh T; Ruoff, Rodney S
2006-07-20
Graphene sheets--one-atom-thick two-dimensional layers of sp2-bonded carbon--are predicted to have a range of unusual properties. Their thermal conductivity and mechanical stiffness may rival the remarkable in-plane values for graphite (approximately 3,000 W m(-1) K(-1) and 1,060 GPa, respectively); their fracture strength should be comparable to that of carbon nanotubes for similar types of defects; and recent studies have shown that individual graphene sheets have extraordinary electronic transport properties. One possible route to harnessing these properties for applications would be to incorporate graphene sheets in a composite material. The manufacturing of such composites requires not only that graphene sheets be produced on a sufficient scale but that they also be incorporated, and homogeneously distributed, into various matrices. Graphite, inexpensive and available in large quantity, unfortunately does not readily exfoliate to yield individual graphene sheets. Here we present a general approach for the preparation of graphene-polymer composites via complete exfoliation of graphite and molecular-level dispersion of individual, chemically modified graphene sheets within polymer hosts. A polystyrene-graphene composite formed by this route exhibits a percolation threshold of approximately 0.1 volume per cent for room-temperature electrical conductivity, the lowest reported value for any carbon-based composite except for those involving carbon nanotubes; at only 1 volume per cent, this composite has a conductivity of approximately 0.1 S m(-1), sufficient for many electrical applications. Our bottom-up chemical approach of tuning the graphene sheet properties provides a path to a broad new class of graphene-based materials and their use in a variety of applications.
Stark, Peter C.; Kuske, Cheryl R.; Mullen, Kenneth I.
2002-01-01
A method for quantitating dsDNA in an aqueous sample solution containing an unknown amount of dsDNA. A first aqueous test solution containing a known amount of a fluorescent dye-dsDNA complex and at least one fluorescence-attenuating contaminant is prepared. The fluorescence intensity of the test solution is measured. The first test solution is diluted by a known amount to provide a second test solution having a known concentration of dsDNA. The fluorescence intensity of the second test solution is measured. Additional diluted test solutions are similarly prepared until a sufficiently dilute test solution having a known amount of dsDNA is prepared that has a fluorescence intensity that is not attenuated upon further dilution. The value of the maximum absorbance of this solution between 200-900 nanometers (nm), referred to herein as the threshold absorbance, is measured. A sample solution having an unknown amount of dsDNA and an absorbance identical to that of the sufficiently dilute test solution at the same chosen wavelength is prepared. Dye is then added to the sample solution to form the fluorescent dye-dsDNA complex, after which the fluorescence intensity of the sample solution is measured and the quantity of dsDNA in the sample solution is determined. Once the threshold absorbance of a sample solution obtained from a particular environment has been determined, any similarly prepared sample solution taken from a similar environment and having the same value for the threshold absorbance can be quantified for dsDNA by adding a large excess of dye to the sample solution and measuring its fluorescence intensity.
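The serial-dilution search at the heart of the method can be sketched as a loop that dilutes until fluorescence scales linearly with concentration; the quenching model `measure` below is a hypothetical stand-in for a real fluorometer reading, not part of the patented method:

```python
def find_unattenuated_dilution(measure, dilution=2.0, tol=0.05, max_rounds=20):
    """Serial-dilution search: keep diluting until a further dilution
    changes the fluorescence by exactly the dilution factor, i.e. the
    contaminant no longer attenuates the signal. `measure(c)` returns
    the intensity at relative concentration c."""
    c = 1.0
    f_prev = measure(c)
    for _ in range(max_rounds):
        c /= dilution
        f = measure(c)
        # Linear regime: intensity drops by the dilution factor itself
        if abs(f_prev / f - dilution) / dilution < tol:
            return c * dilution   # previous concentration is already linear
        f_prev = f
    raise RuntimeError("no linear regime found within max_rounds")

# Toy attenuation model: a contaminant quenches the signal at high
# concentration but becomes negligible on dilution (hypothetical form)
def measure(c):
    return c / (1.0 + 5.0 * c)

c_lin = find_unattenuated_dilution(measure)
print(c_lin)   # relative concentration at which readings become linear
```

In the patented procedure the absorbance of this sufficiently dilute solution then serves as the "threshold absorbance" against which field samples are matched.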
Advancing the Guánica Bay (Puerto Rico) Watershed Management Plan
Consideration of stakeholder values in watershed planning and management is a necessity, but sufficiently eliciting, understanding, and organizing those values can be daunting. Many studies have demonstrated the usefulness of formal decision analysis to integrate expert knowledge...
Electromagnetic signals from bare strange stars
NASA Astrophysics Data System (ADS)
Mannarelli, Massimo; Pagliaroli, Giulia; Parisi, Alessandro; Pilo, Luigi
2014-05-01
The crystalline color superconducting phase is believed to be the ground state of deconfined quark matter for sufficiently large values of the strange quark mass. This phase has the remarkable property of being more rigid than any known material. It can therefore sustain large shear stresses, supporting torsional oscillations of large amplitude. The torsional oscillations could lead to observable electromagnetic signals if strange stars have a crystalline color superconducting crust. Indeed, considering a simple model of a strange star with a bare quark matter surface, it turns out that a positive charge is localized in a narrow shell about ten Fermi thick beneath the star surface. The electrons needed to neutralize the positive charge of quarks spill in the star exterior forming an electromagnetically bounded atmosphere hundreds of Fermi thick. When a torsional oscillation is excited, for example by a stellar glitch, the positive charge oscillates with typical kHz frequencies, for a crust thickness of about one-tenth of the stellar radius, to hundreds of Hz, for a crust thickness of about nine-tenths of the stellar radius. Higher frequencies, of the order of few GHz, can be reached if the star crust is of the order of a few centimeters thick. We estimate the emitted power considering emission by an oscillating magnetic dipole, finding that it can be quite large, of the order of 1045 erg/s for a thin crust. The associated relaxation times are very uncertain, with values ranging between microseconds and minutes, depending on the crust thickness. The radiated photons will be in part absorbed by the electronic atmosphere, but a sizable fraction of them should be emitted by the star.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes and capable of a spatial resolution of >20 km. The optics and filters are emphasized.
NMSSM interpretation of the Galactic Center excess
NASA Astrophysics Data System (ADS)
Cheung, Clifford; Papucci, Michele; Sanford, David; Shah, Nausheen R.; Zurek, Kathryn M.
2014-10-01
We explore models for the GeV Galactic Center excess (GCE) observed by the Fermi Telescope, focusing on χχ → ff̄ annihilation processes in the Z3 next-to-minimal supersymmetric standard model (NMSSM). We begin by examining the requirements for a simplified model [parametrized by the couplings and masses of dark matter (DM) and mediator particles] to reproduce the GCE via χχ → ff̄, while simultaneously thermally producing the observed relic abundance. We apply the results of our simplified model to the Z3 NMSSM for singlino/Higgsino (S/H) or bino/Higgsino (B/H) DM. In the case of S/H DM, we find that the DM must be very close to a pseudoscalar resonance to be viable, and large tanβ and positive values of μ are preferred for evading direct detection constraints while simultaneously obtaining the observed Higgs mass. In the case of B/H DM, by contrast, the situation is much less tuned: annihilation generally occurs off resonance, and for large tanβ, direct detection constraints are easily satisfied by choosing μ sufficiently large and negative. The B/H model generally has a light, largely MSSM-like pseudoscalar with no accompanying charged Higgs, which could be searched for at the LHC.
Iterative h-minima-based marker-controlled watershed for cell nucleus segmentation.
Koyuncu, Can Fahrettin; Akhan, Ece; Ersahin, Tulin; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2016-04-01
Automated microscopy imaging systems facilitate high-throughput screening in molecular cell biology research. The first step of these systems is cell nucleus segmentation, which has a great impact on the success of the overall system. The marker-controlled watershed is a technique commonly used by previous studies for nucleus segmentation. These studies define their markers by finding regional minima on the intensity/gradient and/or distance-transform maps. They typically apply the h-minima transform beforehand to suppress noise on these maps. The selection of the h value is critical: unnecessarily small values do not sufficiently suppress the noise, resulting in false and oversegmented markers, while unnecessarily large ones suppress too many pixels, causing missing and undersegmented markers. Because cell nuclei show different characteristics within an image, the same h value may not define correct markers for all the nuclei. To address this issue, in this work we propose a new watershed algorithm that iteratively identifies its markers, considering a set of different h values. In each iteration, the proposed algorithm defines a set of candidates using a particular h value and selects the markers from those candidates provided that they fulfill the size requirement. Working with widefield fluorescence microscopy images, our experiments reveal that the use of multiple h values in our iterative algorithm leads to better segmentation results compared to its counterparts. © 2016 International Society for Advancement of Cytometry.
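The iterative multi-h marker selection can be illustrated with a toy 1-D analogue, using peak prominence as a stand-in for the h-minima transform (the authors work on 2-D intensity and distance maps; the use of `scipy.signal.find_peaks` here is an assumption for illustration only):

```python
import numpy as np
from scipy.signal import find_peaks

def iterative_markers(signal, h_values, min_size):
    """Toy 1-D analogue of iterative h-minima marker selection: sweep h
    from large to small, accept minima whose prominence is at least h and
    whose basin is wide enough, skipping positions already claimed."""
    markers = []
    for h in sorted(h_values, reverse=True):
        peaks, _ = find_peaks(-signal, prominence=h, width=min_size)
        for p in peaks:
            if all(abs(p - m) >= min_size for m in markers):
                markers.append(p)
    return sorted(markers)

# Two basins of very different depth: a single large h misses the shallow
# one, while a single small h would admit noise in a real image
x = np.linspace(0, 10, 1000)
sig = 5 - 4 * np.exp(-(x - 3) ** 2) - 0.8 * np.exp(-((x - 7) ** 2) / 0.5)
marks = iterative_markers(sig, h_values=[2.0, 0.5], min_size=20)
print(len(marks))   # both minima recovered
```

The key point mirrors the paper's argument: sweeping several h values recovers markers for objects of different depths that no single h value would capture simultaneously.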
Fokkema, M R; Smit, E N; Martini, I A; Woltil, H A; Boersma, E R; Muskiet, F A J
2002-11-01
Early suspicion of essential fatty acid deficiency (EFAD) or omega3-deficiency may rather focus on polyunsaturated fatty acid (PUFA) or long-chain PUFA (LCP) analyses than on clinical symptoms. We determined cut-off values for biochemical EFAD, omega3- and omega3/22:6omega3 [docosahexaenoic acid (DHA)]-deficiency by measurement of erythrocyte 20:3omega9 (Mead acid), 22:5omega6/20:4omega6 and 22:5omega6/22:6omega3, respectively. Cut-off values, based on 97.5 percentiles, were derived from an apparently healthy omnivorous group (six Dominica breast-fed newborns, 32 breast-fed and 27 formula+LCP-fed Dutch low-birth-weight infants, 31 Jerusalem infants, 33 Dutch 3.5-year-old infants, 69 omnivorous Dutch adults and seven Dominica mothers) and an apparently healthy group with low dietary LCP intake (81 formula-fed Dutch low-birth-weight infants, 12 Dutch vegans). The cut-off values were evaluated by applying them to an EFAD-suspected group of 108 mostly malnourished Pakistani children, three pediatric patients with chronic fat malabsorption (abetalipoproteinemia, congenital jejunal and biliary atresia) and one patient with a peroxisomal beta-oxidation disorder. Erythrocyte 20:3omega9, 22:5omega6/20:4omega6 and 22:5omega6/22:6omega3 proved age-dependent up to 0.2 years. Cut-off values for ages above 0.2 years were: 0.46 mol% 20:3omega9 for EFAD, 0.068 mol/mol 22:5omega6/20:4omega6 for omega3-deficiency, 0.22 mol/mol 22:5omega6/22:6omega3 for omega3/DHA-marginality and 0.48 mol/mol 22:5omega6/22:6omega3 for omega3/DHA-deficiency. Use of the RBC 20:3omega9 and 22:5omega6/20:4omega6 cut-off values identified 20.4% of the Pakistani subjects as EFAD+omega3-deficient, 12.9% as EFAD+omega3-sufficient, 38.9% as EFA-sufficient+omega3-deficient and 27.8% as EFA-sufficient+omega3-sufficient. The patient with the peroxisomal disorder was classified as EFA-sufficient, omega3-sufficient (based on RBC 22:5omega6/20:4omega6) and omega3/DHA-deficient (based on RBC 22:5omega6/22:6omega3).
The three other pediatric patients were classified as EFAD, omega3-deficient and omega3/DHA-deficient. Use of the combination of the present cut-off values for EFA, omega3 and omega3/DHA status assessment, as based on 97.5 percentiles, may serve for PUFA supplement intervention until better concepts have emerged.
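The reported cut-offs (for ages above 0.2 years) translate directly into a decision rule; the sketch below applies them as simple thresholds, with illustrative input values rather than patient data:

```python
def classify_fatty_acid_status(mead_203n9, r_225n6_204n6, r_225n6_226n3):
    """Apply the reported erythrocyte cut-offs for ages above 0.2 years:
    EFAD if 20:3omega9 exceeds 0.46 mol%; omega3-deficiency if
    22:5omega6/20:4omega6 exceeds 0.068 mol/mol; omega3/DHA status from
    22:5omega6/22:6omega3 (> 0.22 marginal, > 0.48 deficient)."""
    status = {
        "EFAD": mead_203n9 > 0.46,
        "omega3_deficient": r_225n6_204n6 > 0.068,
    }
    if r_225n6_226n3 > 0.48:
        status["omega3_DHA"] = "deficient"
    elif r_225n6_226n3 > 0.22:
        status["omega3_DHA"] = "marginal"
    else:
        status["omega3_DHA"] = "sufficient"
    return status

# Illustrative values, not patient data
print(classify_fatty_acid_status(0.30, 0.05, 0.25))
```

The exact handling of values falling on a cut-off is an assumption here; the abstract defines the thresholds but not the boundary convention.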
Preliminary study of a large span-distributed-load flying-wing cargo airplane concept
NASA Technical Reports Server (NTRS)
Jernell, L. S.
1978-01-01
An aircraft capable of transporting containerized cargo over intercontinental distances is analyzed. The specifications for payload weight, density, and dimensions in essence configure the wing and establish unusually low values of wing loading and aspect ratio. The structural weight comprises only about 18 percent of the design maximum gross weight. Although the geometric aspect ratio is 4.53, the winglet effect of the wing-tip-mounted vertical tails increases the effective aspect ratio to approximately 7.9. Sufficient control power to handle the large rolling moment of inertia dictates a relatively high minimum approach velocity of 315 km/hr (170 knots). The airplane has acceptable spiral, Dutch roll, and roll-damping modes. A hardened stability augmentation system is required. The most significant noise source is that of the airframe. However, for both take-off and approach, the levels are below the FAR-36 limit of 108 dB. The design mission fuel efficiency is approximately 50 percent greater than that of the most advanced, currently operational, large freighter aircraft. The direct operating cost is significantly lower than that of current freighters, the advantage increasing as fuel price increases.
Preliminary study of a large span-distributed-load flying-wing cargo airplane concept
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernell, L.S.
1978-05-01
An aircraft capable of transporting containerized cargo over intercontinental distances is analyzed. The specifications for payload weight, density, and dimensions in essence configure the wing and establish unusually low values of wing loading and aspect ratio. The structural weight comprises only about 18 percent of the design maximum gross weight. Although the geometric aspect ratio is 4.53, the winglet effect of the wing-tip-mounted vertical tails increases the effective aspect ratio to approximately 7.9. Sufficient control power to handle the large rolling moment of inertia dictates a relatively high minimum approach velocity of 315 km/hr (170 knots). The airplane has acceptable spiral, Dutch roll, and roll-damping modes. A hardened stability augmentation system is required. The most significant noise source is that of the airframe. However, for both take-off and approach, the levels are below the FAR-36 limit of 108 dB. The design mission fuel efficiency is approximately 50 percent greater than that of the most advanced, currently operational, large freighter aircraft. The direct operating cost is significantly lower than that of current freighters, the advantage increasing as fuel price increases.
Assessment of dynamic closure for premixed combustion large eddy simulation
NASA Astrophysics Data System (ADS)
Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan
2015-09-01
Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame-front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width or be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically in the LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. The detailed discussion presented in this paper suggests that the dynamic procedure works well, and physical insights and reasoning are provided to explain the observed behaviour.
16 CFR 233.2 - Retail price comparisons; comparable value comparisons.
Code of Federal Regulations, 2010 CFR
2010-01-01
... which substantial sales of the article are being made in the area—that is, a sufficient number of sales... technique. Retailer Doe advertises Brand X pens as having a “Retail Value $15.00, My Price $7.50,” when the...
NASA Technical Reports Server (NTRS)
Baskaran, S.
1974-01-01
The cut-off frequencies for high-order circumferential modes were calculated for various eccentricities of an elliptic duct section. The problem was studied with a view to reducing jet engine compressor noise by using elliptic ducts instead of circular ducts. The cut-off frequencies for even functions decrease with increasing eccentricity. For odd functions, the third-order eigenfrequencies oscillate as the eccentricity increases. The eigenfrequencies decrease for higher-order odd functions, since at higher orders they assume the same values as those for the even functions. Deformation of a circular pipe into an elliptic one of sufficiently large eccentricity produces only a small reduction in the cut-off frequency, provided the area of the pipe section is kept constant.
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2008-01-01
The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points, the initial and final points; the exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
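The product form of the MFIM can be sketched directly; the factor values, reference values, exponents, and scale below are hypothetical illustrations, not the launch-tank calibration:

```python
import numpy as np

def mfim(factors, refs, exponents, scale=1.0):
    """Multi-Factor Interaction Model in product form:
    value = scale * prod_i (factor_i / ref_i) ** e_i.
    Each exponent honors only the initial and final points and traces a
    monotonic path between them, as described in the abstract."""
    f = np.asarray(factors, dtype=float)
    r = np.asarray(refs, dtype=float)
    e = np.asarray(exponents, dtype=float)
    return scale * float(np.prod((f / r) ** e))

# Hypothetical two-factor example: each ratio below unity with a positive
# exponent reduces the predicted response multiplicatively
v = mfim(factors=[0.8, 0.9], refs=[1.0, 1.0], exponents=[0.5, 2.0], scale=10.0)
print(round(v, 3))   # 7.245
```

Because the factors enter multiplicatively, every pairwise and higher-order interaction is represented without adding separate interaction terms, which is the appeal of the product form.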
NASA Astrophysics Data System (ADS)
Dahlqvist, Per
1999-10-01
We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^{-2/3}) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.
Hopping in the Crowd to Unveil Network Topology.
Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco
2018-04-13
We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.
NASA Astrophysics Data System (ADS)
Sánchez, R.; Newman, D. E.; Mier, J. A.
2018-05-01
Fractional transport equations are used to build an effective model for transport across the running sandpile cellular automaton [Hwa et al., Phys. Rev. A 45, 7002 (1992), 10.1103/PhysRevA.45.7002]. It is shown that both temporal and spatial fractional derivatives must be considered to properly reproduce the sandpile transport features, which are governed by self-organized criticality, at least over sufficiently long or large scales. In contrast to previous applications of fractional transport equations to other systems, the specifics of sand motion require in this case that the spatial fractional derivatives used for the running sandpile must be of the completely asymmetrical Riesz-Feller type. Appropriate values for the fractional exponents that define these derivatives in the case of the running sandpile are obtained numerically.
Higgs boson gluon-fusion production in QCD at three loops.
Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; Herzog, Franz; Mistlberger, Bernhard
2015-05-29
We present the cross section for the production of a Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N^{3}LO) in perturbative QCD. The calculation is based on a method to perform a series expansion of the partonic cross section around the threshold limit to an arbitrary order. We perform this expansion to sufficiently high order to obtain the value of the hadronic cross section at N^{3}LO in the large top-mass limit. For renormalization and factorization scales equal to half the Higgs boson mass, the N^{3}LO corrections are of the order of +2.2%. The total scale variation at N^{3}LO is 3%, reducing the uncertainty due to missing higher-order QCD corrections by a factor of 3.
On the deformation of fluctuating chiral ribbons
NASA Astrophysics Data System (ADS)
Panyukov, S.; Rabin, Y.
2002-02-01
A theoretical analysis of the effect of force and torque on fluctuating chiral ribbons is presented. We find that when a filament with a straight centerline and a spontaneously twisted noncircular cross-section is subjected to a sufficiently strong extensional force, it exhibits an asymmetric response to large degrees of overwinding and unwinding. We construct the stability diagram that describes the buckling transition of such ribbons under the opposing action of force and torque and show that all the observed behaviors can be understood in terms of continuous transformations between straight and spiral states of the ribbon. The relation between our results and experimental observations on DNA is discussed and a new re-entrant spiral-to-rod transition is predicted at intermediate values of twist rigidity and applied force.
Stability analysis of the Euler discretization for SIR epidemic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanto, Agus
2014-06-19
In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For that discrete model, the existence of a disease-free equilibrium and an endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free and endemic equilibria are also derived. It is found that the local asymptotic stability of the existing equilibria is achieved only for a small time step size h. If h is increased past a critical value, then both equilibria lose their stability. Our numerical simulations show that complex dynamical behavior, such as bifurcation or chaos, appears for relatively large h. Both analytical and numerical results show that the discrete SIR model has a richer dynamical behavior than its continuous counterpart.
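As an illustration of the scheme described above, here is a minimal sketch of an explicit-Euler SIR iteration, with demography added so that an endemic equilibrium exists. The parameter values and the function name `euler_sir` are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch: explicit-Euler discretization of an SIR model with
# demography (birth/death rate mu), so that an endemic equilibrium exists.
# All parameter values below are illustrative, not the paper's.

def euler_sir(beta, gamma, mu, h, steps, s0=0.9, i0=0.1):
    s, i = s0, i0
    for _ in range(steps):
        ds = mu * (1.0 - s) - beta * s * i
        di = beta * s * i - (gamma + mu) * i
        s, i = s + h * ds, i + h * di
    return s, i

# R0 = beta / (gamma + mu) = 2, so the endemic equilibrium is
# (s*, i*) = (1/R0, mu*(R0 - 1)/beta) = (0.5, 0.1).
s, i = euler_sir(beta=0.5, gamma=0.2, mu=0.05, h=0.01, steps=200_000)
print(round(s, 3), round(i, 3))  # → 0.5 0.1 (converges for small h)
# For h past a critical value (roughly h > 8 here, by linearizing the
# Euler map at the equilibrium), the same iteration loses stability
# and oscillates or diverges, as the abstract describes.
```

A linear analysis of the Euler map at the equilibrium gives the critical step size directly, which is why the stability loss appears at a sharp threshold in h.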
Partition functions with spin in AdS2 via quasinormal mode methods
Keeler, Cynthia; Lisbão, Pedro; Ng, Gim Seng
2016-10-12
We extend the results of [1], computing one loop partition functions for massive fields with spin half in AdS_2 using the quasinormal mode method proposed by Denef, Hartnoll, and Sachdev [2]. We find the finite representations of SO(2,1) for spin zero and spin half, consisting of a highest weight state |h⟩ and descendants with non-unitary values of h. These finite representations capture the poles and zeroes of the one loop determinants. Together with the asymptotic behavior of the partition functions (which can be easily computed using a large mass heat kernel expansion), these are sufficient to determine the full answer for the one loop determinants. We also discuss extensions to higher dimensional AdS_{2n} and higher spins.
Visible-light absorption and large band-gap bowing of GaN_{1-x}Sb_x from first principles
Sheetz, R. Michael; Richter, Ernst; Andriotis, Antonis N.; ...
2011-08-01
Applicability of the Ga(Sb_x)N_{1-x} alloys for practical realization of photoelectrochemical water splitting is investigated using first-principles density functional theory incorporating the local density approximation and the generalized gradient approximation plus the Hubbard U parameter formalism. Our calculations reveal that a relatively small concentration of Sb impurities is sufficient to achieve a significant narrowing of the band gap, enabling absorption of visible light. Theoretical results predict that Ga(Sb_x)N_{1-x} alloys with 2-eV band gaps straddle the potential window at moderate to low pH values, thus indicating that dilute Ga(Sb_x)N_{1-x} alloys could be potential candidates for splitting water under visible light irradiation.
Hopping in the Crowd to Unveil Network Topology
NASA Astrophysics Data System (ADS)
Asllani, Malbor; Carletti, Timoteo; Di Patti, Francesca; Fanelli, Duccio; Piazza, Francesco
2018-04-01
We introduce a nonlinear operator to model diffusion on a complex undirected network under crowded conditions. We show that the asymptotic distribution of diffusing agents is a nonlinear function of the nodes' degree and saturates to a constant value for sufficiently large connectivities, at variance with standard diffusion in the absence of excluded-volume effects. Building on this observation, we define and solve an inverse problem, aimed at reconstructing the a priori unknown connectivity distribution. The method gathers all the necessary information by repeating a limited number of independent measurements of the asymptotic density at a single node, which can be chosen randomly. The technique is successfully tested against both synthetic and real data and is also shown to estimate with great accuracy the total number of nodes.
ERIC Educational Resources Information Center
Astley, Jeff, Ed.; Francis, Leslie J., Ed.; Robbins, Mandy, Ed.; Selcuk, Mualla, Ed.
2012-01-01
Religious educators today are called upon to enable young people to develop as fully-rounded human beings in a multicultural and multi-faith world. It is no longer sufficient to teach about the history of religions: religion is not relegated to the past. It is no longer sufficient to teach about the observable outward phenomena of religions:…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, King C.; Liu, Da-Jiang; Thiel, Patricia A.
Diffusion coefficients, D_N, for 2D vacancy nanopits are compared with those for 2D homoepitaxial adatom nanoislands on metal(100) surfaces, focusing on the variation of D_N with size, N. Here, N is measured in missing atoms for pits and adatoms for islands. Analysis of D_N is based on kinetic Monte Carlo simulations of a tailored stochastic lattice-gas model, where pit and island diffusion are mediated by periphery diffusion, i.e., by edge atom hopping. Precise determination of D_N versus N for typical parameters reveals a cyclical variation with an overall decrease in magnitude for increasing moderate O(10^2) ≤ N ≤ O(10^3). Monotonic decay, D_N ~ N^{-β}, is found for N ≥ O(10^2) with effective exponents, β = β_eff, for both pits and islands, both well below the macroscopic value of β_macro = 3/2. D_N values for vacancy pits are significantly lower (higher) than for adatom islands for moderate N in the case of low (high) kink rounding barrier. However, D_N values for pits and islands slowly merge, and β_eff → 3/2 for sufficiently large N. The latter feature is expected from continuum Langevin formulations appropriate for large sizes. Finally, we compare predictions from our model incorporating appropriate energetic parameters for Ag(100) with different sets of experimental data for diffusivity at 300 K, including assessment of β_eff, for experimentally observed sizes N from ~100 to ~1000.
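The effective-exponent analysis mentioned above reduces to a log-log fit of D_N against N. The sketch below uses synthetic power-law data and a hypothetical function name `effective_exponent`; it is not the paper's simulation output, only the fitting step:

```python
import numpy as np

# Hedged sketch: extract an effective decay exponent beta_eff from
# D_N ~ N^{-beta} data by a log-log least-squares fit.
def effective_exponent(sizes, diffusivities):
    slope, _ = np.polyfit(np.log(sizes), np.log(diffusivities), 1)
    return -slope

N = np.array([100, 200, 400, 800, 1600], dtype=float)
D = 2.0 * N ** -1.5              # clean macroscopic scaling, beta = 3/2
print(round(effective_exponent(N, D), 3))  # → 1.5
```

On real simulation data the fitted slope over a moderate-N window gives β_eff below the macroscopic 3/2, exactly the behavior the abstract reports.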
The Gap Procedure: for the identification of phylogenetic clusters in HIV-1 sequence data.
Vrbik, Irene; Stephens, David A; Roger, Michel; Brenner, Bluma G
2015-11-04
In the context of infectious disease, sequence clustering can be used to provide important insights into the dynamics of transmission. Cluster analysis is usually performed using a phylogenetic approach whereby clusters are assigned on the basis of sufficiently small genetic distances and high bootstrap support (or posterior probabilities). The computational burden involved in this phylogenetic threshold approach is a major drawback, especially when a large number of sequences are being considered. In addition, this method requires a skilled user to specify the appropriate threshold values, which may vary widely depending on the application. This paper presents the Gap Procedure, a distance-based clustering algorithm for the classification of DNA sequences sampled from individuals infected with the human immunodeficiency virus type 1 (HIV-1). Our heuristic algorithm bypasses the need for phylogenetic reconstruction, thereby supporting the quick analysis of large genetic data sets. Moreover, this fully automated procedure relies on data-driven gaps in sorted pairwise distances to infer clusters, so no user-specified threshold values are required. The clustering results obtained by the Gap Procedure on both real and simulated data closely agree with those found using the threshold approach, while requiring only a fraction of the time to complete the analysis. Apart from the dramatic gains in computational time, the Gap Procedure is highly effective in finding distinct groups of genetically similar sequences and obviates the need for subjective user-specified values. The clusters of genetically similar sequences returned by this procedure can be used to detect patterns in HIV-1 transmission and thereby aid in the prevention, treatment and containment of the disease.
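The core gap idea can be sketched in a few lines: sort all pairwise distances, take the largest gap as a data-driven threshold, and group by single linkage. The toy one-dimensional "distances" and the function name below are hypothetical stand-ins for the genetic distances used by the published algorithm:

```python
import itertools
import numpy as np

# Hedged sketch of a gap-based clustering step in the spirit of the
# Gap Procedure: the largest gap in the sorted pairwise distances
# supplies the threshold, so no user-specified cutoff is needed.
def gap_threshold_clusters(points):
    dists = sorted(abs(a - b) for a, b in itertools.combinations(points, 2))
    gaps = np.diff(dists)
    k = int(np.argmax(gaps))                     # largest gap in sorted distances
    threshold = (dists[k] + dists[k + 1]) / 2.0  # midpoint of that gap
    # single-linkage: merge labels of points closer than the threshold
    labels = list(range(len(points)))
    for i, j in itertools.combinations(range(len(points)), 2):
        if abs(points[i] - points[j]) < threshold:
            old, new = labels[j], labels[i]
            labels = [new if lab == old else lab for lab in labels]
    return labels

print(gap_threshold_clusters([0.0, 0.1, 0.2, 5.0, 5.1]))  # → [0, 0, 0, 3, 3]
```

The two tight groups are separated automatically because the within-group distances (~0.1) and between-group distances (~5) leave one dominant gap in the sorted list.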
Parametric Methods for Determining the Characteristics of Long-Term Metal Strength
NASA Astrophysics Data System (ADS)
Nikitin, V. I.; Rybnikov, A. I.
2018-06-01
A large number of parametric methods have been proposed to calculate the characteristics of the long-term strength of metals. All of them are based on the fact that temperature and time are mutually compensating factors in the processes of metal degradation at high temperature under the action of a constant stress. The well-known Larson-Miller, Sherby-Dorn, Manson-Haferd, Graham-Walles, and Trunin parametric equations are analyzed. The widely used Larson-Miller parameter was subjected to a detailed analysis. The application of this parameter to the calculation of ultimate long-term strength for steels and alloys is substantiated, provided that the heat resistance obeys an exponential dependence on temperature and a power dependence on stress. It is established that the coefficient C in the Larson-Miller equation is a characteristic of the heat resistance and differs for each material. Therefore, the use of a universal constant C = 20 in parametric calculations, as well as an a priori presetting of numerical C values for each individual group of materials, is unacceptable. It is shown how an exact value of the coefficient C can be determined for any material of interest, and how C can be obtained as a function of stress in cases where such a dependence is manifested. At present, long-term strength characteristics can be calculated to sufficient accuracy using the Larson-Miller parameter and the refinements described here, provided that the log σ-P dependence is linear and the calculations are performed in the interpolation range. Following these recommendations yields a linear parametric log σ-P dependence, from which the values of ultimate long-term strength for different materials can be determined to sufficient accuracy.
Magnitude Estimation for Large Earthquakes from Borehole Recordings
NASA Astrophysics Data System (ADS)
Eshaghi, A.; Tiampo, K. F.; Ghofrani, H.; Atkinson, G.
2012-12-01
We present a simple and fast magnitude determination technique for earthquake and tsunami early warning systems, based on strong ground motion prediction equations (GMPEs) in Japan. This method incorporates borehole strong motion records provided by the Kiban Kyoshin network (KiK-net) stations. We analyzed strong ground motion data from large magnitude earthquakes (5.0 ≤ M ≤ 8.1) with focal depths < 50 km and epicentral distances of up to 400 km from 1996 to 2010. Using both peak ground acceleration (PGA) and peak ground velocity (PGV) we derived GMPEs for Japan. These GMPEs are used as the basis for regional magnitude determination. Predicted magnitudes from PGA values (Mpga) and predicted magnitudes from PGV values (Mpgv) were defined. Mpga and Mpgv strongly correlate with the moment magnitude of the event, provided sufficient records for each event are available. The results show that Mpgv has a smaller standard deviation than Mpga when compared with the estimated magnitudes, and provides a more accurate early assessment of earthquake magnitude. We test this new method by estimating the magnitude of the 2011 Tohoku earthquake and present the results of this estimation. PGA and PGV from borehole recordings allow us to estimate the magnitude of this event 156 s and 105 s after the earthquake onset, respectively. We demonstrate that the incorporation of borehole strong ground-motion records immediately available after the occurrence of large earthquakes significantly increases the accuracy of earthquake magnitude estimation, with an associated improvement in earthquake and tsunami early warning system performance. (Figure: moment magnitude versus predicted magnitude, Mpga and Mpgv.)
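The magnitude-inversion step can be sketched with a generic GMPE of the form log10(PGV) = a·M + b·log10(R) + c. The coefficients a, b, c and the function name below are placeholders, not the regression values derived in the study:

```python
import math

# Hedged sketch: inverting a generic ground-motion prediction equation
#   log10(PGV) = a*M + b*log10(R) + c
# to estimate magnitude from an observed peak ground velocity.
# Coefficients are illustrative placeholders, not the study's values.
def magnitude_from_pgv(pgv_cm_s, dist_km, a=0.7, b=-1.5, c=-1.0):
    return (math.log10(pgv_cm_s) - b * math.log10(dist_km) - c) / a

# Forward check: an M 7.0 event at 100 km predicts
# log10(PGV) = 0.7*7 - 1.5*2 - 1 = 0.9, i.e. PGV ≈ 7.94 cm/s;
# inverting that observation returns the magnitude.
m = magnitude_from_pgv(10 ** 0.9, 100.0)
print(round(m, 2))  # → 7.0
```

In an early-warning setting the same inversion is applied to each station's PGA or PGV as it arrives, and the per-station magnitudes are averaged.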
Porter, Charlotte A; Bradley, Kevin M; McGowan, Daniel R
2018-05-01
The aim of this study was to verify, with a large dataset of 1394 51Cr-EDTA glomerular filtration rate (GFR) studies, the equivalence of slope-intercept and single-sample GFR. Raw data from 1394 patient studies were used to calculate four-sample slope-intercept GFR in addition to four individual single-sample GFR values (blood samples taken at 90, 150, 210 and 270 min after injection). The percentage differences between the four-sample slope-intercept and each of the single-sample GFR values were calculated to identify the optimum single-sample time point. Having identified the optimum time point, the percentage difference between the slope-intercept and optimal single-sample GFR was calculated across a range of GFR values to investigate whether there was a GFR value below which the two methodologies cannot be considered equivalent. It was found that the lowest percentage difference between slope-intercept and single-sample GFR was for the third blood sample, taken at 210 min after injection. The median percentage difference was 2.5% and only 6.9% of patient studies had a percentage difference greater than 10%. Above a GFR value of 30 ml/min/1.73 m^2, the median percentage difference between the slope-intercept and optimal single-sample GFR values was below 10%, and so it was concluded that, above this value, the two techniques are sufficiently equivalent. This study supports the recommendation of performing single-sample GFR measurements for GFRs greater than 30 ml/min/1.73 m^2.
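The equivalence criterion above reduces to a simple percentage-difference computation, sketched here with illustrative numbers (the 10% tolerance follows the study's criterion; the function names are hypothetical):

```python
# Hedged sketch of the equivalence check: percentage difference between
# slope-intercept GFR and a single-sample GFR, against a 10% criterion.
def percent_difference(gfr_slope_intercept, gfr_single_sample):
    return 100.0 * abs(gfr_single_sample - gfr_slope_intercept) / gfr_slope_intercept

def sufficiently_equivalent(gfr_si, gfr_ss, tolerance_pct=10.0):
    return percent_difference(gfr_si, gfr_ss) <= tolerance_pct

print(percent_difference(80.0, 82.0))       # → 2.5
print(sufficiently_equivalent(80.0, 82.0))  # → True
```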
Provably Secure Password-based Authentication in TLS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abdalla, Michel; Emmanuel, Bresson; Chevassut, Olivier
2005-12-20
In this paper, we show how to design an efficient, provably secure password-based authenticated key exchange mechanism specifically for the TLS (Transport Layer Security) protocol. The goal is to provide a technique that allows users to employ (short) passwords to securely identify themselves to servers. As our main contribution, we describe a new password-based technique for user authentication in TLS, called Simple Open Key Exchange (SOKE). Loosely speaking, the SOKE ciphersuites are unauthenticated Diffie-Hellman ciphersuites in which the client's Diffie-Hellman ephemeral public value is encrypted using a simple mask generation function. The mask is simply a constant value raised to the power of (a hash of) the password. The SOKE ciphersuites, improving on previous password-based authentication ciphersuites for TLS, combine the following features. First, SOKE has formal security arguments; the proof of security, based on the computational Diffie-Hellman assumption, is in the random oracle model and holds for concurrent executions and for arbitrarily large password dictionaries. Second, SOKE is computationally efficient; in particular, it only needs operations in a sufficiently large prime-order subgroup for its Diffie-Hellman computations (no safe primes). Third, SOKE provides good protocol flexibility because the user identity and password are only required once a SOKE ciphersuite has actually been negotiated, and after the server has sent a server identity.
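A toy sketch of the masking idea described above: the client's Diffie-Hellman value is multiplied by a mask, where the mask is a public constant raised to a hash of the password. The group parameters here are tiny demonstration values; a real deployment needs a sufficiently large prime-order subgroup, as the abstract notes, and this is not the actual SOKE protocol flow:

```python
import hashlib

# Hedged toy sketch of SOKE-style masking (NOT secure: toy parameters).
p, g = 2 ** 13 - 1, 3                  # tiny prime and base, demo only

def h(password):                       # hash the password to an exponent
    digest = hashlib.sha256(password.encode()).digest()
    return int.from_bytes(digest, "big") % (p - 1)

password = "correct horse"
mask = pow(g, h(password), p)          # constant raised to H(password)

x, y = 1234, 5678                      # ephemeral secrets (fixed for the demo)
client_msg = (pow(g, x, p) * mask) % p # client sends masked g^x
server_msg = pow(g, y, p)              # server sends plain g^y

# The server, knowing the password, strips the mask; both sides then
# derive the same Diffie-Hellman secret g^(xy).
unmasked = (client_msg * pow(mask, -1, p)) % p
assert pow(unmasked, y, p) == pow(server_msg, x, p)
print("shared secrets match")
```

Without the password, an eavesdropper cannot remove the mask, which is what ties the key exchange to the password.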
Tattini, Lorenzo; Olmi, Simona; Torcini, Alessandro
2012-06-01
In this article, we investigate the role of connectivity in promoting coherent activity in excitatory neural networks. In particular, we would like to understand if the onset of collective oscillations can be related to a minimal average connectivity and how this critical connectivity depends on the number of neurons in the networks. For these purposes, we consider an excitatory random network of leaky integrate-and-fire pulse coupled neurons. The neurons are connected as in a directed Erdös-Renyi graph with average connectivity
NASA Astrophysics Data System (ADS)
Peng, Zhang; Liangfa, Xie; Ming, Wei; Jianli, Li
In the shipbuilding industry, the welding efficiency of ship plate not only has a great effect on the construction cost of a ship, but also affects the construction speed and determines the delivery cycle. In this work, a steel plate suitable for large heat input welding was developed. The composition of the steel, with small amounts of Nb and Ti and a large amount of Mn, was designed along a micro-alloying route. The C content and the carbon equivalent were also designed to a low level. Oxide metallurgy technology was used during the smelting of the steel. The TMCP rolling process was controlled at a low rolling temperature, and ultra-fast cooling was applied in order to control the transformation of the microstructure. The microstructure of the steel plate was controlled to be a mixed microstructure of low-carbon bainite and ferrite. A large number of oxide particles were dispersed in the microstructure of the steel, which had a positive effect on the mechanical properties and welding performance. The mechanical properties of the steel plate were excellent, with a longitudinal Akv value at -60 °C of more than 200 J. The toughness of the WM and HAZ remained excellent after the steel plate was welded with a large heat input of 100-250 kJ/cm. The steel plate produced as described above can meet the requirements of large heat input welding.
An inverse problem for Gibbs fields with hard core potential
NASA Astrophysics Data System (ADS)
Koralov, Leonid
2007-05-01
It is well known that for a regular stable potential of pair interaction and a small value of activity one can define the corresponding Gibbs field (a measure on the space of configurations of points in R^d). In this paper we consider a converse problem. Namely, we show that for a sufficiently small constant ρ̄_1 and a sufficiently small function ρ̄_2(x), x ∈ R^d, that is equal to zero in a neighborhood of the origin, there exist a hard core pair potential and a value of activity such that ρ̄_1 is the density and ρ̄_2 is the pair correlation function of the corresponding Gibbs field.
Code of Federal Regulations, 2010 CFR
2010-04-01
... equals the product of the present value of the anticipated excess inclusions and the highest rate of tax... sufficient to satisfy the accrued taxes. (3) Computations. The present value of the expected future... formula test if the present value of the anticipated tax liabilities associated with holding the residual...
Code of Federal Regulations, 2011 CFR
2011-04-01
... equals the product of the present value of the anticipated excess inclusions and the highest rate of tax... sufficient to satisfy the accrued taxes. (3) Computations. The present value of the expected future... formula test if the present value of the anticipated tax liabilities associated with holding the residual...
Code of Federal Regulations, 2014 CFR
2014-04-01
... equals the product of the present value of the anticipated excess inclusions and the highest rate of tax... sufficient to satisfy the accrued taxes. (3) Computations. The present value of the expected future... formula test if the present value of the anticipated tax liabilities associated with holding the residual...
Code of Federal Regulations, 2012 CFR
2012-04-01
... equals the product of the present value of the anticipated excess inclusions and the highest rate of tax... sufficient to satisfy the accrued taxes. (3) Computations. The present value of the expected future... formula test if the present value of the anticipated tax liabilities associated with holding the residual...
Code of Federal Regulations, 2013 CFR
2013-04-01
... equals the product of the present value of the anticipated excess inclusions and the highest rate of tax... sufficient to satisfy the accrued taxes. (3) Computations. The present value of the expected future... formula test if the present value of the anticipated tax liabilities associated with holding the residual...
ERIC Educational Resources Information Center
Al-Musawi, Nu'man; Al-Hashem, Abdulla; Karam, Ebraheem
2003-01-01
Explored the role of the colleges of education in developing college students' humanistic values. Surveys of students in Bahrain and Kuwait indicated that colleges of education in the Arab Gulf States really contribute to the formation of human values among college students. However, schools of education are not devoting sufficient time and effort…
Code of Federal Regulations, 2014 CFR
2014-10-01
... accompanied by supporting materials sufficient to calculate required adjustments to each PCI, API, and SBI... that results in an API value that is equal to or less than the applicable PCI value, must be... proposed rates. (d) Each price cap tariff filing that proposes rates that will result in an API value that...
Code of Federal Regulations, 2013 CFR
2013-10-01
... accompanied by supporting materials sufficient to calculate required adjustments to each PCI, API, and SBI... that results in an API value that is equal to or less than the applicable PCI value, must be... proposed rates. (d) Each price cap tariff filing that proposes rates that will result in an API value that...
Code of Federal Regulations, 2012 CFR
2012-10-01
... accompanied by supporting materials sufficient to calculate required adjustments to each PCI, API, and SBI... that results in an API value that is equal to or less than the applicable PCI value, must be... proposed rates. (d) Each price cap tariff filing that proposes rates that will result in an API value that...
Code of Federal Regulations, 2011 CFR
2011-10-01
... accompanied by supporting materials sufficient to calculate required adjustments to each PCI, API, and SBI... that results in an API value that is equal to or less than the applicable PCI value, must be... proposed rates. (d) Each price cap tariff filing that proposes rates that will result in an API value that...
Global Hopf bifurcation analysis on a BAM neural network with delays
NASA Astrophysics Data System (ADS)
Sun, Chengjun; Han, Maoan; Pang, Xiaoming
2007-01-01
A delayed differential equation that models a bidirectional associative memory (BAM) neural network with four neurons is considered. By using a global Hopf bifurcation theorem for FDEs and Bendixson's criterion for high-dimensional ODEs, a group of sufficient conditions for the system to have multiple periodic solutions is obtained when the sum of the delays is sufficiently large.
A simulation analysis to characterize the dynamics of vaccinating behaviour on contact networks.
Perisic, Ana; Bauch, Chris T
2009-05-28
Human behavior influences infectious disease transmission, and numerous "prevalence-behavior" models have analyzed this interplay. These previous analyses assumed homogeneously mixing populations without spatial or social structure. However, spatial and social heterogeneity are known to significantly impact transmission dynamics and are particularly relevant for certain diseases. Previous work has demonstrated that social contact structure can change the individual incentive to vaccinate, thus enabling eradication of a disease under a voluntary vaccination policy when the corresponding homogeneous mixing model predicts that eradication is impossible due to free rider effects. Here, we extend this work and characterize the range of possible behavior-prevalence dynamics on a network. We simulate transmission of a vaccine-preventable infection through a random, static contact network. Individuals choose whether or not to vaccinate on any given day according to perceived risks of vaccination and infection. We find three possible outcomes for behavior-prevalence dynamics on this type of network: small final number vaccinated and final epidemic size (due to rapid control through voluntary ring vaccination); large final number vaccinated and significant final epidemic size (due to imperfect voluntary ring vaccination), and little or no vaccination and large final epidemic size (corresponding to little or no voluntary ring vaccination). We also show that the social contact structure enables eradication under a broad range of assumptions, except when vaccine risk is sufficiently high, the disease risk is sufficiently low, or individuals vaccinate too late for the vaccine to be effective. 
For populations where infection can spread only through social contact network, relatively small differences in parameter values relating to perceived risk or vaccination behavior at the individual level can translate into large differences in population-level outcomes such as final size and final number vaccinated. The qualitative outcome of rational, self interested behaviour under a voluntary vaccination policy can vary substantially depending on interactions between social contact structure, perceived vaccine and disease risks, and the way that individual vaccination decision-making is modelled.
A simulation analysis to characterize the dynamics of vaccinating behaviour on contact networks
2009-01-01
Background Human behavior influences infectious disease transmission, and numerous "prevalence-behavior" models have analyzed this interplay. These previous analyses assumed homogeneously mixing populations without spatial or social structure. However, spatial and social heterogeneity are known to significantly impact transmission dynamics and are particularly relevant for certain diseases. Previous work has demonstrated that social contact structure can change the individual incentive to vaccinate, thus enabling eradication of a disease under a voluntary vaccination policy when the corresponding homogeneous mixing model predicts that eradication is impossible due to free rider effects. Here, we extend this work and characterize the range of possible behavior-prevalence dynamics on a network. Methods We simulate transmission of a vaccine-preventable infection through a random, static contact network. Individuals choose whether or not to vaccinate on any given day according to perceived risks of vaccination and infection. Results We find three possible outcomes for behavior-prevalence dynamics on this type of network: small final number vaccinated and final epidemic size (due to rapid control through voluntary ring vaccination); large final number vaccinated and significant final epidemic size (due to imperfect voluntary ring vaccination), and little or no vaccination and large final epidemic size (corresponding to little or no voluntary ring vaccination). We also show that the social contact structure enables eradication under a broad range of assumptions, except when vaccine risk is sufficiently high, the disease risk is sufficiently low, or individuals vaccinate too late for the vaccine to be effective.
Conclusion For populations where infection can spread only through social contact network, relatively small differences in parameter values relating to perceived risk or vaccination behavior at the individual level can translate into large differences in population-level outcomes such as final size and final number vaccinated. The qualitative outcome of rational, self interested behaviour under a voluntary vaccination policy can vary substantially depending on interactions between social contact structure, perceived vaccine and disease risks, and the way that individual vaccination decision-making is modelled. PMID:19476616
Space Geodesy and the New Madrid Seismic Zone
NASA Astrophysics Data System (ADS)
Smalley, Robert; Ellis, Michael A.
2008-07-01
One of the most contentious issues related to earthquake hazards in the United States centers on the midcontinent and the origin, magnitudes, and likely recurrence intervals of the 1811-1812 New Madrid earthquakes that occurred there. The stakeholder groups in the debate (local and state governments, reinsurance companies, American businesses, and the scientific community) are similar to the stakeholder groups in regions more famous for large earthquakes. However, debate about New Madrid seismic hazard has been fiercer because of the lack of two fundamental components of seismic hazard estimation: an explanatory model for large, midplate earthquakes; and sufficient or sufficiently precise data about the causes, effects, and histories of such earthquakes.
Stability and stabilisation of a class of networked dynamic systems
NASA Astrophysics Data System (ADS)
Liu, H. B.; Wang, D. Q.
2018-04-01
We investigate the stability and stabilisation of a linear time invariant networked heterogeneous system with arbitrarily connected subsystems. A new linear matrix inequality based sufficient and necessary condition for the stability is derived, based on which the stabilisation is provided. The obtained conditions efficiently utilise the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, a sufficient condition only dependent on each individual subsystem is also presented for the stabilisation of the networked systems with a large scale. Numerical simulations show that these conditions are computationally valid in the analysis and synthesis of a large-scale networked system.
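For contrast with the LMI conditions described above, here is a minimal sketch of the direct alternative such conditions improve upon: assembling the full networked system matrix (block-diagonal subsystem dynamics plus a sparse coupling matrix) and applying the textbook Hurwitz eigenvalue test. The matrices are illustrative, not taken from the paper:

```python
import numpy as np

# Hedged sketch: direct stability check for a networked LTI system
#   x' = (A + C) x,
# with A block-diagonal (local subsystem dynamics) and C a sparse
# subsystem-coupling matrix. This is the plain Hurwitz test, not the
# paper's LMI condition; it scales poorly for large networks, which
# is exactly what subsystem-wise conditions are meant to avoid.
def is_stable(matrix):
    return bool(np.all(np.linalg.eigvals(matrix).real < 0))

A = np.diag([-1.0, -2.0, -1.5, -3.0])   # subsystems, each stable alone
C = np.zeros((4, 4))
C[0, 2] = C[2, 0] = 0.1                 # weak inter-subsystem coupling

print(is_stable(A + C))   # → True: weak coupling preserves stability
C[0, 2] = C[2, 0] = 5.0   # strong coupling can destabilize the network
print(is_stable(A + C))   # → False
```

The example also shows why conditions that depend only on individual subsystems plus the connection matrix are valuable: the global eigenvalue computation must be redone whenever any coupling changes.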
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.
Feasibility of generating an artificial burst in a turbulent boundary layer, phase 2
NASA Technical Reports Server (NTRS)
Gad-El-hak, Mohamed
1989-01-01
Viscous drag accounts for about half of the total drag on commercial aircraft at subsonic cruise conditions. Two avenues are available to achieve drag reduction: either laminar flow control or turbulence manipulation. The present research deals with the latter approach. The primary objective of Phase 2 research was to investigate experimentally the feasibility of substantially reducing the skin-friction drag in a turbulent boundary layer. The method combines the beneficial effects of suction and a longitudinally ribbed surface. At a sufficiently large spanwise separation, the streamwise grooves act as nucleation sites, causing a focusing of low-speed streaks over the peaks. Suction is then applied intermittently through longitudinal slots located at selected positions along those peaks to obliterate the low-speed regions and to prevent bursting. Phase 2 research was divided into two tasks. In the first, selective suction from a single streamwise slot was used to eliminate either a single burst-like event or a periodic train of artificially generated bursts in laminar and turbulent boundary layers that develop on a flat plate towed in a water channel. The results indicate that equivalent values of the suction coefficient as low as 0.0006 were sufficient to eliminate the artificially generated bursts in a laminar boundary layer.
Determination of the dissipation in superconducting Josephson junctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mugnai, D., E-mail: d.mugnai@ifac.cnr.it; Ranfagni, A.; Cacciari, I.
2015-02-07
Results for the macroscopic quantum tunneling rate out of the metastable state of Josephson junctions are examined with a view to determining the effect of dissipation. We adopt a simple criterion according to which the effect of dissipation can be evaluated by analyzing the shortening of the semiclassical traversal time of the barrier. In almost all the cases considered, especially those with relatively large capacitance values, the relative time shortening turns out to be about 20%, with a corresponding quality factor Q ≃ 5.5. However, beyond the specific cases considered here, still in the regime of moderate dissipation, the method is applicable to other situations with different values of the quality factor. The method allows, within the error limits, a reliable determination of the load resistance R_L, the least accessible quantity in the framework of the resistively and capacitively shunted junction model, provided that the characteristics of the junction (intrinsic capacitance, critical current, and the ratio of the bias current to the critical one) are known with sufficient accuracy.
Movement Activity Determination with Health-related Variables of University Students in Kosice.
Bakalár, Peter; Zvonar, Martin; Sedlacek, Jaromir; Lenkova, Rut; Sagat, Peter; Vojtasko, Lubos; Liptakova, Erika; Barcalova, Miroslava
2018-06-01
There is currently strong scientific evidence of the negative health consequences of physical inactivity. One of the potential tools for promoting physical activity at the institutional level of the Ecological model is to create conditions and settings that enable pupils, students and employees to engage in some form of physical activity. However, physical activity subjects are being eliminated from the study programs at Slovak universities. The purpose of the study was to gather current evidence on the level of structured physical activity and health-related variables in university students in Košice. The sample consisted of 1,993 or, more precisely, 1,398 students who attended two universities in Košice. To collect data, students completed a questionnaire and were tested for body height, body weight, circumferential measures and percentage body fat. The university students did not engage sufficiently in structured physical activity. A large number of students had either low or high values of percentage body fat and BMI, and high WHR values. Our findings show that research into the physical activity of university students should receive more attention.
Modeling multilayer x-ray reflectivity using genetic algorithms
NASA Astrophysics Data System (ADS)
Sánchez del Río, M.; Pareschi, G.; Michetschläger, C.
2000-06-01
The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thickness, density, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum. This is a difficult task when the topology of the variable space is highly structured. We apply global optimization methods to fit multilayer reflectivity. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual; the population is a collection of individuals. Each generation is built from the parent generation by applying operators (selection, crossover, mutation, etc.) to its members. The pressure of selection drives the population to include "good" individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C and W/Si multilayers using genetic algorithms are presented. This method can also be applied to design multilayers optimized for a target application.
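The evolutionary loop described in the abstract (individuals as parameter sets; selection, crossover and mutation over generations) can be sketched as follows, fitting a stand-in damped-cosine "reflectivity" model to synthetic data rather than a real multilayer code; every number here is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy genetic algorithm fitting a stand-in "reflectivity" model (a damped
# cosine) to synthetic data. Individuals are parameter sets; each generation
# applies selection, crossover and mutation, as in the abstract.

def model(p, x):
    thickness, roughness = p
    return np.cos(thickness * x) * np.exp(-roughness * x)

x = np.linspace(0.0, 5.0, 200)
p_true = np.array([2.0, 0.3])
data = model(p_true, x)

def fitness(p):
    return -np.mean((model(p, x) - data) ** 2)  # higher is better

pop = rng.uniform([0.5, 0.0], [4.0, 1.0], size=(60, 2))  # initial population
for _ in range(120):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)][-30:]        # selection: keep best half
    moms = parents[rng.integers(0, 30, size=60)]
    dads = parents[rng.integers(0, 30, size=60)]
    mask = rng.random((60, 2)) < 0.5
    pop = np.where(mask, moms, dads)               # uniform crossover
    pop += rng.normal(0.0, 0.02, size=pop.shape)   # mutation

best = pop[np.argmax([fitness(p) for p in pop])]
print(best)  # should approach p_true = [2.0, 0.3]
```

Because selection is global over the population, the search does not need a starting point close to the optimum, which is the advantage over local non-linear fitting noted in the abstract.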
NASA Astrophysics Data System (ADS)
Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.; Morel, Jim E.
2009-09-01
The Fokker-Planck equation is a widely used approximation for modeling the Compton scattering of photons in high energy density applications. In this paper, we perform a stability analysis of three implicit time discretizations for the Compton-Scattering Fokker-Planck equation. Specifically, we examine (i) a Semi-Implicit (SI) scheme that employs backward-Euler differencing but evaluates temperature-dependent coefficients at their beginning-of-time-step values, (ii) a Fully Implicit (FI) discretization that instead evaluates temperature-dependent coefficients at their end-of-time-step values, and (iii) a Linearized Implicit (LI) scheme, which is developed by linearizing the temperature dependence of the FI discretization within each time step. Our stability analysis shows that the FI and LI schemes are unconditionally stable and cannot generate oscillatory solutions regardless of time-step size, whereas the SI discretization can suffer from instabilities and nonphysical oscillations for sufficiently large time steps. With the results of this analysis, we present time-step limits for the SI scheme that prevent undesirable behavior. We test the validity of our stability analysis and time-step limits with a set of numerical examples.
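The three coefficient treatments can be contrasted on a scalar stand-in problem, u' = -c(u)u with c(u) = u. Note that on this toy equation all three schemes happen to remain stable; the SI instability reported in the paper arises from the coupled temperature dependence of the full Fokker-Planck system, so the sketch only illustrates how the three discretizations are constructed:

```python
import numpy as np

# Scalar stand-in for the three implicit treatments of a temperature-
# dependent coefficient: u' = -c(u) * u with c(u) = u (i.e. u' = -u**2).

def step_si(u, dt):
    # Semi-implicit: backward Euler with c frozen at its start-of-step value.
    return u / (1.0 + dt * u)

def step_fi(u, dt):
    # Fully implicit: solve u1 = u - dt * u1**2 exactly (positive root).
    return (-1.0 + np.sqrt(1.0 + 4.0 * dt * u)) / (2.0 * dt)

def step_li(u, dt):
    # Linearized implicit: linearize -u**2 about the start-of-step value.
    return u * (1.0 + dt * u) / (1.0 + 2.0 * dt * u)

u_si = u_fi = u_li = 10.0
dt = 5.0  # deliberately large time step
for _ in range(20):
    u_si, u_fi, u_li = step_si(u_si, dt), step_fi(u_fi, dt), step_li(u_li, dt)

print(u_si, u_fi, u_li)  # all decay monotonically on this toy problem
```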
Attached flow structure and streamwise energy spectra in a turbulent boundary layer
NASA Astrophysics Data System (ADS)
Srinath, S.; Vassilicos, J. C.; Cuvier, C.; Laval, J.-P.; Stanislas, M.; Foucaut, J.-M.
2018-05-01
On the basis of (i) particle image velocimetry data of a turbulent boundary layer with large field of view and good spatial resolution and (ii) a mathematical relation between the energy spectrum and specifically modeled flow structures, we show that the scalings of the streamwise energy spectrum E11(kx) in a wave-number range directly affected by the wall are determined by wall-attached eddies but are not given by the Townsend-Perry attached eddy model's prediction of these spectra, at least at the Reynolds numbers Reτ considered here, which are between 10^3 and 10^4. Instead, we find E11(kx) ~ kx^(-1-p), where p varies smoothly with distance to the wall from negative values in the buffer layer to positive values in the inertial layer. The exponent p characterizes the turbulence levels inside wall-attached streaky structures conditional on the length of these structures. A particular consequence is that the skin friction velocity is not sufficient to scale E11(kx) for wave numbers directly affected by the wall.
Svetlichny's inequality and genuine tripartite nonlocality in three-qubit pure states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajoy, Ashok; NMR Research Centre, Indian Institute of Science, Bangalore 560012; Rungta, Pranaw
2010-05-15
The violation of Svetlichny's inequality (SI) [Phys. Rev. D 35, 3066 (1987)] is sufficient but not necessary for genuine tripartite nonlocal correlations. Here we quantify the relationship between tripartite entanglement and the maximum expectation value of the Svetlichny operator (which is bounded from above by the inequality) for the two inequivalent subclasses of pure three-qubit states: the Greenberger-Horne-Zeilinger (GHZ) class and the W class. We show that the maximum for the GHZ-class states reduces to Mermin's inequality [Phys. Rev. Lett. 65, 1838 (1990)] modulo a constant factor, and although it is a function of the three-tangle and the residual concurrence, large numbers of states do not violate the inequality. We further show that by design SI is more suitable as a measure of genuine tripartite nonlocality between the three qubits in the W-class states, and the maximum is a certain function of the bipartite entanglement (the concurrence) of the three reduced states; only when their sum attains a certain threshold value do they violate the inequality.
Delatour, Vincent; Lalere, Beatrice; Saint-Albin, Karène; Peignaux, Maryline; Hattchouel, Jean-Marc; Dumont, Gilles; De Graeve, Jacques; Vaslin-Reimann, Sophie; Gillery, Philippe
2012-11-20
The reliability of biological tests is a major issue for patient care in terms of public health that involves high economic stakes. Reference methods, as well as regular external quality assessment schemes (EQAS), are needed to monitor the analytical performance of field methods. However, control material commutability is a major concern to assess method accuracy. To overcome material non-commutability, we investigated the possibility of using lyophilized serum samples together with a limited number of frozen serum samples to assign matrix-corrected target values, taking the example of glucose assays. Trueness of the current glucose assays was first measured against a primary reference method by using human frozen sera. Methods using hexokinase and glucose oxidase with spectroreflectometric detection proved very accurate, with bias ranging between -2.2% and +2.3%. Bias of methods using glucose oxidase with spectrophotometric detection was +4.5%. Matrix-related bias of the lyophilized materials was then determined and ranged from +2.5% to -14.4%. Matrix-corrected target values were assigned and used to assess trueness of 22 sub-peer groups. We demonstrated that matrix-corrected target values can be a valuable tool to assess field method accuracy in large scale surveys where commutable materials are not available in sufficient amount with acceptable costs. Copyright © 2012 Elsevier B.V. All rights reserved.
A novel method for measuring polymer-water partition coefficients.
Zhu, Tengyi; Jafvert, Chad T; Fu, Dafang; Hu, Yue
2015-11-01
Low density polyethylene (LDPE) often is used as the sorbent material in passive sampling devices to estimate the average temporal chemical concentration in water bodies or sediment pore water. To calculate water phase chemical concentrations from LDPE concentrations accurately, it is necessary to know the LDPE-water partition coefficients (KPE-w) of the chemicals of interest. However, even moderately hydrophobic chemicals have large KPE-w values, making direct measurement experimentally difficult. In this study we evaluated a simple three phase system from which KPE-w can be determined easily and accurately. In the method, chemical equilibrium distribution between LDPE and a surfactant micelle pseudo-phase is measured, with the ratio of these concentrations equal to the LDPE-micelle partition coefficient (KPE-mic). By employing sufficient mass of polymer and surfactant (Brij 30), the mass of chemical in the water phase remains negligible, albeit in equilibrium. In parallel, the micelle-water partition coefficient (Kmic-w) is determined experimentally. KPE-w is the product of KPE-mic and Kmic-w. The method was applied to measure values of KPE-w for 17 polycyclic aromatic hydrocarbons, 37 polychlorinated biphenyls, and 9 polybrominated diphenylethers. These values were compared to literature values. Mass fraction-based chemical activity coefficients (γ) were determined in each phase and showed that for each chemical, the micelles and LDPE had nearly identical affinity. Copyright © 2014 Elsevier Ltd. All rights reserved.
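The bookkeeping behind the three-phase method reduces to a product of the two measured coefficients, K_PE-w = K_PE-mic × K_mic-w (a sum in log10 units), after which water concentrations follow from C_w = C_PE / K_PE-w. A minimal sketch with invented numbers:

```python
# Three-phase bookkeeping: K_PE-w is recovered as the product of the two
# measured coefficients, i.e. a sum in log10 units. Numbers are invented.

def log_kpe_w(log_kpe_mic, log_kmic_w):
    """log10 of the LDPE-water partition coefficient."""
    return log_kpe_mic + log_kmic_w

def c_water(c_pe, log_k):
    """Water-phase concentration inferred from the LDPE concentration."""
    return c_pe / 10.0 ** log_k

# Hypothetical hydrophobic chemical:
lk = log_kpe_w(1.8, 4.2)      # LDPE-micelle piece + micelle-water piece
print(lk, c_water(1.0, lk))   # K_PE-w = 10**6, so C_w = C_PE / 10**6
```

Working in log units is the usual convention here because the individual coefficients span many orders of magnitude.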
GENERAL: Entanglement sudden death induced by the Dzialoshinskii-Moriya interaction
NASA Astrophysics Data System (ADS)
Zeng, Hong-Fang; Shao, Bin; Yang, Lin-Guang; Li, Jian; Zou, Jian
2009-08-01
In this paper, we study the entanglement dynamics of a two-spin Heisenberg XYZ model with the Dzialoshinskii-Moriya (DM) interaction. The system is initially prepared in the Werner state. The effects of the purity of the initial state and the DM coupling parameter on the evolution of entanglement are investigated. A necessary and sufficient condition for the appearance of the entanglement sudden death (ESD) phenomenon is deduced. The result shows that ESD always occurs if the initial state is sufficiently impure for a given coupling parameter, or if the DM interaction is sufficiently strong for a given initial state. Moreover, the corresponding critical values are calculated.
Guiavarc'h, Yann P; van Loey, Ann M; Hendrickx, Marc E
2005-02-01
The possibilities and limitations of single- and multicomponent time-temperature integrators (TTIs) for evaluating the impact of thermal processes on a target food attribute whose z-value (z_target) differs from the z-value(s) of the TTI (z_TTI) are far from sufficiently documented. In this study, several thousand time-temperature profiles were generated by heat-transfer simulations based on a wide range of product and process thermal parameters, considering a z_target value of 10 °C and a reference temperature of 121.1 °C, both currently used to assess the safety of food sterilization processes. These simulations included 15 different target process values F(z = 10 °C, T_ref = 121.1 °C) in the range 3 to 60 min. Integrating the time-temperature profiles with z_TTI values of 5.5 to 20.5 °C in steps of 1 °C allowed generation of a large database containing, for each combination of product and process parameters, the correction factor to apply to the process value F_TTI derived from a single- or multicomponent TTI in order to obtain the target process value. The tabulated and graphical results clearly demonstrate that multicomponent TTIs with z-values close to 10 °C are an extremely efficient alternative when a single-component TTI with a z-value of 10 °C is not available. In particular, a two-component TTI with z1 and z2 values respectively above and below the z_target value (10 °C in this study) would be the best option for developing a TTI to assess the safety of sterilized foods. Whatever process and product parameters are used, such a TTI allows proper evaluation of the target process value.
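The process value that both the target attribute and the TTI integrate has the standard lethality form F = ∫ 10^((T − T_ref)/z) dt. A minimal sketch with an invented time-temperature profile, using the study's T_ref = 121.1 °C and z_target = 10 °C:

```python
import numpy as np

# Process-value (lethality) integration: F = integral of 10**((T - Tref)/z) dt,
# with Tref = 121.1 C and z = 10 C as in the study. The profile is invented.

def process_value(t_min, temp_c, z=10.0, t_ref=121.1):
    """F-value in equivalent minutes at t_ref, by trapezoidal integration."""
    t = np.asarray(t_min, dtype=float)
    lethality = 10.0 ** ((np.asarray(temp_c, dtype=float) - t_ref) / z)
    return float(np.sum(0.5 * (lethality[1:] + lethality[:-1]) * np.diff(t)))

# Hypothetical retort profile: heat up, hold at 121.1 C, cool down.
t = [0.0, 10.0, 15.0, 25.0, 30.0, 40.0]          # minutes
T = [20.0, 100.0, 121.1, 121.1, 100.0, 20.0]     # degrees C

print(process_value(t, T))  # the 10 min hold alone contributes F = 10 min
```

A TTI with a z-value different from z_target integrates the same profile with its own z, which is why the correction factors of the study are needed.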
NASA Astrophysics Data System (ADS)
Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-09-01
In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges, generally highly sensitive to human-induced excitation. Due to the inherent random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses over a time window. A large number of time windows is therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method to evaluate the statistics is based on the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge. Small differences in the instantaneous peak value were found for the method using second-order statistics. Afterwards, a TMD tuned to reduce the peak acceleration to a comfort value was added to the structure. The comparison between both methods is made and the accuracy is verified. It is found that the TMD parameters are tuned sufficiently well, and good agreement between the two methods is found for the estimation of the instantaneous peak response of a strongly damped structure.
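The wind-engineering estimate referred to in the abstract is commonly implemented with the Davenport peak factor: the expected peak of a zero-mean Gaussian response over a window T is g·σ, with g built from the standard deviation σ and a characteristic frequency ν. A sketch with invented footbridge numbers:

```python
import math

# Expected instantaneous peak from second-order statistics (Davenport peak
# factor, as used in wind engineering): peak ~= g * sigma with
# g = sqrt(2 ln(nu T)) + 0.5772 / sqrt(2 ln(nu T)). Numbers are invented.

def peak_factor(nu_hz, duration_s):
    """Davenport peak factor for a narrow-band Gaussian response."""
    m = math.sqrt(2.0 * math.log(nu_hz * duration_s))
    return m + 0.5772 / m

def expected_peak(sigma, nu_hz, duration_s):
    """Mean instantaneous peak of a zero-mean response over the window."""
    return peak_factor(nu_hz, duration_s) * sigma

# A footbridge mode near 2 Hz, a 10-minute window, RMS acceleration 0.1 m/s^2:
print(expected_peak(0.1, 2.0, 600.0))  # roughly 0.39 m/s^2
```

The attraction of this estimate is that it needs only σ and ν, avoiding the many time-window simulations otherwise required for the peak statistics.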
36 CFR 1210.23 - Cost sharing or matching.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... 1210.23 Section 1210.23 Parks, Forests, and Public Property NATIONAL ARCHIVES AND RECORDS... records at the time of donation. (2) The current fair market value. However, when there is sufficient justification, the NHPRC may approve the use of the current fair market value of the donated property, even if...
Best Practices Inquiry: A Multidimensional, Value-Critical Framework
ERIC Educational Resources Information Center
Petr, Christopher G.; Walter, Uta M.
2005-01-01
This article offers a multidimensional framework that broadens current approaches to "best practices" inquiry to include (1) the perspectives of both the consumers of services and professional practitioners and (2) a value-based critique. The predominant empirical approach to best practices inquiry is a necessary, but not sufficient, component of…
Changes in the Values and Life Style Preferences of University Students.
ERIC Educational Resources Information Center
Thompson, Kenrick S.
1981-01-01
The values and life-style preferences of 1978-79 university students are compared with those studied 30 years ago to determine whether university students' preferential rankings of C. Morris's "Ways to Live" (self-sufficient, carefree, etc.) have changed significantly. Possible reasons for differences between the generations are…
Native Values Take Root in Plains Soil.
ERIC Educational Resources Information Center
Merritt, Judy
1993-01-01
Describes a cooperative organic gardening program between Oglala Lakota College (South Dakota) and the University of Bonn (Germany) that is being developed into a two-year Associate Degree in organic agriculture. The program combines traditional values and scientific knowledge with the goal of promoting self-sufficiency and a healthier lifestyle…
Complex extreme learning machine applications in terahertz pulsed signals feature sets.
Yin, X-X; Hadjiloucas, S; Zhang, Y
2014-11-01
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. First, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples which, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis.
Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
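A minimal sketch of the extreme learning machine idea with complex-valued inputs (a fixed random complex hidden layer, then a single least-squares solve for the output weights) is given below on invented toy data; the modulus activation and the synthetic "spectra" are simplifications for illustration, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy complex-valued extreme learning machine: a fixed random complex hidden
# layer, then one least-squares solve for the output weights.

def elm_train(X, y, n_hidden=64):
    W = rng.normal(size=(X.shape[1], n_hidden)) \
        + 1j * rng.normal(size=(X.shape[1], n_hidden))
    H = np.tanh(np.abs(X @ W))        # real features from complex projections
    beta = np.linalg.pinv(H) @ y      # one-shot least squares, no iteration
    return W, beta

def elm_predict(X, W, beta):
    return np.tanh(np.abs(X @ W)) @ beta

# Two toy classes of unit-modulus complex feature vectors that differ only
# in their relative phase structure (flat phase vs. a phase ramp).
n, d = 200, 8
y = (np.arange(n) >= n // 2).astype(float)
ramp = np.arange(d) * np.pi / 4
phases = y[:, None] * ramp[None, :] + 0.1 * rng.normal(size=(n, d))
X = np.exp(1j * phases)

W, beta = elm_train(X, y)
pred = (elm_predict(X, W, beta) > 0.5).astype(float)
print((pred == y).mean())  # training accuracy on this separable toy problem
```

The single least-squares solve is what gives extreme learning machines the training-speed advantage over iteratively trained classifiers mentioned in the abstract.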
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozen, C.; Norris, Adrianne; Land, Miriam L
2008-01-01
This work describes differential effects of solvent in complexes of the aminoglycoside phosphotransferase(3′)-IIIa (APH) with different aminoglycosides and the detection of changes in solvent structure at specific sites away from substrates. Binding of kanamycins to APH occurs with a larger negative ΔH in H2O relative to D2O (ΔΔH(H2O−D2O) < 0), while the reverse is true for neomycins. Unusually large negative ΔCp values were observed for binding of aminoglycosides to APH. ΔCp for the APH-neomycin complex was −1.6 kcal·mol⁻¹·deg⁻¹. A break at 30 °C was observed in the APH-kanamycin complex, yielding ΔCp values of −0.7 kcal·mol⁻¹·deg⁻¹ and −3.8 kcal·mol⁻¹·deg⁻¹ below and above 30 °C, respectively. Neither the change in accessible surface area (ΔASA) nor contributions from heats of ionization were sufficient to explain the large negative ΔCp values. Most significantly, 15N-1H HSQC experiments showed that temperature-dependent shifts of the backbone amide protons of Leu 88, Ser 91, Cys 98, and Leu 143 revealed a break at 30 °C only in the APH-kanamycin complex in spectra collected between 21 °C and 38 °C. These amino acids represent solvent reorganization sites that experience a change in solvent structure in their immediate environment as structurally different ligands bind to the enzyme. These residues were away from the substrate binding site and distributed in three hydrophobic patches in APH. Overall, our results show that a large number of factors affect ΔCp, and binding of structurally different ligand groups causes different solvent structure in the active site as well as differentially affecting specific sites away from the ligand binding site.
NASA Astrophysics Data System (ADS)
Zheng, Qin; Yang, Zubin; Sha, Jianxin; Yan, Jun
2017-02-01
In predictability problem research, the conditional nonlinear optimal perturbation (CNOP) describes the initial perturbation that satisfies a certain constraint condition and causes the largest prediction error at the prediction time. The CNOP has been successfully applied to estimate the lower bound of maximum predictable time (LBMPT). Generally, CNOPs are calculated by a gradient descent algorithm based on the adjoint model, which is called ADJ-CNOP. This study, through the two-dimensional Ikeda model, investigates the impacts of nonlinearity on ADJ-CNOP and the corresponding precision problems when using ADJ-CNOP to estimate the LBMPT. Our conclusions are that (1) when the initial perturbation is large or the prediction time is long, the strong nonlinearity of the dynamical model in the prediction variable will lead to failure of the ADJ-CNOP method, and (2) when the objective function has multiple extreme values, ADJ-CNOP has a large probability of producing local CNOPs, hence yielding a false estimate of the LBMPT. Furthermore, the particle swarm optimization (PSO) algorithm, a kind of intelligent algorithm, is introduced to solve this problem; the method using PSO to compute the CNOP is called PSO-CNOP. The results of numerical experiments show that even with a large initial perturbation and long prediction time, or when the objective function has multiple extreme values, PSO-CNOP can always obtain the global CNOP. Since the PSO algorithm is a heuristic, population-based search algorithm, it can overcome the impact of nonlinearity and the disturbance from multiple extrema of the objective function. In addition, to check the estimation accuracy of the LBMPT presented by PSO-CNOP and ADJ-CNOP, we partition the constraint domain of initial perturbations into sufficiently fine grid meshes and take the LBMPT obtained by the filtering method as a benchmark. The results show that the estimate presented by PSO-CNOP is closer to the true value than that of ADJ-CNOP as the forecast time increases.
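The advantage a population-based search offers over adjoint gradient descent when the objective has multiple extrema can be sketched with a toy PSO on an invented two-peak objective over a bounded perturbation domain (not the Ikeda-model CNOP itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy particle swarm optimization locating the global maximum of a
# two-extremum objective over a bounded domain. A gradient method started
# in the wrong basin would stall at the local peak; the swarm does not.

def objective(x):
    # Global peak (height 1.0) near (2, 2); local peak (height 0.6) near (-2, -2).
    a = np.exp(-0.5 * np.sum((x - 2.0) ** 2, axis=-1))
    b = np.exp(-0.5 * np.sum((x + 2.0) ** 2, axis=-1))
    return a + 0.6 * b

n_particles, n_dim = 60, 2
lo, hi = -3.0, 3.0                       # constraint domain
x = rng.uniform(lo, hi, (n_particles, n_dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(200):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)           # keep particles inside the domain
    val = objective(x)
    better = val > pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmax(pbest_val)].copy()

print(gbest)  # should settle near the global maximum at (2, 2)
```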
Scaling of normalized mean energy and scalar dissipation rates in a turbulent channel flow
NASA Astrophysics Data System (ADS)
Abe, Hiroyuki; Antonia, Robert Anthony
2011-05-01
Non-dimensional parameters for the mean energy and scalar dissipation rates Cɛ and Cɛθ are examined using direct numerical simulation (DNS) data obtained in a fully developed turbulent channel flow with a passive scalar (Pr = 0.71) at several values of the Kármán (Reynolds) number h+. It is shown that Cɛ and Cɛθ are approximately equal in the near-equilibrium region (viz., y+ = 100 to y/h = 0.7) where the production and dissipation rates of either the turbulent kinetic energy or scalar variance are approximately equal and the magnitudes of the diffusion terms are negligibly small. The magnitudes of Cɛ and Cɛθ are about 2 and 1 in the logarithmic and outer regions, respectively, when h+ is sufficiently large. The former value is about the same for the channel, pipe, and turbulent boundary layer, reflecting the similarity between the mean velocity and temperature distributions among these three canonical flows. The latter value is, on the other hand, about twice as large as in homogeneous isotropic turbulence due to the existence of the large-scale u structures in the channel. The behaviour of Cɛ and Cɛθ impacts on turbulence modeling. In particular, the similarity between Cɛ and Cɛθ leads to a simple relation for the scalar variance to turbulent kinetic energy time-scale ratio, an important ingredient in the eddy diffusivity model. This similarity also yields a relation between the Taylor and Corrsin microscales and analogous relations, in terms of h+, for the Taylor microscale Reynolds number and Corrsin microscale Peclet number. This dependence is reasonably well supported by both the DNS data at small to moderate h+ and the experimental data of Comte-Bellot [Ph. D. thesis (University of Grenoble, 1963)] at larger h+. It does not however apply to a turbulent boundary layer where the mean energy dissipation rate, normalized on either wall or outer variables, is about 30% larger than for the channel flow.
Economic Assessment of Hydrogen Technologies Participating in California Electricity Markets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eichman, Joshua; Townsend, Aaron; Melaina, Marc
As the electric sector evolves and increasing amounts of variable renewable generation are installed on the system, there are greater needs for system flexibility and sufficient capacity, and greater concern for overgeneration from renewable sources not well matched in time with electric loads. Hydrogen systems have the potential to support the grid in each of these areas. However, limited information is available about the economic competitiveness of hydrogen system configurations. This paper quantifies the value for hydrogen energy storage and demand response systems to participate in select California wholesale electricity markets using 2012 data. For hydrogen systems and conventional storage systems (e.g., pumped hydro, batteries), the yearly revenues from energy, ancillary service, and capacity markets are compared to the yearly cost to establish economic competitiveness. Hydrogen systems can present a positive value proposition for current markets. Three main findings include: (1) For hydrogen systems participating in California electricity markets, producing and selling hydrogen was found to be much more valuable than producing and storing hydrogen to later produce electricity; therefore, systems should focus on producing and selling hydrogen and opportunistically providing ancillary services and arbitrage. (2) Tighter integration with electricity markets generates greater revenues (i.e., systems that participate in multiple markets receive the highest revenue). (3) More storage capacity, in excess of what is required to provide diurnal shifting, does not increase competitiveness in current California wholesale energy markets. As more variable renewable generation is installed, the importance of long-duration storage may become apparent in the energy price or through additional markets, but currently there is not a sufficiently large price differential between days to generate enough revenue to offset the cost of additional storage. Future work will involve expanding to consider later-year data and multiple regions to establish more generalized results.
Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan
2015-09-01
To estimate the inter-rater/intra-rater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations, and to compare the results with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. Uterine malformations (n = 50 patients, consecutively included) and normal uterus (n = 62 women, randomly selected) constituted the study sample. These were classified from single-volume, real-time three-dimensional transvaginal (or, in 4 virgin cases, transrectal) ultrasonography findings, assessed by an expert rater according to the ESHRE-ESGE criteria. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. Outcome measures were κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ > 0.60 and > 0.80), but interpretation against clinically relevant cutoffs of κ showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and to biased research interpretations. Use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if sufficient reliability can be confirmed in real time in a large sample. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
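The agreement statistics underlying the reliability comparison can be sketched with Cohen's kappa for two raters on invented ratings (the study's analysis was more elaborate, but the chance-corrected-agreement idea is the same):

```python
import numpy as np

# Cohen's kappa for two raters: observed agreement corrected for the
# agreement expected by chance. The ratings below are invented.

def cohens_kappa(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = float(np.mean(r1 == r2))                       # observed agreement
    pe = float(sum(np.mean(r1 == c) * np.mean(r2 == c)  # chance agreement
                   for c in cats))
    return (po - pe) / (1.0 - pe)

rater1 = ["normal", "septate", "normal", "bicornuate", "septate", "normal"]
rater2 = ["normal", "septate", "normal", "septate", "septate", "normal"]
print(cohens_kappa(rater1, rater2))  # 5/7 ~= 0.714 for these ratings
```

A κ of 0.714 would count as "substantial" under the standardized interpretation yet fall well short of the κ > 0.90 clinical-use threshold discussed in the abstract.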
Power-law expansion of the Universe from the bosonic Lorentzian type IIB matrix model
NASA Astrophysics Data System (ADS)
Ito, Yuta; Nishimura, Jun; Tsuchiya, Asato
2015-11-01
Recent studies on the Lorentzian version of the type IIB matrix model show that a (3+1)D expanding universe emerges dynamically from the (9+1)D space-time predicted by superstring theory. Here we study a bosonic matrix model obtained by omitting the fermionic matrices. With the adopted simplification and the use of a large-scale parallel computer, we are able to perform Monte Carlo calculations with matrix size up to N = 512, which is twenty times larger than that used previously for studies of the original model. When the matrix size is larger than some critical value N_c ≃ 110, we find that a (3+1)D expanding universe emerges dynamically with a clear large-N scaling property. Furthermore, the observed increase of the spatial extent with time t at sufficiently late times is consistent with a power-law behavior t^{1/2}, reminiscent of the expanding behavior of the Friedmann-Robertson-Walker universe in the radiation-dominated era. We discuss possible implications of this result for the original supersymmetric model including fermionic matrices.
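The t^{1/2} behavior quoted above can be checked for any extent-versus-time series by a least-squares fit in log-log coordinates. A minimal sketch with synthetic data (the prefactor 1.3 and the time values are arbitrary):

```python
import numpy as np

# Synthetic "spatial extent" data following R(t) = c * t**0.5 (illustrative only)
t = np.array([4.0, 8.0, 16.0, 32.0, 64.0])
R = 1.3 * t ** 0.5

# Fit log R = alpha * log t + const; alpha estimates the power-law exponent
alpha, _ = np.polyfit(np.log(t), np.log(R), 1)
print(round(alpha, 3))  # → 0.5
```

In practice the fit would be restricted to sufficiently late times, where the power-law regime is expected to hold.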
Heatley, M
2001-01-01
Aims—To establish the value of examining additional histological levels in cone biopsy and large loop excision of the transformation zone (LLETZ) specimens of cervix. Methods—Three deeper levels were examined from 200 consecutive cone biopsy and LLETZ specimens reported by a single pathologist. Results—Examination of the first deeper level resulted in cervical intraepithelial neoplasia (CIN) being identified for the first time in five cases and in CIN1 being upgraded in five more. Invasive cancer was discovered in two cases that had shown high grade CIN initially. Conclusion—Examination of a single further level appears to be sufficient in those patients in whom a specimen is compromised because epithelium including the squamocolumnar junction is missing, or if there is a discrepancy between the histological findings and the preceding colposcopic or cytological history. If invasive disease is suspected on the basis of the cytological, colposcopic, or histological features, one or preferably two further levels should be examined. Key Words: cervix uteri • quality control • diagnosis PMID:11477125
Ferromagnetic glass-coated microwires with good heating properties for magnetic hyperthermia
Talaat, A.; Alonso, J.; Zhukova, V.; Garaio, E.; García, J. A.; Srikanth, H.; Phan, M. H.; Zhukov, A.
2016-12-01
The heating properties of Fe71.7Si11B13.4Nb3Ni0.9 amorphous glass-coated microwires are explored for prospective applications in magnetic hyperthermia. We show that a single 5 mm long wire is able to produce a sufficient amount of heat, with the specific loss power (SLP) reaching a value as high as 521 W/g for an AC field of 700 Oe and a frequency of 310 kHz. The large SLP is attributed to the rectangular hysteresis loop resulting from a peculiar domain structure of the microwire. For an array of parallel microwires, we have observed an SLP improvement by one order of magnitude: 950 W/g for an AC field of 700 Oe. The magnetostatic interaction strength essential in the array of wires can be manipulated by varying the distance between the wires, showing a decreasing trend in SLP with increasing wire separation. The largest SLP is obtained when the wires are aligned along the direction of the AC field. The origin of the large SLP and relevant heating mechanisms are discussed.
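For a rectangular M-H loop, the standard hysteresis-loss estimate gives a loop area of about 4·Hc·Ms per cycle (CGS units), and SLP = frequency × loop area / mass density. A minimal sketch of that arithmetic; all parameter values below are illustrative assumptions, not numbers from the paper:

```python
# Hysteresis-loss estimate for a rectangular M-H loop (CGS units).
# All numbers below are illustrative assumptions, not values from the paper.
Hc = 50.0        # coercive field, Oe
Ms = 800.0       # saturation magnetization, emu/cm^3
rho = 7.4        # mass density, g/cm^3
f = 310e3        # AC field frequency, Hz

loop_area = 4.0 * Hc * Ms            # erg/cm^3 per cycle (rectangular loop)
slp = loop_area * f / rho * 1e-7     # W/g  (1 erg = 1e-7 J)
print(round(slp, 1))  # → 670.3
```

The estimate only applies while the applied field exceeds the switching field, which is why rectangular-loop wires are attractive: essentially the full loop area is swept every cycle.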
SHORT-WAVELENGTH MAGNETIC BUOYANCY INSTABILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizerski, K. A.; Davies, C. R.; Hughes, D. W., E-mail: kamiz@igf.edu.pl, E-mail: tina@maths.leeds.ac.uk, E-mail: d.w.hughes@leeds.ac.uk
2013-04-01
Magnetic buoyancy instability plays an important role in the evolution of astrophysical magnetic fields. Here we revisit the problem introduced by Gilman of the short-wavelength linear stability of a plane layer of compressible isothermal fluid permeated by a horizontal magnetic field of strength decreasing with height. Dissipation of momentum and magnetic field is neglected. By the use of a Rayleigh-Schroedinger perturbation analysis, we explain in detail the limit in which the transverse horizontal wavenumber of the perturbation, denoted by k, is large (i.e., short horizontal wavelength) and show that the fastest growing perturbations become localized in the vertical direction as k is increased. The growth rates are determined by a function of the vertical coordinate z since, in the large-k limit, the eigenmodes are strongly localized in the vertical direction. We consider in detail the case of two-dimensional perturbations varying in the directions perpendicular to the magnetic field, which, for sufficiently strong field gradients, are the most unstable. The results of our analysis are backed up by comparison with a series of initial value problems. Finally, we extend the analysis to three-dimensional perturbations.
On the influence of additive and multiplicative noise on holes in dissipative systems.
Descalzi, Orazio; Cartes, Carlos; Brand, Helmut R
2017-05-01
We investigate the influence of noise on deterministically stable holes in the cubic-quintic complex Ginzburg-Landau equation. Inspired by experimental possibilities, we specifically study the influence of two types of noise, additive noise delta-correlated in space and spatially homogeneous multiplicative noise, on the formation of π-holes and 2π-holes. Our results include the following main features. For large enough additive noise, we always find a transition to the noisy version of the spatially homogeneous finite amplitude solution, while for sufficiently large multiplicative noise, a collapse occurs to the zero amplitude solution. The latter type of behavior, while unexpected deterministically, can be traced back to a characteristic feature of multiplicative noise: the zero solution acts as the analogue of an absorbing boundary; once trapped at zero, the system cannot escape. For 2π-holes, which exist deterministically over a fairly small range of values of subcriticality, one can induce a transition to a π-hole (for additive noise) or to a noise-sustained pulse (for multiplicative noise). This observation opens the possibility of noise-induced switching back and forth from and to 2π-holes.
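The absorbing role of the zero solution under multiplicative noise can be seen already in a zero-dimensional caricature of the full stochastic PDE: dA = (A - A^3) dt plus either additive or multiplicative noise, integrated by Euler-Maruyama. This toy model and its parameters are illustrative, not the equation studied in the paper.

```python
import random

def euler_maruyama(a0, sigma, multiplicative, steps=2000, dt=1e-3, seed=1):
    """Integrate dA = (A - A^3) dt + noise with the Euler-Maruyama scheme."""
    rng = random.Random(seed)
    a = a0
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)          # Wiener increment
        noise = sigma * a * dw if multiplicative else sigma * dw
        a += (a - a ** 3) * dt + noise
    return a

# Zero is absorbing under multiplicative noise: at A = 0 both the drift and
# the noise term vanish, so the trajectory can never escape.
print(euler_maruyama(0.0, 0.5, multiplicative=True))            # stays exactly 0.0
print(abs(euler_maruyama(0.0, 0.5, multiplicative=False)) > 0)  # additive noise kicks it off zero
```

The same mechanism operates pointwise in the spatially extended equation, which is why a collapse to the zero-amplitude solution occurs only for multiplicative noise.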
Multilevel Methods for Elliptic Problems with Highly Varying Coefficients on Nonaligned Coarse Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheichl, Robert; Vassilevski, Panayot S.; Zikatanov, Ludmil T.
2012-06-21
We generalize the analysis of classical multigrid and two-level overlapping Schwarz methods for 2nd order elliptic boundary value problems to problems with large discontinuities in the coefficients that are not resolved by the coarse grids or the subdomain partition. The theoretical results provide a recipe for designing hierarchies of standard piecewise linear coarse spaces such that the multigrid convergence rate and the condition number of the Schwarz preconditioned system do not depend on the coefficient variation or on any mesh parameters. One assumption we have to make is that the coarse grids are sufficiently fine in the vicinity of cross points or where regions with large diffusion coefficients are separated by a narrow region where the coefficient is small. We do not need to align them with possible discontinuities in the coefficients. The proofs make use of novel stable splittings based on weighted quasi-interpolants and weighted Poincaré-type inequalities. Finally, numerical experiments are included that illustrate the sharpness of the theoretical bounds and the necessity of the technical assumptions.
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Barbielini, G; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2012-01-01
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron- plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between approx. 6 and approx. 13 GeV with an estimated uncertainty of approx. 2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
ERIC Educational Resources Information Center
Spencer, Bryden
2016-01-01
Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…
DEVELOPMENT OF STANDARDIZED LARGE RIVER BIOASSESSMENT PROTOCOLS (LR-BP) FOR FISH ASSEMBLAGES
We conducted research comparing several methods currently in use for the bioassessment and monitoring of fish and benthic macroinvertebrate assemblages for large rivers. Fish data demonstrate that electrofishing 1000 m of shoreline is sufficient for bioassessments on boatable ri...
The Continuum of Health Professions
Jensen, Clyde B.
2015-01-01
The large number of health care professions with overlapping scopes of practice is intimidating to students, confusing to patients, and frustrating to policymakers. As abundant and diverse as the hundreds of health care professions are, they possess sufficient numbers of common characteristics to warrant their placement on a common continuum of health professions that permits methodical comparisons. From 2009–2012, the author developed and delivered experimental courses at 2 community colleges for the purposes of creating and validating a novel method for comparing health care professions. This paper describes the bidirectional health professions continuum that emerged from these courses and its potential value in helping students select a health care career, motivating health care providers to seek interprofessional collaboration, assisting patients with the selection of health care providers, and helping policymakers to better understand the health care professions they regulate. PMID:26770147
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2010-01-01
The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points--the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
Ballard, Andrew; Ahmad, Hiwa O.; Narduolo, Stefania; Rosa, Lucy; Chand, Nikki; Cosgrove, David A.; Varkonyi, Peter; Asaad, Nabil; Tomasi, Simone
2017-01-01
Abstract Racemization has a large impact upon the biological properties of molecules but the chemical scope of compounds with known rate constants for racemization in aqueous conditions was hitherto limited. To address this remarkable blind spot, we have measured the kinetics for racemization of 28 compounds using circular dichroism and 1H NMR spectroscopy. We show that rate constants for racemization (measured by ourselves and others) correlate well with deprotonation energies from quantum mechanical (QM) and group contribution calculations. Such calculations thus provide predictions of the second‐order rate constants for general‐base‐catalyzed racemization that are usefully accurate. When applied to recent publications describing the stereoselective synthesis of compounds of purported biological value, the calculations reveal that racemization would be sufficiently fast to render these expensive syntheses pointless. PMID:29072355
Dark matter and cosmological nucleosynthesis
NASA Technical Reports Server (NTRS)
Schramm, D. N.
1986-01-01
Existing dark matter problems, i.e., dynamics, galaxy formation and inflation, are considered, along with a model which proposes dark baryons as the bulk of missing matter in a fractal universe. It is shown that no combination of dark, nonbaryonic matter can either provide a cosmological density parameter value near unity or, as in the case of high energy neutrinos, allow formation of condensed matter at epochs when quasars already existed. The possibility that correlations among galactic clusters are scale-free is discussed. Such a distribution of matter would yield a fractal dimension of 1.2, close to a one-dimensional universe. Biasing, cosmic superstrings, and percolated explosions and hot dark matter are theoretical approaches that would satisfy the D = 1.2 fractal model of the large-scale structure of the universe and which would also allow sufficient dark matter in halos to close the universe.
The mass media destabilizes the cultural homogenous regime in Axelrod's model
NASA Astrophysics Data System (ADS)
Peres, Lucas R.; Fontanari, José F.
2010-02-01
An important feature of Axelrod's model for culture dissemination or social influence is the emergence of many multicultural absorbing states, despite the fact that the local rules that specify the agents interactions are explicitly designed to decrease the cultural differences between agents. Here we re-examine the problem of introducing an external, global interaction—the mass media—in the rules of Axelrod's model: in addition to their nearest neighbors, each agent has a certain probability p to interact with a virtual neighbor whose cultural features are fixed from the outset. Most surprisingly, this apparently homogenizing effect actually increases the cultural diversity of the population. We show that, contrary to previous claims in the literature, even a vanishingly small value of p is sufficient to destabilize the homogeneous regime for very large lattice sizes.
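The interaction rule described above (nearest-neighbor cultural influence, plus a probability p of interacting with a fixed media vector) can be sketched as follows. Lattice size, the number of features F, traits q, and p are arbitrary toy values, not the parameters used in the paper.

```python
import random

def axelrod_step(grid, L, F, media, p, rng):
    """One interaction event of Axelrod's model with a mass-media field."""
    i, j = rng.randrange(L), rng.randrange(L)
    agent = grid[i][j]
    if rng.random() < p:                      # interact with the fixed media vector
        partner = media
    else:                                     # otherwise with a random nearest neighbor
        di, dj = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        partner = grid[(i + di) % L][(j + dj) % L]
    shared = sum(a == b for a, b in zip(agent, partner))
    # Interaction happens with probability equal to the cultural overlap;
    # the agent then copies one of the features on which the two differ.
    if 0 < shared < F and rng.random() < shared / F:
        k = rng.choice([k for k in range(F) if agent[k] != partner[k]])
        agent[k] = partner[k]

# Tiny illustrative run
rng = random.Random(0)
L, F, q, p = 6, 3, 4, 0.01
media = [0] * F
grid = [[[rng.randrange(q) for _ in range(F)] for _ in range(L)] for _ in range(L)]
for _ in range(20000):
    axelrod_step(grid, L, F, media, p, rng)
```

Measuring the size of the largest cultural domain as a function of p and L would reproduce the destabilization effect the abstract reports; that analysis is omitted here.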
NASA Astrophysics Data System (ADS)
Ezelle, Ralph Wayne, Jr.
2011-12-01
This study examines auditing of energy firms prior and post Sarbanes Oxley Act of 2002. The research explores factors impacting the asset adjusted audit fee of oil and gas companies and specifically examines the effect of the Sarbanes Oxley Act. This research analyzes multiple year audit fees of the firms engaged in the oil and gas industry. Pooled samples were created to improve statistical power with sample sizes sufficient to test for medium and large effect size. The Sarbanes Oxley Act significantly increases a firm's asset adjusted audit fees. Additional findings are that part of the variance in audit fees was attributable to the market value of the enterprise, the number of subsidiaries, the receivables and inventory, debt ratio, non-profitability, and receipt of a going concern report.
Frontier areas and resource assessment: case of the 1002 area of the Alaska North Slope
Attanasi, E.D.; Schuenemeyer, John H.
2002-01-01
The U.S. Geological Survey's 1998 assessment of the 1002 Area of the Arctic National Wildlife Refuge significantly revised previous estimates of the area's petroleum supply potential. The mean (or expected) value of technically recoverable undiscovered oil for the Study Area (Federal 1002 Area, adjacent State waters, and Native Lands) is estimated at 10.4 billion barrels of oil (BBO) and for the Federal 1002 Area the mean is 7.7 BBO. Accumulation sizes containing the oil are expected to be sufficiently large to be of economic interest. At a market price of $21 per barrel, 6 BBO of oil in the Study area is expected to be economic. The Assessment's methodology, results, and the reasons for the significant change in assessments are reviewed. In the concluding section, policy issues raised by the assessment are discussed.
NASA Technical Reports Server (NTRS)
Gordon, H. R.
1979-01-01
The radiative transfer equation is modified to include the effect of fluorescent substances and solved in the quasi-single scattering approximation for a homogeneous ocean containing fluorescent particles with wavelength independent quantum efficiency and a Gaussian shaped emission line. The results are applied to the in vivo fluorescence of chlorophyll a (in phytoplankton) in the ocean to determine if the observed quantum efficiencies are large enough to explain the enhancement of the ocean's diffuse reflectance near 685 nm in chlorophyll rich waters without resorting to anomalous dispersion. The computations indicate that the required efficiencies are sufficiently low to account completely for the enhanced reflectance. The validity of the theory is further demonstrated by deriving values for the upwelling irradiance attenuation coefficient at 685 nm which are in close agreement with the observations.
NASA Technical Reports Server (NTRS)
Hansman, R. J., Jr.
1982-01-01
The feasibility of computerized simulation of the physics of advanced microwave anti-icing systems, which preheat impinging supercooled water droplets prior to impact, was investigated. Theoretical and experimental work performed to create a physically realistic simulation is described. The behavior of the absorption cross section for melting ice particles was measured by a resonant cavity technique and found to agree with theoretical predictions. Values of the dielectric parameters of supercooled water were measured by a similar technique at lambda = 2.82 cm down to -17 C. The hydrodynamic behavior of accelerated water droplets was studied photographically in a wind tunnel. Droplets were found to initially deform as oblate spheroids and to eventually become unstable and break up in Bessel function modes for large values of acceleration or droplet size. This confirms the theory as to the maximum stable droplet size in the atmosphere. A computer code which predicts droplet trajectories in an arbitrary flow field was written and confirmed experimentally. The results were consolidated into a simulation to study the heating by electromagnetic fields of droplets impinging onto an object such as an airfoil. It was determined that there is sufficient time to heat droplets prior to impact for typical parameter values. Design curves for such a system are presented.
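A droplet-trajectory integrator of the kind mentioned above can be sketched under Stokes-drag assumptions in two dimensions, with gravity neglected. All parameter values and the uniform flow field below are illustrative, not those of the study.

```python
import math

def droplet_trajectory(x0, v0, flow, d=50e-6, rho_w=1000.0, mu=1.8e-5,
                       dt=1e-4, steps=500):
    """Integrate a droplet through an arbitrary 2-D flow field with Stokes drag.
    flow(x) returns the local air velocity; gravity is neglected for simplicity."""
    m = rho_w * math.pi * d ** 3 / 6.0       # droplet mass, kg
    drag = 3.0 * math.pi * mu * d            # Stokes drag coefficient, kg/s
    x, v = list(x0), list(v0)
    path = [tuple(x)]
    for _ in range(steps):
        u = flow(x)
        a = [drag * (u[i] - v[i]) / m for i in range(2)]  # drag acceleration
        v = [v[i] + a[i] * dt for i in range(2)]          # explicit Euler step
        x = [x[i] + v[i] * dt for i in range(2)]
        path.append(tuple(x))
    return path

# Uniform free stream: the droplet relaxes toward the air velocity over
# the response time tau = rho_w * d**2 / (18 * mu), about 8 ms here.
path = droplet_trajectory([0.0, 0.0], [0.0, 0.0], lambda x: (80.0, 0.0))
```

Passing a nonuniform `flow` (e.g., potential flow around an airfoil) turns the same loop into an impingement calculation; the residence time before impact is what bounds the available microwave heating time.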
NASA Astrophysics Data System (ADS)
Holman, Benjamin R.
In recent years, revolutionary "hybrid" or "multi-physics" methods of medical imaging have emerged. By combining two or three different types of waves these methods overcome limitations of classical tomography techniques and deliver otherwise unavailable, potentially life-saving diagnostic information. Thermoacoustic (and photoacoustic) tomography is the most developed multi-physics imaging modality. Thermo- and photoacoustic tomography require reconstructing initial acoustic pressure in a body from time series of pressure measured on a surface surrounding the body. For the classical case of free space wave propagation, various reconstruction techniques are well known. However, some novel measurement schemes place the object of interest between reflecting walls that form a de facto resonant cavity. In this case, known methods cannot be used. In chapter 2 we present a fast iterative reconstruction algorithm for measurements made at the walls of a rectangular reverberant cavity with a constant speed of sound. We prove the convergence of the iterations under a certain sufficient condition, and demonstrate the effectiveness and efficiency of the algorithm in numerical simulations. In chapter 3 we consider the more general problem of an arbitrarily shaped resonant cavity with a non-constant speed of sound and present the gradual time reversal method for computing solutions to the inverse source problem. It consists in solving back in time on the interval [0, T] the initial/boundary value problem for the wave equation, with the Dirichlet boundary data multiplied by a smooth cutoff function. If T is sufficiently large one obtains a good approximation to the initial pressure; in the limit of large T such an approximation converges (under certain conditions) to the exact solution.
Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates
NASA Astrophysics Data System (ADS)
Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.
2010-04-01
Single-photon detection technologies in conjunction with low laser illumination powers allow for the eye-safe acquisition of time-of-flight range information on non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwelling times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed for depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time towards an object is extended beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser. A cyclic, fast-Fourier supported analysis algorithm is used to search for the pattern within return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing for unambiguous distances of several kilometers. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented. Numerical simulations are performed to investigate the relative merits of the technique.
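The pattern-search idea, recovering the cyclic delay of a pseudo-random trigger bitstream within the return-photon histogram via FFT-based correlation, can be sketched as follows. The pattern length, delay, and background level are toy values, not the 2 GHz system parameters.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4096
pattern = rng.integers(0, 2, n).astype(float)     # pseudo-random trigger bitstream

true_delay = 1234                                  # "unknown" round-trip delay, in bins
returns = np.roll(pattern, true_delay)             # idealized return-photon histogram
returns += rng.poisson(0.2, n)                     # uncorrelated background counts

# Cyclic cross-correlation via the FFT correlation theorem:
# corr[k] = sum_i returns[i] * pattern[i - k], maximal at k = true_delay
corr = np.fft.ifft(np.fft.fft(returns) * np.conj(np.fft.fft(pattern))).real
print(int(np.argmax(corr)))  # → 1234
```

Because the correlation is cyclic, the unambiguous range is set by the full pattern length rather than the pulse-to-pulse spacing, which is what extends it to kilometer scales.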
Zhang, Chun; Fan, Kai; Ma, Xuefeng; Wei, Dongzhi
2012-01-01
Uricase has proven therapeutic value in treating hyperuricemia but sufficient reduction of its immunogenicity may be the largest obstacle to its chronic use. In this study, canine uricase was modified with 5 kDa mPEG-SPA and the impact of large aggregated uricases and cross-linked conjugates induced by difunctional PEG diol on immunogenicity was investigated. Recombinant canine uricase was first expressed and purified to homogeneity. Source 15Q anion-exchange chromatography was used to separate tetrameric and aggregated uricase prior to pegylation, while DEAE anion-exchange chromatography was used to remove Di-acid PEG (precursor of PEG diol) from unfractionated 5 kDa mPEG-propionic acid. Tetrameric and aggregated uricases were separately modified with the purified mPEG-SPA. In addition, tetrameric uricase was modified with unfractionated mPEG-SPA, resulting in three types of 5 kDa mPEG-SPA modified uricase. The conjugate size was evaluated by dynamic light scattering and transmission electron microscopy. The influence of differently PEGylated uricases on pharmacokinetics and immunogenicity was evaluated in vivo. The accelerated blood clearance (ABC) phenomenon previously identified for PEGylated liposomes occurred in rats injected with PEGylated uricase aggregates. Anti-PEG IgM antibodies, rather than neutralizing antibodies, were found to mediate the ABC. The size of conjugates is important for triggering such phenomena and we speculate that 40-60 nm is the lower size limit that can trigger ABC. Removal of the uricase aggregates and the PEG diol contaminant and modifying with small PEG reagents enabled ABC to be successfully avoided and sufficient reduction in the immunogenicity of 5 kDa mPEG-modified tetrameric canine uricase.
Scaling analyses of the spectral dimension in 3-dimensional causal dynamical triangulations
NASA Astrophysics Data System (ADS)
Cooperman, Joshua H.
2018-05-01
The spectral dimension measures the dimensionality of a space as witnessed by a diffusing random walker. Within the causal dynamical triangulations approach to the quantization of gravity (Ambjørn et al 2000 Phys. Rev. Lett. 85 347, 2001 Nucl. Phys. B 610 347, 1998 Nucl. Phys. B 536 407), the spectral dimension exhibits novel scale-dependent dynamics: reducing towards a value near 2 on sufficiently small scales, matching closely the topological dimension on intermediate scales, and decaying in the presence of positive curvature on sufficiently large scales (Ambjørn et al 2005 Phys. Rev. Lett. 95 171301, Ambjørn et al 2005 Phys. Rev. D 72 064014, Benedetti and Henson 2009 Phys. Rev. D 80 124036, Cooperman 2014 Phys. Rev. D 90 124053, Cooperman et al 2017 Class. Quantum Grav. 34 115008, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151, Kommu 2012 Class. Quantum Grav. 29 105003). I report the first comprehensive scaling analysis of the small-to-intermediate scale spectral dimension for the test case of the causal dynamical triangulations of 3-dimensional Einstein gravity. I find that the spectral dimension scales trivially with the diffusion constant. I find that the spectral dimension is completely finite in the infinite volume limit, and I argue that its maximal value is exactly consistent with the topological dimension of 3 in this limit. I find that the spectral dimension reduces further towards a value near 2 as this case’s bare coupling approaches its phase transition, and I present evidence against the conjecture that the bare coupling simply sets the overall scale of the quantum geometry (Ambjørn et al 2001 Phys. Rev. D 64 044011). On the basis of these findings, I advance a tentative physical explanation for the dynamical reduction of the spectral dimension observed within causal dynamical triangulations: branched polymeric quantum geometry on sufficiently small scales. 
My analyses should facilitate attempts to employ the spectral dimension as a physical observable with which to delineate renormalization group trajectories in the hope of taking a continuum limit of causal dynamical triangulations at a nontrivial ultraviolet fixed point (Ambjørn et al 2016 Phys. Rev. D 93 104032, 2014 Class. Quantum Grav. 31 165003, Cooperman 2016 Gen. Relativ. Gravit. 48 1, Cooperman 2016 arXiv:1604.01798, Coumbe and Jurkiewicz 2015 J. High Energy Phys. JHEP03(2015)151).
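The diffusion definition of the spectral dimension can be made concrete with a toy calculation. The following is a minimal sketch (my own illustration, not the paper's triangulation code): from the return probability P(σ) of a random walker, the spectral dimension is d_s = -2 d ln P / d ln σ; on the hypercubic lattices Z and Z², the return probability is known in closed form, and the estimate recovers the topological dimension.

```python
from math import comb, log

def return_prob_1d(n):
    # exact probability that a simple random walk on Z returns
    # to the origin at step 2n: C(2n, n) / 4^n
    return comb(2 * n, n) / 4 ** n

def return_prob_2d(n):
    # known identity: the 2n-step return probability on Z^2 equals
    # the square of the 1-d value
    return return_prob_1d(n) ** 2

def spectral_dimension(p, n1=200, n2=400):
    # P(sigma) ~ sigma^(-d_s / 2)  =>  d_s = -2 * dlnP / dln(sigma)
    return -2 * (log(p(n2)) - log(p(n1))) / (log(2 * n2) - log(2 * n1))

print(round(spectral_dimension(return_prob_1d), 2))  # close to 1
print(round(spectral_dimension(return_prob_2d), 2))  # close to 2
```

The finite-n estimates land within a fraction of a percent of the topological dimensions 1 and 2, illustrating how the quantity is extracted from diffusion data in practice.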
We conducted research comparing several methods currently in use for the bioassessment and monitoring of fish and benthic macroinvertebrate assemblages of large rivers. Fish data demonstrate that electrofishing 1000 m of shoreline is sufficient for bioassessments on boatable riv...
The measurement of energy exchange in man: an analysis.
Webb, P
1980-06-01
This report analyzes two kinds of studies of human energy balance: direct and indirect calorimetry for 24-hr periods, and complete measurements of food intake, waste, and tissue storage for 3 weeks and longer. Equations of energy balance are written to show that the daily quantity of metabolic energy, QM, is coupled with an unidentified quantity of unmeasured energy, QX, in order to make the equation balance. The equations challenge the assumed equivalence of direct and indirect calorimetry. The analysis takes the form of employing experimental data to calculate values for the arguable quantity, QX. Studies employing 24-hr direct calorimetry (202 complete days) show that when food intake nearly matches QM, values for QX are small and probably insignificant, but when there is a large food deficit, large positive values for QX appear. Calculations are also made from studies of nutrient balance during prolonged overeating and undereating, and in nearly all cases there were large negative values for QX. In 52 sets of data from studies lasting 3 weeks or longer, where all the terms in the balance equation except QX were either directly measured or could be readily estimated, the average value for QX amounts to 705 kcal/day, or 27% of QM. A discussion of the nature of QX considers error and the noninclusion of small quantities like the energy of combustible gases, which are not thought to be sufficient to explain QX. It might represent the cost of mobilizing stored fuel, or of storing excess fuel, or it might represent a change in internal energy other than fuel stores, but none of these is thought to be likely. Finally, it is emphasized that entropy exchange in man as an open thermodynamic system is not presently included in the equations of energy balance, and perhaps it must be, even though it is not directly measurable.
The significance of unmeasured energy is considered in light of the poor control of obesity, of the inability to predict weight change during prolonged diet restriction or intentional overeating, and of the energetics of tissue gain in growth and loss in cachexia. It is not even well established how much food man requires to maintain constant weight. New studies as they are undertaken should try to account completely for all the possible terms of energy exchange.
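The bookkeeping described above reduces to a one-line residual: QX is whatever is left after the measured terms are subtracted from intake. A minimal sketch with hypothetical numbers (all values in kcal/day; these are illustrative figures, not Webb's data):

```python
def unmeasured_energy(q_food, q_waste, q_storage, q_metabolic):
    """Residual QX that closes the daily balance:
    Q_food = Q_waste + Q_storage + Q_metabolic + Q_X.
    All arguments in kcal/day (hypothetical example values below)."""
    return q_food - q_waste - q_storage - q_metabolic

# hypothetical day in which intake nearly matches metabolism:
# the residual is small, consistent with the 24-hr calorimetry finding
print(unmeasured_energy(q_food=2600, q_waste=150, q_storage=0, q_metabolic=2430))  # 20
```

Large sustained deficits or surpluses make the residual large, which is the pattern the report attributes to QX.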
Responses of large mammals to climate change.
Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan
2014-01-01
Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and human occupation of the landscape are too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter the effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change.
Time-dependent fiber bundles with local load sharing. II. General Weibull fibers
NASA Astrophysics Data System (ADS)
Phoenix, S. Leigh; Newman, William I.
2009-12-01
Fiber bundle models (FBMs) are useful tools in understanding failure processes in a variety of material systems. While the fibers and load sharing assumptions are easily described, FBM analysis is typically difficult. Monte Carlo methods are also hampered by the severe computational demands of large bundle sizes, which overwhelm just as behavior relevant to real materials starts to emerge. For large size scales, interest continues in idealized FBMs that assume either equal load sharing (ELS) or local load sharing (LLS) among fibers, rules that reflect features of real load redistribution in elastic lattices. The present work focuses on a one-dimensional bundle of N fibers under LLS where life consumption in a fiber follows a power law in its load, with exponent ρ, and integrated over time. This life consumption function is further embodied in a functional form resulting in a Weibull distribution for lifetime under constant fiber stress and with Weibull exponent β. Thus the failure rate of a fiber depends on its past load history, except for β = 1. We develop asymptotic results validated by Monte Carlo simulation using a computational algorithm developed in our previous work [Phys. Rev. E 63, 021507 (2001)] that greatly increases the size, N, of treatable bundles (e.g., 10^6 fibers in 10^3 realizations). In particular, our algorithm is O(N ln N) in contrast with former algorithms, which were O(N^2), making this investigation possible. Regimes are found for (β, ρ) pairs that yield contrasting behavior for large N. For ρ > 1 and large N, brittle weakest volume behavior emerges in terms of characteristic elements (groupings of fibers) derived from critical cluster formation, and the lifetime eventually goes to zero as N → ∞, unlike ELS, which yields a finite limiting mean.
For 1/2 ≤ ρ ≤ 1, however, LLS has remarkably similar behavior to ELS (appearing to be virtually identical for ρ = 1) with an asymptotic Gaussian lifetime distribution and a finite limiting mean for large N. The coefficient of variation follows a power law in increasing N but, except for ρ = 1, the value of the negative exponent is clearly less than 1/2, unlike in ELS bundles where the exponent remains 1/2 for 1/2 < ρ ≤ 1. For sufficiently small values 0 < ρ ≪ 1, a transition occurs, depending on β, whereby LLS bundle lifetimes become dominated by a few long-lived fibers. Thus the bundle lifetime appears to approximately follow an extreme-value distribution for the longest lived of a parallel group of independent elements, which applies exactly to ρ = 0. The lower the value of β, the higher the transition value of ρ, below which such extreme-value behavior occurs. No evidence was found for limiting Gaussian behavior for ρ > 1 but with 0 < β(ρ+1) < 1, as might be conjectured from quasistatic bundle models where β(ρ+1) mimics the Weibull exponent for fiber strength.
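The ELS baseline against which LLS is compared can be illustrated directly for the memoryless case β = 1, where fiber failure is a Markov process and a bundle lifetime can be sampled stage by stage. A sketch under those assumptions (not the authors' O(N ln N) LLS algorithm): under ELS with unit initial load per fiber and ρ = 1, the mean lifetime tends to a finite limit of 1 as N grows.

```python
import random

def els_bundle_lifetime(n_fibers, rho, rng):
    """Kinetic Monte Carlo lifetime of an equal-load-sharing bundle in the
    memoryless case beta = 1: each surviving fiber fails at rate load**rho,
    with the fixed total load n_fibers shared equally among survivors."""
    t = 0.0
    for survivors in range(n_fibers, 0, -1):
        load = n_fibers / survivors           # ELS load per surviving fiber
        total_rate = survivors * load ** rho  # minimum of exponential clocks
        t += rng.expovariate(total_rate)      # time to the next fiber failure
    return t

rng = random.Random(42)
lifetimes = [els_bundle_lifetime(1000, rho=1.0, rng=rng) for _ in range(200)]
print(sum(lifetimes) / len(lifetimes))  # near the finite limiting mean of 1
```

For ρ = 1 each of the N failure stages contributes a mean waiting time of 1/N, so the mean bundle lifetime is exactly 1 for every N, matching the finite ELS limit noted above; LLS behavior for ρ > 1 is qualitatively different and requires the cluster-based analysis of the paper.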
Reducible or irreducible? Mathematical reasoning and the ontological method.
Fisher, William P
2010-01-01
Science is often described as nothing but the practice of measurement. This perspective follows from longstanding respect for the roles mathematics and quantification have played as media through which alternative hypotheses are evaluated and experience becomes better managed. Many figures in the history of science and psychology have contributed to what has been called the "quantitative imperative," the demand that fields of study employ number and mathematics even when they do not constitute the language in which investigators think together. But what makes an area of study scientific is, of course, not the mere use of number, but communities of investigators who share common mathematical languages for exchanging quantitative and qualitative value. Such languages require rigorous theoretical underpinning, a basis in data sufficient to the task, and instruments traceable to reference standard quantitative metrics. The values shared and exchanged by such communities typically involve the application of mathematical models that specify the sufficient and invariant relationships necessary for rigorous theorizing and instrument equating. The mathematical metaphysics of science are explored with the aim of connecting principles of quantitative measurement with the structures of sufficient reason.
The value of percutaneous cholangiography
Evison, Gordon; McNulty, Myles; Thomson, Colin
1973-01-01
Percutaneous cholangiograms performed on fifty patients in a district general hospital have been reviewed, and the advantages and limitations of the examination are described. The investigation is considered to have sufficient diagnostic value to warrant its inclusion in the diagnostic armamentarium of every general radiological department. PMID:4788917
The Value of Literacy. Working Paper.
ERIC Educational Resources Information Center
Bulkeley, Christy C.
Literacy is a commodity of measurable value to those who acquire it. This proposition is easy to accept if the many benefits of acquiring literacy are considered: better jobs, more productive use of leisure time, greater self-sufficiency, increased ability to help one's children with school work and hobbies. The Gannett Foundation first became…
7 CFR 764.356 - Appraisal and valuation requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... of livestock and records of livestock product sales sufficient to allow the Agency to value such... disaster, the value of such security shall be established as of the day before the disaster occurred. Effective Date Note: At 76 FR 75435, Dec. 2, 2011, § 764.356 was amended by adding paragraph (c), effective...
Towards a Logical Distinction Between Swarms and Aftershock Sequences
NASA Astrophysics Data System (ADS)
Gardine, M.; Burris, L.; McNutt, S.
2007-12-01
The distinction between swarms and aftershock sequences has, up to this point, been fairly arbitrary and non-uniform. A difference of 0.5 to 1 order of magnitude between the mainshock and largest aftershock has been a traditional choice, but there are many exceptions. Seismologists have generally assumed that the mainshock carries most of the energy, but this is only true if it is sufficiently large compared to the size and numbers of aftershocks. Here we present a systematic division based on the energy of the aftershock sequence compared to the energy of the largest event of the sequence. It is possible to calculate the amount of aftershock energy assumed to be in the sequence using the b-value of the frequency-magnitude relation with a fixed choice of magnitude separation (M-mainshock minus M-largest aftershock). Assuming that the energy of an aftershock sequence is less than the energy of the mainshock, the b-value at which the aftershock energy exceeds the mainshock energy determines the boundary between aftershock sequences and swarms. The amount of energy for various choices of b-value is also calculated using different values of magnitude separation. When the minimum b-value at which the sequence energy exceeds that of the largest event/mainshock is plotted against the magnitude separation, a linear trend emerges. Values plotting above this line represent swarms and values plotting below it represent aftershock sequences. This scheme has the advantage that it represents a physical quantity, energy, rather than only statistical features of earthquake distributions. As such it may be useful to help distinguish swarms from mainshock/aftershock sequences and to better determine the underlying causes of earthquake swarms.
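The energy criterion can be sketched numerically. The following is an illustrative implementation under stated assumptions, not the authors' exact calculation: event counts per magnitude bin follow a Gutenberg-Richter law N(m) ∝ 10^(-bm), normalized so the largest-aftershock bin holds one event, radiated energy scales as E(m) ∝ 10^(1.5m), and the catalog is complete down to a chosen minimum magnitude m_min.

```python
def sequence_energy_ratio(m_mainshock, m_largest_aftershock, b, m_min=0.0, dm=0.1):
    """Ratio of summed aftershock-sequence energy to mainshock energy,
    using Gutenberg-Richter counts 10**(b*(M_largest - m)) per bin and
    the energy scaling E ~ 10**(1.5*m).  Ratios above 1 indicate the
    sequence outweighs its largest event, i.e. swarm-like behavior."""
    e_main = 10 ** (1.5 * m_mainshock)
    e_seq = 0.0
    m = m_min
    while m <= m_largest_aftershock + 1e-9:
        count = 10 ** (b * (m_largest_aftershock - m))  # events in this bin
        e_seq += count * 10 ** (1.5 * m)
        m += dm
    return e_seq / e_main

# low b: energy concentrated in the largest events, mainshock dominates
print(sequence_energy_ratio(5.0, 4.0, b=0.7) < 1.0)
# high b: many small events carry the energy, sequence dominates (swarm-like)
print(sequence_energy_ratio(5.0, 4.0, b=2.0) > 1.0)
```

Scanning b for each magnitude separation until the ratio crosses 1 traces out the swarm/aftershock boundary described in the abstract.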
Minimal microwave anisotropy from perturbations induced at late times
NASA Technical Reports Server (NTRS)
Jaffe, Andrew H.; Stebbins, Albert; Frieman, Joshua A.
1994-01-01
Aside from primordial gravitational instability of the cosmological fluid, various mechanisms have been proposed to generate large-scale structure at relatively late times, including, e.g., 'late-time' cosmological phase transitions. In these scenarios, it is envisioned that the universe is nearly homogeneous at the time of last scattering and that perturbations grow rapidly sometime after the primordial plasma recombines. On this basis, it was suggested that large inhomogeneities could be generated while leaving relatively little imprint on the cosmic microwave background (MBR) anisotropy. In this paper, we calculate the minimal anisotropies possible in any 'late-time' scenario for structure formation, given the level of inhomogeneity observed at present. Since the growth of the inhomogeneity involves time-varying gravitational fields, these scenarios inevitably generate significant MBR anisotropy via the Sachs-Wolfe effect. Moreover, we show that the large-angle MBR anisotropy produced by the rapid post-recombination growth of inhomogeneity is generally greater than that produced by the same inhomogeneity growth via gravitational instability. In 'realistic' scenarios one can decrease the anisotropy compared to models with primordial adiabatic fluctuations, but only on very small angular scales. The value of any particular measure of the anisotropy can be made small in late-time models, but only by making the time-dependence of the gravitational field sufficiently 'pathological'.
Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions
NASA Astrophysics Data System (ADS)
Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard
2017-12-01
Real time monitoring of engineering structures in case of an emergency or disaster requires collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a probable rescue action. One of the more significant evaluation methods for large sets of data, either collected during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements which deviate from their values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. The sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out, together with verification against laboratory survey data, showed that the approach provides sufficient sensitivity for automatic real time analysis of large amounts of data obtained from different and various sensors (total stations, leveling, cameras, radar).
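The moving-average screen mentioned above can be sketched in a few lines. This is a generic illustration of the idea (the window, threshold, and data are hypothetical; the paper's formulae differ in detail): each new observation is compared against the trailing moving average, and large deviations are flagged as candidate outliers.

```python
def moving_average_outliers(series, window, threshold):
    """Flag indices whose value deviates from the trailing moving
    average of the previous `window` samples by more than `threshold`.
    A generic sketch of a moving-average outlier screen."""
    flagged = []
    for i in range(window, len(series)):
        avg = sum(series[i - window:i]) / window
        if abs(series[i] - avg) > threshold:
            flagged.append(i)
    return flagged

# hypothetical displacement readings (mm) with one anomalous jump at index 6
readings = [2.0, 2.1, 2.0, 1.9, 2.0, 2.1, 5.0, 2.0, 2.1, 2.0]
print(moving_average_outliers(readings, window=3, threshold=1.5))  # [6]
```

In a monitoring pipeline this check runs per sensor stream in real time; the threshold is tuned so that normal measurement noise stays below it.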
What Four Million Mappings Can Tell You about Two Hundred Ontologies
NASA Astrophysics Data System (ADS)
Ghazvinian, Amir; Noy, Natalya F.; Jonquet, Clement; Shah, Nigam; Musen, Mark A.
The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal—an open community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
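The core of lexical mapping is matching normalized concept names and synonyms across ontologies. A minimal sketch (my own illustration with hypothetical concept IDs, not BioPortal's actual pipeline, which handles synonyms, stemming, and scale far more carefully):

```python
def lexical_mappings(onto_a, onto_b):
    """Map concepts whose normalized preferred names or synonyms match.
    Each ontology is a dict: concept ID -> list of names/synonyms."""
    def norm(s):
        # case-fold and strip punctuation/whitespace before comparing
        return "".join(ch for ch in s.lower() if ch.isalnum())

    index = {}
    for concept, names in onto_b.items():
        for name in names:
            index.setdefault(norm(name), set()).add(concept)

    mappings = []
    for concept, names in onto_a.items():
        for name in names:
            for target in index.get(norm(name), ()):
                mappings.append((concept, target))
    return sorted(set(mappings))

# hypothetical mini-ontologies
a = {"A:1": ["Myocardial Infarction"], "A:2": ["Fever"]}
b = {"B:9": ["myocardial-infarction", "heart attack"], "B:3": ["Cough"]}
print(lexical_mappings(a, b))  # [('A:1', 'B:9')]
```

Run over hundreds of ontologies, the density of such mappings between ontology pairs is what supports the repository-level analyses described in the abstract.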
Scalar discrete nonlinear multipoint boundary value problems
NASA Astrophysics Data System (ADS)
Rodriguez, Jesus; Taylor, Padraic
2007-06-01
In this paper we provide sufficient conditions for the existence of solutions to scalar discrete nonlinear multipoint boundary value problems. By allowing more general boundary conditions and by imposing less restrictions on the nonlinearities, we obtain results that extend previous work in the area of discrete boundary value problems [Debra L. Etheridge, Jesus Rodriguez, Periodic solutions of nonlinear discrete-time systems, Appl. Anal. 62 (1996) 119-137; Debra L. Etheridge, Jesus Rodriguez, Scalar discrete nonlinear two-point boundary value problems, J. Difference Equ. Appl. 4 (1998) 127-144].
The CODATA 2017 values of h, e, k, and N A for the revision of the SI
NASA Astrophysics Data System (ADS)
Newell, D. B.; Cabiati, F.; Fischer, J.; Fujii, K.; Karshenboim, S. G.; Margolis, H. S.; de Mirandés, E.; Mohr, P. J.; Nez, F.; Pachucki, K.; Quinn, T. J.; Taylor, B. N.; Wang, M.; Wood, B. M.; Zhang, Z.
2018-04-01
Sufficient progress towards redefining the International System of Units (SI) in terms of exact values of fundamental constants has been achieved. Exact values of the Planck constant h, elementary charge e, Boltzmann constant k, and Avogadro constant N A from the CODATA 2017 Special Adjustment of the Fundamental Constants are presented here. These values are recommended to the 26th General Conference on Weights and Measures to form the foundation of the revised SI.
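The four exact values recommended for the revised SI, and the way other constants become exact by combination, can be stated directly:

```python
# Exact defining constants adopted for the revised SI (CODATA 2017 values)
h = 6.626_070_15e-34     # Planck constant, J s
e = 1.602_176_634e-19    # elementary charge, C
k = 1.380_649e-23        # Boltzmann constant, J / K
N_A = 6.022_140_76e23    # Avogadro constant, 1 / mol

# Constants defined as products of the above become exact too,
# e.g. the molar gas constant and the Faraday constant:
R = N_A * k              # J / (mol K)
F = N_A * e              # C / mol
print(round(R, 3))  # 8.314
print(round(F, 1))  # 96485.3
```

Fixing these four numbers exactly is what lets the kilogram, ampere, kelvin, and mole be realized from invariants of nature rather than artifacts.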
NASA Astrophysics Data System (ADS)
Beckwith, A. W.
2008-01-01
Sean Carroll's pre-inflation state of low temperature and low entropy provides a bridge between two models with different predictions. The Wheeler-de Witt equation provides thermal input into today's universe for graviton production. Also, brane world models by Sundrum allow low entropy conditions, as given by Carroll & Chen (2005). Moreover, this paper answers the question of how to go from a brane world model to the 10^32 Kelvin conditions stated by Weinberg in 1972 as necessary for the initiation of quantum gravity processes. This is a way of getting around the fact that the CMBR is cut off at a red shift of z = 1100. This paper discusses the difference in values of the upper bound of the cosmological constant between a large upper bound predicted for a temperature dependent vacuum energy by Park (2002), and the much lower bound predicted by Barvinsky (2006), with the difference in vacuum energy values contributing to relic graviton production. This paper claims that this large thermal influx, the high initial cosmological constant, and the large region of space for relic gravitons interacting with space-time up to the z = 1100 CMBR observational limit are interlinked processes delineated in the Lloyd (2002) analogy of the universe as a quantum computing system. Finally, the paper claims that linking a shrinking prior universe via a wormhole solution for a pseudo time dependent Wheeler-De Witt equation permits graviton generation as thermal input from the prior universe, transferred instantaneously to relic inflationary conditions today. The existence of a wormhole is presented as a necessary condition for relic gravitons. Proving the sufficiency of the existence of a wormhole for relic gravitons is a future project.
Global variation of the dust-to-gas ratio in evolving protoplanetary discs
NASA Astrophysics Data System (ADS)
Hughes, Anna L. H.; Armitage, Philip J.
2012-06-01
Recent theories suggest planetesimal formation via streaming and/or gravitational instabilities may be triggered by localized enhancements in the dust-to-gas ratio, and one hypothesis is that sufficient enhancements may be produced in the pile-up of small solid particles inspiralling under aerodynamic drag from the large mass reservoir in the outer disc. Studies of particle pile-up in static gas discs have provided partial support for this hypothesis. Here, we study the radial and temporal evolution of the dust-to-gas ratio in turbulent discs that evolve under the action of viscosity and photoevaporation. We find that particle pile-ups do not generically occur within evolving discs, particularly if the introduction of large grains is restricted to the inner, dense regions of a disc. Instead, radial drift results in depletion of solids from the outer disc, while the inner disc maintains a dust-to-gas ratio that is within a factor of ˜2 of the initial value. We attribute this result to the short time-scales for turbulent diffusion and radial advection (with the mean gas flow) in the inner disc. We show that the qualitative evolution of the dust-to-gas ratio depends only weakly upon the parameters of the disc model (the disc mass, size, viscosity and value of the Schmidt number), and discuss the implications for planetesimal formation via collective instabilities. Our results suggest that in discs where there is a significant level of midplane turbulence and accretion, planetesimal formation would need to be possible in the absence of large-scale enhancements. Instead, trapping and concentration of particles within local turbulent structures may be required as a first stage of planetesimal formation.
Highly Viscous States Affect the Browning of Atmospheric Organic Particulate Matter.
Liu, Pengfei; Li, Yong Jie; Wang, Yan; Bateman, Adam P; Zhang, Yue; Gong, Zhaoheng; Bertram, Allan K; Martin, Scot T
2018-02-28
Initially transparent organic particulate matter (PM) can become shades of light-absorbing brown via atmospheric particle-phase chemical reactions. The production of nitrogen-containing compounds is one important pathway for browning. Semisolid or solid physical states of organic PM might, however, have sufficiently slow diffusion of reactant molecules to inhibit browning reactions. Herein, organic PM of secondary organic material (SOM) derived from toluene, a common SOM precursor in anthropogenically affected environments, was exposed to ammonia at different values of relative humidity (RH). The production of light-absorbing organonitrogen imines from ammonia exposure, detected by mass spectrometry and ultraviolet-visible spectrophotometry, was kinetically inhibited for RH < 20% for exposure times of 6 min to 24 h. By comparison, from 20% to 60% RH organonitrogen production took place, implying ammonia uptake and reaction. Correspondingly, the absorption index k across 280 to 320 nm increased from 0.012 to 0.02, indicative of PM browning. The k value across 380 to 420 nm increased from 0.001 to 0.004. The observed RH-dependent behavior of ammonia uptake and browning was well captured by a model that considered the diffusivities of both the large organic molecules that made up the PM and the small reactant molecules taken up from the gas phase into the PM. Within the model, large-molecule diffusivity was calculated based on observed SOM viscosity and evaporation. Small-molecule diffusivity was represented by the water diffusivity measured by a quartz-crystal microbalance. The model showed that the browning reaction rates at RH < 60% could be controlled by the low diffusivity of the large organic molecules from the interior region of the particle to the reactive surface region. The results of this study have implications for accurate modeling of atmospheric brown carbon production and associated influences on energy balance.
Can multi-subpopulation reference sets improve the genomic predictive ability for pigs?
Fangmann, A; Bergfelder-Drüing, S; Tholen, E; Simianer, H; Erbe, M
2015-12-01
In most countries and for most livestock species, genomic evaluations are obtained from within-breed analyses. To achieve reliable breeding values, however, a sufficient reference sample size is essential. To increase this size, the use of multibreed reference populations for small populations is considered a suitable option in other species. Over decades, the separate breeding work of different pig breeding organizations in Germany has led to stratified subpopulations in the breed German Large White. Due to this fact and the limited number of Large White animals available in each organization, there was a pressing need for ascertaining if multi-subpopulation genomic prediction is superior compared with within-subpopulation prediction in pigs. Direct genomic breeding values were estimated with genomic BLUP for the trait "number of piglets born alive" using genotype data (Illumina Porcine 60K SNP BeadChip) from 2,053 German Large White animals from five different commercial pig breeding companies. To assess the prediction accuracy of within- and multi-subpopulation reference sets, a random 5-fold cross-validation with 20 replications was performed. The five subpopulations considered were only slightly differentiated from each other. However, the prediction accuracy of the multi-subpopulations approach was not better than that of the within-subpopulation evaluation, for which the predictive ability was already high. Reference sets composed of closely related multi-subpopulation sets performed better than sets of distantly related subpopulations but not better than the within-subpopulation approach. Despite the low differentiation of the five subpopulations, the genetic connectedness between these different subpopulations seems to be too small to improve the prediction accuracy by applying multi-subpopulation reference sets. Consequently, resources should be used for enlarging the reference population within subpopulation, for example, by adding genotyped females.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
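The trade-off between a large adaptive gain and oscillation damping can be illustrated with a minimal scalar model-reference adaptive controller. This is a hedged sketch, not the report's optimal control modification law: the plant, the reference model, the gain values, and the nu-weighted damping terms are assumptions chosen only to show the role of a modification term under fast adaptation.

```python
# Scalar MRAC sketch: plant xdot = a*x + u with unknown (stable) a,
# reference model xmdot = -am*xm + am*r, control u = kx*x + kr*r.
# gamma is the (large) adaptive gain; the nu-terms are a damping-type
# modification standing in for the paper's optimal control modification.
a, am = -0.5, 2.0
gamma, nu = 100.0, 0.1
dt, steps = 1e-4, 100_000        # 10 s of simulated time, forward Euler
x, xm = 0.0, 0.0
kx, kr = 0.0, 0.0                # adaptive gains, start with no knowledge
r = 1.0                          # step reference command
for _ in range(steps):
    e = x - xm                   # tracking error
    u = kx * x + kr * r
    x += dt * (a * x + u)
    xm += dt * (-am * xm + am * r)
    # Adaptive laws; the nu-terms damp high-frequency gain oscillation.
    kx += dt * (-gamma * (e * x + nu * kx))
    kr += dt * (-gamma * (e * r + nu * kr))
final_error = abs(x - xm)
```

Setting nu = 0 recovers standard MRAC, where the same large gamma produces lightly damped oscillation in the gains; the damping term buys robustness at the cost of a small residual tracking error.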
Mechanism of explosive eruptions of Kilauea Volcano, Hawaii
Dvorak, J.J.
1992-01-01
A small explosive eruption of Kilauea Volcano, Hawaii, occurred in May 1924. The eruption was preceded by rapid draining of a lava lake and transfer of a large volume of magma from the summit reservoir to the east rift zone. This lowered the magma column, which reduced hydrostatic pressure beneath Halemaumau and allowed groundwater to flow rapidly into areas of hot rock, producing a phreatic eruption. A comparison with other events at Kilauea shows that the transfer of a large volume of magma out of the summit reservoir is not sufficient to produce a phreatic eruption. For example, the volume transferred at the beginning of explosive activity in May 1924 was less than the volumes transferred in March 1955 and January-February 1960, when no explosive activity occurred. Likewise, draining of a lava lake and deepening of the floor of Halemaumau, which occurred in May 1922 and August 1923, were not sufficient to produce explosive activity. A phreatic eruption of Kilauea requires both the transfer of a large volume of magma from the summit reservoir and the rapid removal of magma from near the surface, where the surrounding rocks have been heated to a sufficient temperature to produce steam explosions when suddenly contacted by groundwater. ?? 1992 Springer-Verlag.
Metabolic rates of giant pandas inform conservation strategies.
Fei, Yuxiang; Hou, Rong; Spotila, James R; Paladino, Frank V; Qi, Dunwu; Zhang, Zhihe
2016-06-06
The giant panda is an icon of conservation and survived a large-scale bamboo die-off in the 1980s in China. Captive breeding programs have produced a large population in zoos, and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda-sized mammal, and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction.
Latino, Diogo A R S; Wicker, Jörg; Gütlein, Martin; Schmid, Emanuel; Kramer, Stefan; Fenner, Kathrin
2017-03-22
Developing models for the prediction of microbial biotransformation pathways and half-lives of trace organic contaminants in different environments requires, as training data, easily accessible and sufficiently large collections of biotransformation data annotated with metadata on study conditions. Here, we present the Eawag-Soil package, a public database that has been developed to contain all freely accessible regulatory data on pesticide degradation in laboratory soil simulation studies for pesticides registered in the EU (282 degradation pathways, 1535 reactions, 1619 compounds and 4716 biotransformation half-life values with corresponding metadata on study conditions). We provide a thorough description of this novel data resource and discuss important features of the pesticide soil degradation data that are relevant for model development. Most notably, the variability of half-life values for individual compounds is large and only about one order of magnitude lower than the entire range of median half-life values spanned by all compounds, demonstrating the need to consider study conditions in the development of more accurate models for biotransformation prediction. We further show how the data can be used to find missing rules relevant for predicting soil biotransformation pathways. From this analysis, eight examples of reaction types are presented that should trigger the formulation of new biotransformation rules, e.g., Ar-OH methylation, or the extension of existing rules, e.g., hydroxylation in aliphatic rings. The data were also used to exemplarily explore the dependence of half-lives of different amide pesticides on chemical class and experimental parameters.
This analysis highlighted the value of considering initial transformation reactions for the development of meaningful quantitative-structure biotransformation relationships (QSBR), which is a novel opportunity offered by the simultaneous encoding of transformation reactions and corresponding half-lives in Eawag-Soil. Overall, Eawag-Soil provides an unprecedentedly rich collection of manually extracted and curated biotransformation data, which should be useful in a great variety of applications.
Optimal four-impulse rendezvous between coplanar elliptical orbits
NASA Astrophysics Data System (ADS)
Wang, JianXia; Baoyin, HeXi; Li, JunFeng; Sun, FuChun
2011-04-01
Rendezvous in circular or near-circular orbits has been investigated in great detail, while rendezvous in elliptical orbits of arbitrary eccentricity is not sufficiently explored. Among the various optimization methods proposed for fuel-optimal orbital rendezvous, Lawden's primer vector theory is favored by many researchers for its clear physical concept and simplicity of solution. Prussing applied primer vector optimization theory to minimum-fuel, multiple-impulse, time-fixed orbital rendezvous in a near-circular orbit with great success. Extending Prussing's work, this paper employs primer vector theory to study trajectory optimization problems of elliptical-orbit rendezvous with arbitrary eccentricity. Based on the linearized equations of relative motion on an elliptical reference orbit (referred to as the T-H equations), primer vector theory is used to deal with time-fixed multiple-impulse optimal rendezvous between two coplanar, coaxial elliptical orbits with arbitrarily large eccentricity. A parameter adjustment method is developed for the primer vector to satisfy Lawden's necessary condition for the optimal solution. Finally, the optimal multiple-impulse rendezvous solution, including the times, directions and magnitudes of the impulses, is obtained by solving the two-point boundary value problem. The rendezvous error of the linearized equations is also analyzed. The simulation results confirm the analysis: the rendezvous error is small for small eccentricities and large for higher eccentricities. For better rendezvous accuracy in high-eccentricity orbits, a method combining a multiplier penalty function with the simplex search method is used for local optimization. The simplex search method is sensitive to the initial values of the optimization variables, but the simulation results show that, with initial values supplied by primer vector theory, the local optimization algorithm improves the rendezvous accuracy effectively with fast convergence, because the optimal results obtained by primer vector theory are already very close to the actual optimal solution. If the initial values are taken randomly, it is difficult to converge to the optimal solution.
Scalar excursions in large-eddy simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matheou, Georgios; Dimotakis, Paul E.
Here, the range of values of scalar fields in turbulent flows is bounded by their boundary values, for passive scalars, and by a combination of boundary values, reaction rates, phase changes, etc., for active scalars. The current investigation focuses on the local conservation of passive scalar concentration fields and the ability of the large-eddy simulation (LES) method to observe the boundedness of passive scalar concentrations. In practice, as a result of numerical artifacts, this fundamental constraint is often violated, with scalars exhibiting unphysical excursions. The present study characterizes passive-scalar excursions in LES of a shear flow and examines methods for diagnosis and assessment of the problem. The analysis of scalar-excursion statistics supports the main hypothesis of the current study: unphysical scalar excursions in LES result from dispersive errors of the convection-term discretization when the subgrid-scale (SGS) model provides insufficient dissipation to produce a sufficiently smooth scalar field. In the LES runs three parameters are varied: the discretization of the convection terms, the SGS model, and grid resolution. Unphysical scalar excursions decrease as the order of accuracy of non-dissipative schemes is increased, but the improvement rate decreases with increasing order of accuracy. Two SGS models are examined, the stretched-vortex and a constant-coefficient Smagorinsky. Scalar excursions strongly depend on the SGS model. The excursions are significantly reduced when the characteristic SGS scale is set to double the grid spacing in runs with the stretched-vortex model. The maximum excursion and the volume fraction of excursions outside boundary values show opposite trends with respect to resolution: the maximum unphysical excursion increases as resolution increases, whereas the volume fraction decreases. The reason for the increase in the maximum excursion is statistical and traceable to the number of grid points (sample size), which increases with resolution. In contrast, the volume fraction of unphysical excursions decreases with resolution because the SGS models explored perform better at higher grid resolution.
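The two excursion diagnostics discussed above (maximum excursion beyond the boundary values and the volume fraction of out-of-bounds cells) are straightforward to compute on any scalar field. The field below is a synthetic stand-in, not LES output; the bounds, grid size, and perturbation amplitude are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "scalar concentration" field that should lie in [0, 1];
# a few cells are perturbed to mimic dispersive over/undershoots.
scalar = rng.uniform(0.0, 1.0, size=(64, 64, 64))
idx = rng.choice(scalar.size, 500, replace=False)
scalar.flat[idx] += rng.normal(0.0, 0.05, size=500)

lo, hi = 0.0, 1.0
excess = np.maximum(scalar - hi, 0.0) + np.maximum(lo - scalar, 0.0)
max_excursion = float(excess.max())           # worst unphysical over/undershoot
volume_fraction = float((excess > 0).mean())  # fraction of cells out of bounds
```

Tracking these two statistics across runs is what reveals their opposite trends with resolution: the maximum follows sample size, while the volume fraction follows SGS-model performance.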
Enthalpies of Formation of Hydrazine and Its Derivatives.
Dorofeeva, Olga V; Ryzhova, Oxana N; Suchkova, Taisiya A
2017-07-20
Enthalpies of formation, ΔfH°298, in both the gas and condensed phase, and enthalpies of sublimation or vaporization have been estimated for hydrazine, NH2NH2, and 36 of its derivatives using quantum chemical calculations. The composite G4 method has been used along with isodesmic reaction schemes to derive a set of self-consistent, high-accuracy gas-phase enthalpies of formation. To estimate the enthalpies of sublimation and vaporization with reasonable accuracy (5-20 kJ/mol), the method of molecular electrostatic potential (MEP) has been used. The value of ΔfH°298(NH2NH2, g) = 97.0 ± 3.0 kJ/mol was determined from 75 isogyric reactions involving about 50 reference species; for most of these species, accurate ΔfH°298(g) values are available in the Active Thermochemical Tables (ATcT). The calculated value is in excellent agreement with the reported results of the most accurate models based on coupled cluster theory (97.3 kJ/mol, the average of six calculations). Thus, the difference between the values predicted by high-level theoretical calculations and the experimental value of ΔfH°298(NH2NH2, g) = 95.55 ± 0.19 kJ/mol recommended in the ATcT and other comprehensive reference sources is sufficiently large and requires further investigation. Different hydrazine derivatives have also been considered in this work. For some of them, both the enthalpy of formation in the condensed phase and the enthalpy of sublimation or vaporization are available; for other compounds, experimental data exist for only one of these properties. Evidence for the accuracy of the experimental data for the first group of compounds was provided by agreement with the theoretical ΔfH°298(g) value. The unknown property for the second group of compounds was predicted using the MEP model. This paper presents a systematic comparison of experimentally determined enthalpies of formation and enthalpies of sublimation or vaporization with the results of calculations. Because of the relatively large uncertainty in the estimated enthalpies of sublimation, it was not always possible to evaluate the accuracy of the experimental values; however, the model allowed us to detect large errors in the experimental data, as in the case of 5,5'-hydrazinebistetrazole. The enthalpies of formation and enthalpies of sublimation or vaporization have been predicted for the first time for ten hydrazine derivatives with no experimental data. A recommended set of self-consistent experimental and calculated gas-phase enthalpies of formation of hydrazine derivatives can be used as reference ΔfH°298(g) values to predict the enthalpies of formation of various hydrazines by means of isodesmic reactions.
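The isodesmic-reaction step reduces to Hess's-law bookkeeping: the quantum-chemically computed reaction enthalpy plus known reference enthalpies of formation yield the target value. The reaction and all numbers below are invented for the sketch, not values from the paper.

```python
# Hypothetical isodesmic reaction: target + ref1 -> ref2 + ref3
# delta_r_H would come from a composite method such as G4; the reference
# formation enthalpies play the role of ATcT values (all kJ/mol, gas phase).
delta_r_H = -12.5
dfH = {"ref1": 50.0, "ref2": 80.0, "ref3": 30.0}

# Hess's law: delta_r_H = dfH(ref2) + dfH(ref3) - dfH(target) - dfH(ref1)
# => solve for the target's enthalpy of formation:
dfH_target = dfH["ref2"] + dfH["ref3"] - dfH["ref1"] - delta_r_H  # 72.5 kJ/mol
```

Averaging this estimate over many such reactions (75 isogyric schemes in the paper) damps the error contributed by any single reference value.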
Topographic Enhancement of Vertical Mixing in the Southern Ocean
NASA Astrophysics Data System (ADS)
Mashayek, A.; Ferrari, R. M.; Merrifield, S.; St Laurent, L.
2016-02-01
Diapycnal turbulent mixing in the Southern Ocean is believed to play a role in setting the rate of the ocean Meridional Overturning Circulation (MOC), an important element of the global climate system. Whether this role is important, however, depends on the strength of the mixing, which remains poorly quantified on the global scale. To address this question, a passive tracer was released upstream of the Drake Passage in 2009 as a part of the Diapycnal and Isopycnal Mixing Experiment in the Southern Ocean (DIMES). The mixing was then inferred from the vertical/diapycnal spreading of the tracer. The mixing was also calculated from microstructure measurements of shear and stratification. The diapycnal turbulent mixing inferred from the tracer was found to be an order of magnitude larger than that estimated with the microstructure probes at various locations along the path of the tracer. While the values inferred from the tracer imply a key role played by mixing in setting the MOC, those based on localized measurements suggest otherwise. In this work we use a high-resolution numerical ocean model of the Drake Passage region sampled in the DIMES experiment to show that the difference between the two estimates arises from the large values of mixing encountered by the tracer when it flows close to the bottom topography. We conclude that the large mixing close to the ocean bottom topography is sufficiently strong to play an important role in setting the Southern Ocean branch of the MOC below 2 km.
OpenStudio: A Platform for Ex Ante Incentive Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, Amir; Brackney, Larry; Parker, Andrew
Many utilities operate programs that provide ex ante (up front) incentives for building energy conservation measures (ECMs). A typical incentive program covers two kinds of ECMs. ECMs that deliver similar savings in different contexts are associated with pre-calculated 'deemed' savings values. ECMs that deliver different savings in different contexts are evaluated on a 'custom' per-project basis. Incentive programs often operate at less than peak efficiency because both deemed ECMs and custom projects have lengthy and effort-intensive review processes: deemed ECMs to gain confidence that they are sufficiently context insensitive, custom projects to ensure that savings are claimed appropriately. DOE's OpenStudio platform can be used to automate ex ante processes and help utilities operate programs more efficiently, consistently, and transparently, resulting in greater project throughput and energy savings. A key concept of the platform is the OpenStudio Measure, a script that queries and transforms building energy models. Measures can be simple or surgical, e.g., applying different transformations based on space type, orientation, etc. Measures represent ECMs explicitly and are easier to review than ECMs that are represented implicitly as the difference between with-ECM and without-ECM models. Measures can be automatically applied to large numbers of prototype models, and instantiated from uncertainty distributions, facilitating the large-scale analysis required to develop deemed savings values. For custom projects, Measures can also be used to calibrate existing building models, to automatically create code baseline models, and to perform quality assurance screening.
NASA Astrophysics Data System (ADS)
Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.
2010-08-01
In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, which accelerated in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
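Trend-with-uncertainty numbers of the form quoted above (e.g., 0.51 ± 0.29 K per decade) are typically the least-squares slope and a two-standard-error half-width. A self-contained sketch on synthetic data, with an assumed true trend of 0.03 K/year; nothing below reproduces the observed Chinese SAT series.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic annual-mean SAT anomaly series (K); not the observed data.
years = np.arange(1960, 2015)
anomaly = 0.03 * (years - years[0]) + rng.normal(0.0, 0.2, size=years.size)

t = years - years.mean()                 # centered time axis
slope = (t @ anomaly) / (t @ t)          # OLS trend, K per year
resid = anomaly - anomaly.mean() - slope * t
se = np.sqrt((resid @ resid) / (t.size - 2) / (t @ t))  # slope standard error

trend_per_decade = 10.0 * slope          # K (10 year)^-1
uncert_per_decade = 10.0 * 2.0 * se      # ~95% half-width, K (10 year)^-1
```

Note that this captures only regression uncertainty; the paper's point is that spatial sampling error adds a further, separately estimated, term.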
Teaching Students about Plagiarism: An Internet Solution to an Internet Problem
ERIC Educational Resources Information Center
Snow, Eleanour
2006-01-01
The Internet has changed the ways that students think, learn, and write. Students have large amounts of information, largely anonymous and without clear copyright information, literally at their fingertips. Without sufficient guidance, the inappropriate use of this information seems inevitable. Plagiarism among college students is rising, due to…
A COMPARISON OF SIX BENTHIC MACROINVERTEBRATE SAMPLING METHODS IN FOUR LARGE RIVERS
In 1999, a study was conducted to compare six macroinvertebrate sampling methods in four large (boatable) rivers that drain into the Ohio River. Two methods each were adapted from existing methods used by the USEPA, USGS and Ohio EPA. Drift nets were unable to collect a suffici...
Sufficient Forecasting Using Factor Models
Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei
2017-01-01
We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality was first reduced via a high-dimensional (approximate) factor model implemented by the principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis will be employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends the sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions as well as the estimates of the sufficient predictive indices. We further show that the natural method of running multiple regression of target on estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon the linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
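The factor-extraction step can be sketched with principal components on a synthetic predictor panel; forecasting then regresses the target on the estimated factors. This keeps only the linear special case; the paper's sufficient forecasting additionally estimates sufficient predictive indices that survive a nonlinear link. Dimensions, loadings, and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic approximate factor model: X = F L' + noise, y driven by F.
T, p, K = 200, 100, 3
F = rng.normal(size=(T, K))                     # latent factors
L = rng.normal(size=(p, K))                     # factor loadings
X = F @ L.T + 0.5 * rng.normal(size=(T, p))     # high-dimensional predictors
y = F @ np.array([1.0, -0.5, 0.25]) + 0.1 * rng.normal(size=T)

# Principal component analysis via SVD of the centered panel:
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :K] * s[:K]                        # estimated factors (up to rotation)

# Regress the target on the extracted factors (the linear forecast).
Z = np.column_stack([np.ones(T), F_hat])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
r2 = 1.0 - np.sum((y - Z @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the factors are identified only up to rotation, the regression (or, in the paper, the estimated central subspace) is what carries the predictive content, not the individual factor series.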
Cooling system for continuous metal casting machines
Draper, Robert; Sumpman, Wayne C.; Baker, Robert J.; Williams, Robert S.
1988-01-01
A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles 19 against the inner surface of rim 13 at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers 30 through return pipes 25 distributed interstitially among the nozzles.
Cooling system for continuous metal casting machines
Draper, R.; Sumpman, W.C.; Baker, R.J.; Williams, R.S.
1988-06-07
A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles against the inner surface of rim at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers through return pipes distributed interstitially among the nozzles. 9 figs.
Communication Patterns of Individualistic and Collective Cultures: A Value Based Comparison.
ERIC Educational Resources Information Center
Yang, Hwei-Jen
For Asian Americans, learning only the skills of verbal communication is not sufficient--they need to develop a sense of appreciation for eloquence, to understand the urgency of freedom of expression in a democratic society, and to internalize the value of speech as an instrument for self-enhancement. The remarkable differences between the East…
The Threshold of Toxicological Concern for prenatal developmental toxicity in rats and rabbits
van Ravenzwaay, B.; Jiang, X.; Luechtefeld, T.; Hartung, T.
2018-01-01
The Threshold of Toxicological Concern (TTC) is based on the concept that, in the absence of experimental data, reasonable assurance of safety can be given if exposure is sufficiently low. Using the REACH database, the low 5th percentile of the NO(A)EL distribution for prenatal developmental toxicity (OECD guideline 414) was determined. For rats (434 NO(A)EL values), this value was 10 mg/kg-bw/day for maternal toxicity; for developmental toxicity (469 NO(A)ELs), 13 mg/kg-bw/day. For rabbits (100 NO(A)ELs), the value for maternal toxicity was 4 mg/kg-bw/day; for developmental toxicity (112 NO(A)EL values), 10 mg/kg-bw/day. The maternal organism may thus be slightly more sensitive than the fetus. Combining REACH data (industrial chemicals) and published BASF data (mostly agrochemicals), 537 unique compounds with NO(A)EL values for developmental toxicity in rats and 150 in rabbits were evaluated. The low 5th percentile NO(A)EL for developmental toxicity was 10 mg/kg-bw/day in rats and 9.5 mg/kg-bw/day in rabbits. Using an assessment factor of 100, a TTC value for developmental toxicity of 100 µg/kg-bw/day for rats and 95 µg/kg-bw/day for rabbits is calculated. These values could serve as guidance on whether or not to perform an animal experiment if exposure is sufficiently low. In emergency situations this value may be useful for a first-tier risk assessment. PMID:28645885
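The TTC derivation is a two-step calculation: take the low 5th percentile of the NO(A)EL distribution, then divide by the 100-fold assessment factor. The lognormal sample below is synthetic, not the REACH data; the distribution shape is an assumption made only to have something to take a percentile of.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic NO(A)EL distribution (mg/kg-bw/day), lognormal by assumption.
noael = rng.lognormal(mean=np.log(100.0), sigma=1.5, size=500)

p5 = float(np.percentile(noael, 5))        # low 5th percentile, mg/kg-bw/day
assessment_factor = 100.0
ttc_ug = p5 / assessment_factor * 1000.0   # TTC in ug/kg-bw/day
```

With the paper's rat value of 10 mg/kg-bw/day, the same arithmetic gives 10 / 100 × 1000 = 100 µg/kg-bw/day.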
40 CFR 704.11 - Recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of this part must retain the following records for 3 years following the creation or compilation of... part. (b) Materials and documentation sufficient to verify or reconstruct the values submitted in the...
Mathematical aspects of assessing extreme events for the safety of nuclear plants
NASA Astrophysics Data System (ADS)
Potempski, Slawomir; Borysiewicz, Mieczyslaw
2015-04-01
In this paper, a review is given of the mathematical methodologies applied for assessing the low frequencies of rare natural events, such as earthquakes, tsunamis, hurricanes or tornadoes, floods (in particular flash floods and storm surges), lightning, and solar flares, in the perspective of the safety assessment of nuclear plants. The statistical methods are usually based on extreme value theory, which deals with the analysis of extreme deviations from the median (or the mean). In this respect the application of various mathematical tools can be useful, such as: the extreme value theorem of Fisher-Tippett-Gnedenko, leading to possible choices of generalized extreme value distributions; the Pickands-Balkema-de Haan theorem for tail fitting; or methods related to large deviation theory. In the paper the most important stochastic distributions relevant for performing rare-event statistical analysis are presented. This concerns, for example, the analysis of data with annual extreme values (maxima, "Annual Maxima Series", or minima), or peak values exceeding given thresholds during periods of interest ("Peak Over Threshold"), or the estimation of the size of exceedance. Although there is a lack of statistical data directly containing rare events, in some cases it is still possible to extract useful information from existing larger data sets. As an example, one can consider data sets available from web sites for floods, earthquakes or natural hazards in general. Some aspects of such data sets are also presented, taking into account their usefulness for the practical assessment of risk for nuclear power plants from extreme weather conditions.
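A minimal "Annual Maxima Series" analysis can be sketched with a Gumbel fit by the method of moments. Fixing the GEV shape parameter to zero (the Gumbel case) is a simplifying assumption, not the general Fisher-Tippett-Gnedenko result, and the daily hazard series below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# 50 synthetic "years" of a daily hazard variable; reduce to annual maxima.
daily = rng.gumbel(loc=10.0, scale=2.0, size=(50, 365))
annual_max = daily.max(axis=1)

# Method-of-moments Gumbel fit: scale from the std, location from the mean.
euler_gamma = 0.5772156649
scale = np.sqrt(6.0) * annual_max.std(ddof=1) / np.pi
loc = annual_max.mean() - euler_gamma * scale

# T-year return level: the value exceeded with probability 1/T in a year.
T_return = 100.0
return_level = loc - scale * np.log(-np.log(1.0 - 1.0 / T_return))
```

A "Peak Over Threshold" treatment of the same series would instead fit a generalized Pareto distribution to exceedances, per the Pickands-Balkema-de Haan theorem.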
Pira, E; Piolatto, P G
2012-01-01
The building industry entails exposure to respirable crystalline silica (RCS), though there is large variability among its sectors. The environmental values reported for current conditions appear relatively low; for example, the mean exposure estimated by the IOM across all industrial sectors in the EU is 0.07 mg/m3, and the few studies in the building sector show similar values. This is obviously not representative of past exposure. Moreover, problems with sampling and analysis techniques are still at issue. The well-known effect of RCS exposure is silicosis. The carcinogenicity of RCS is still under debate, especially regarding whether RCS is carcinogenic per se or whether the risk of developing lung cancer is mediated by silicosis. Although IARC includes RCS in Group 1 (human carcinogen), the reference should be the CLP regulation, whose criteria for defining carcinogens support the position that there are currently insufficient data to classify RCS as a carcinogen and that it seems more appropriate to place RCS in the various STOT-RE categories. This holds for the building industry as well as for other industrial sectors. In Italy the recommended exposure limit is the ACGIH value of 0.025 mg/m3. At the EU level, the best choice among the candidate limit values of 0.2, 0.1 and 0.05 mg/m3, based on cost/benefit evaluation, is still debated. The authors believe that the most protective value should be adopted.
Simulating maize yield and biomass with spatial variability of soil field capacity
Ma, Liwang; Ahuja, Lajpat; Trout, Thomas; Nolan, Bernard T.; Malone, Robert W.
2015-01-01
Spatial variability in field soil properties is a challenge for system modelers who use single representative values, such as means, for model inputs rather than their distributions. In this study, the Root Zone Water Quality Model (RZWQM2) was first calibrated against 4 yr of maize (Zea mays L.) data at six irrigation levels in northern Colorado and then used to study the effect on maize yield and biomass of the spatial variability in soil field capacity (FC) estimated in 96 plots. The best results were obtained when the crop parameters were fitted along with the FCs, with a root mean squared error (RMSE) of 354 kg ha-1 for yield and 1202 kg ha-1 for biomass. When the model was run with each of the 96 sets of field-estimated FC values instead of calibrated FCs, the average simulated yield and biomass from the 96 runs were close to measured values, with an RMSE of 376 kg ha-1 for yield and 1504 kg ha-1 for biomass. When an average of the 96 FC values for each soil layer was used, simulated yield and biomass were also acceptable, with an RMSE of 438 kg ha-1 for yield and 1627 kg ha-1 for biomass. Therefore, when large numbers of FC measurements are available, an average value might be sufficient for model inputs. However, when only the ranges of FC measurements are known for each soil layer, a distribution of FCs sampled by Latin hypercube sampling (LHS) might be used for model inputs.
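The LHS idea can be sketched with SciPy's quasi-Monte Carlo module; the four soil layers and their FC ranges below are hypothetical placeholders, not values from the study:

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical field-capacity (FC) ranges per soil layer, m3/m3
fc_ranges = {
    "layer1": (0.20, 0.34),
    "layer2": (0.22, 0.36),
    "layer3": (0.24, 0.38),
    "layer4": (0.26, 0.40),
}
lows = [lo for lo, hi in fc_ranges.values()]
highs = [hi for lo, hi in fc_ranges.values()]

# Latin hypercube sample: 96 FC vectors, one per simulated plot
sampler = qmc.LatinHypercube(d=len(fc_ranges), seed=42)
unit_sample = sampler.random(n=96)               # points in [0, 1)^4
fc_sample = qmc.scale(unit_sample, lows, highs)  # rescale to the FC ranges

print(fc_sample.shape)
```

Each row could then drive one model run, mimicking the 96-plot ensemble with only the per-layer ranges as input.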
Skewness and kurtosis analysis for non-Gaussian distributions
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2018-06-01
In this paper we address a number of pitfalls regarding the use of kurtosis as a measure of deviations from the Gaussian. We treat kurtosis in both its standard definition and that which arises in q-statistics, namely q-kurtosis. We have recently shown that the relation proposed by Cristelli et al. (2012) between skewness and kurtosis can only be verified for relatively small data sets, independently of the type of statistics chosen; however it fails for sufficiently large data sets, if the fourth moment of the distribution is finite. For infinite fourth moments, kurtosis is not defined as the size of the data set tends to infinity. For distributions with finite fourth moments, the size, N, of the data set for which the standard kurtosis saturates to a fixed value, depends on the deviation of the original distribution from the Gaussian. Nevertheless, using kurtosis as a criterion for deciding which distribution deviates further from the Gaussian can be misleading for small data sets, even for finite fourth moment distributions. Going over to q-statistics, we find that although the value of q-kurtosis is finite in the range of 0 < q < 3, this quantity is not useful for comparing different non-Gaussian distributed data sets, unless the appropriate q value, which truly characterizes the data set of interest, is chosen. Finally, we propose a method to determine the correct q value and thereby to compute the q-kurtosis of q-Gaussian distributed data sets.
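The data-set-size pitfall for standard kurtosis can be demonstrated numerically; the Student-t example and sample sizes below are illustrative choices, not the distributions analyzed in the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sample_kurtosis(dist_sampler, n, reps=200):
    """Mean excess kurtosis over many independent datasets of size n."""
    return float(np.mean([stats.kurtosis(dist_sampler(n)) for _ in range(reps)]))

gauss = lambda n: rng.normal(size=n)
# Student-t with nu = 5: finite fourth moment, true excess kurtosis 6/(nu - 4) = 6
heavy = lambda n: rng.standard_t(df=5, size=n)

g_large = sample_kurtosis(gauss, 10_000)
h_small = sample_kurtosis(heavy, 100)
h_large = sample_kurtosis(heavy, 10_000)
print(g_large, h_small, h_large)
```

For the heavy-tailed case, small datasets systematically understate the true kurtosis, so comparing distributions by kurtosis at small N can indeed mislead.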
Twisted versus braided magnetic flux ropes in coronal geometry. II. Comparative behaviour
NASA Astrophysics Data System (ADS)
Prior, C.; Yeates, A. R.
2016-06-01
Aims: Sigmoidal structures in the solar corona are commonly associated with magnetic flux ropes whose magnetic field lines are twisted about a mutual axis. Their dynamical evolution is well studied, with sufficient twisting leading to large-scale rotation (writhing) and vertical expansion, possibly leading to ejection. Here, we investigate the behaviour of flux ropes whose field lines have more complex entangled/braided configurations. Our hypothesis is that this internal structure will inhibit the large-scale morphological changes. Additionally, we investigate the influence of the background field within which the rope is embedded. Methods: A technique for generating tubular magnetic fields with arbitrary axial geometry and internal structure, introduced in part I of this study, provides the initial conditions for resistive-MHD simulations. The tubular fields are embedded in a linear force-free background, and we consider various internal structures for the tubular field, including both twisted and braided topologies. These embedded flux ropes are then evolved using a 3D MHD code. Results: Firstly, in a background where twisted flux ropes evolve through the expected non-linear writhing and vertical expansion, we find that flux ropes with sufficiently braided/entangled interiors show no such large-scale changes. Secondly, embedding a twisted flux rope in a background field with a sigmoidal inversion line leads to eventual reversal of the large-scale rotation. Thirdly, in some cases a braided flux rope splits due to reconnection into two twisted flux ropes of opposing chirality - a phenomenon previously observed in cylindrical configurations. Conclusions: Sufficiently complex entanglement of the magnetic field lines within a flux rope can suppress large-scale morphological changes of its axis, with magnetic energy reduced instead through reconnection and expansion. 
The structure of the background magnetic field can significantly affect the changing morphology of a flux rope.
Mitigation of ^{42}Ar/^{42}K background for the GERDA Phase II experiment
NASA Astrophysics Data System (ADS)
Lubashevskiy, A.; Agostini, M.; Budjáš, D.; Gangapshev, A.; Gusev, K.; Heisel, M.; Klimenko, A.; Lazzaro, A.; Lehnert, B.; Pelczar, K.; Schönert, S.; Smolnikov, A.; Walter, M.; Zuzel, G.
2018-01-01
Background from the ^{42}Ar decay chain is considered one of the most relevant for the Gerda experiment, which searches for the neutrinoless double beta decay of ^{76}Ge. The sensitivity relies strongly on the absence of background around the Q-value of the decay. ^{42}K, a progeny of ^{42}Ar, can contribute via electrons from its continuous beta spectrum, which has an endpoint at 3.5 MeV. Research and development on suppression methods targeting this background source were performed at the low-background test facility LArGe. It was demonstrated that by reducing ^{42}K ion collection on the surfaces of the broad energy germanium detectors, in combination with pulse shape discrimination techniques and an argon scintillation veto, it is possible to suppress the ^{42}K background by three orders of magnitude. This is sufficient for Phase II of the Gerda experiment.
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Conkright, Margarita E.
1999-01-01
The historical archives of in situ (National Oceanographic Data Center) and satellite (Coastal Zone Color Scanner) chlorophyll data were combined using the blended analysis method of Reynolds [1988] in an attempt to construct an improved climatological seasonal representation of global chlorophyll distributions. The results of the blended analysis differed dramatically from the CZCS representation: global chlorophyll estimates increased 8-35% in the blended analysis depending upon season. Regional differences were even larger, up to 140% in the equatorial Indian Ocean in summer (during the southwest monsoon). Tropical Pacific chlorophyll values increased 25-41%. The results suggested that the CZCS generally underestimates chlorophyll. Regional and seasonal differences in the blended analysis were sufficiently large as to produce a different representation of global chlorophyll distributions than otherwise inferred from CZCS data alone. Analyses of primary production and biogeochemical cycles may be substantially impacted by these results.
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
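The setting can be illustrated with a toy Monte Carlo check of Type-I error; the combined test below (exact geometric and binomial p-values pooled by Fisher's method) is a simplified stand-in for the authors' derived tests, and all parameter values are assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def combined_test(t_first, k, n, p0):
    """Toy test of H0: both parameters equal p0.
    t_first: trial index of the first success (geometric part);
    k successes in n subsequent trials (binomial part)."""
    geo = stats.geom(p0)
    # two-sided geometric p-value via the doubled one-sided tail
    p_geo = min(1.0, 2 * min(geo.cdf(t_first), geo.sf(t_first - 1)))
    p_bin = stats.binomtest(k, n, p0).pvalue
    # Fisher's combination of the two independent p-values
    chi2 = -2 * (np.log(p_geo) + np.log(p_bin))
    return stats.chi2.sf(chi2, df=4)

# Monte Carlo Type-I error under H0 (p = p0 throughout the sequence)
p0, n, alpha, reps = 0.3, 50, 0.05, 2000
rejections = 0
for _ in range(reps):
    t_first = rng.geometric(p0)      # trials until first success
    k = rng.binomial(n, p0)          # successes after the first success
    if combined_test(t_first, k, n, p0) < alpha:
        rejections += 1
type1 = rejections / reps
print(type1)
```

Because both component tests are discrete and conservative, the empirical rejection rate stays at or below the nominal 5% level.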
The Heliotail: Theory and Modeling
Pogorelov, N. V.
2016-05-31
Physical processes are discussed related to the heliotail which is formed when the solar wind interacts with the local interstellar medium. Although astrotails are commonly observed, the heliotail observations are only indirect. As a consequence, the direct comparison of the observed astrophysical objects and the Sun is impossible. This requires proper theoretical understanding of the heliotail formation and evolution, and numerical simulations in sufficiently large computational boxes. In this paper, we review some previous results related to the heliotail flow and show new simulations which demonstrate that the solar wind collimation inside the Parker spiral field lines diverted by the heliopause toward the heliotail is unrealistic. On the contrary, solar cycle effects ensure that the solar wind density reaches its largest values near the solar equatorial plane. We also argue that a realistic heliotail should be very long to account for the observed anisotropy of 1-10 TeV cosmic rays.
Continuous composition-spread thin films of transition metal oxides by pulsed-laser deposition
NASA Astrophysics Data System (ADS)
Ohkubo, I.; Christen, H. M.; Khalifah, P.; Sathyamurthy, S.; Zhai, H. Y.; Rouleau, C. M.; Mandrus, D. G.; Lowndes, D. H.
2004-02-01
We have designed an improved pulsed-laser deposition-continuous composition-spread (PLD-CCS) system that overcomes the difficulties associated with earlier related techniques. Our new PLD-CCS system is based on precisely controlled synchronization between the laser firing, target exchange, and substrate translation/rotation, and offers more flexibility and control than earlier PLD-based approaches. Most importantly, the deposition energetics and the film thickness are kept constant across the entire composition range, and the resulting samples are sufficiently large to allow characterization by conventional techniques. We fabricated binary alloy composition-spread films composed of SrRuO3 and CaRuO3. Alternating ablation from two different ceramic targets leads to in situ alloy formation, and the value of x in SrxCa1-xRuO3 can be varied linearly from 0 to 1 (or over any arbitrarily smaller range) along one direction of the substrate.
Use of mucolytics to enhance magnetic particle retention at a model airway surface
NASA Astrophysics Data System (ADS)
Ally, Javed; Roa, Wilson; Amirfazli, A.
A previous study has shown that retention of magnetic particles at a model airway surface requires prohibitively strong magnetic fields. As mucus viscoelasticity is the most significant factor contributing to clearance of magnetic particles from the airway surface, mucolytics are considered in this study to reduce mucus viscoelasticity and enable particle retention with moderate-strength magnetic fields. The excised frog palate model was used to simulate the airway surface. Two mucolytics, N-acetylcysteine (NAC) and dextran sulfate (DS), were tested. NAC was found to enable retention at moderate field values (148 mT with a gradient of 10.2 T/m), whereas DS was found to be effective only for sufficiently large particle concentrations at the airway surface. The possible mechanisms for the observed behavior with different mucolytics are also discussed in terms of aggregate formation and the loading of cilia.
NASA Technical Reports Server (NTRS)
Wiebe, H. A.; Heicklen, J.
1972-01-01
The photolysis of CH3ONO, alone and in the presence of NO, NO-N2 mixtures, and NO-CO mixtures, was studied between 25 and 150 C. The major products are CH2O, N2O, and H2O. The quantum yields of N2O were measured. The N2O yield is large at low pressures but approaches a high-pressure limiting value of 0.055 at all temperatures as the excited CH3O produced in the primary step is stabilized by collision. In the presence of excess CO, the N2O yield drops and CO2 is produced (though not in sufficient amounts to account for the drop in N2O). When pure CH3ONO is photolyzed, CO is produced and NO accumulates in the system. Both products are formed in related processes and result from CH3O attack on CH2O.
A Variational Statistical-Field Theory for Polar Liquid Mixtures
NASA Astrophysics Data System (ADS)
Zhuang, Bilin; Wang, Zhen-Gang
Using a variational field-theoretic approach, we derive a molecularly based theory for polar liquid mixtures. The resulting theory consists of simple algebraic expressions for the free energy of mixing and the dielectric constant as functions of mixture composition. Using only the dielectric constants and molar volumes of the pure liquid constituents, the theory evaluates mixture dielectric constants in good agreement with experimental values for a wide range of liquid mixtures, without adjustable parameters. In addition, the theory predicts that liquids with similar dielectric constants and molar volumes dissolve well in each other, while sufficient disparity in these parameters results in phase separation. The calculated miscibility map on the dielectric constant-molar volume axes agrees well with known experimental observations for a large number of liquid pairs. Thus the theory provides a quantification of the well-known empirical ``like-dissolves-like'' rule. BZ acknowledges the A-STAR fellowship for financial support.
Similarity principles for the biology of pelagic animals
Barenblatt, G. I.; Monin, A. S.
1983-01-01
A similarity principle is formulated according to which the statistical pattern of the pelagic population is identical on all scales sufficiently large in comparison with the molecular one. From this principle, a power law is obtained analytically for the distribution of pelagic animal biomass over animal sizes. A hypothesis is presented according to which, under fixed external conditions, the oxygen exchange intensity of an animal is governed only by its mass and density and by the specific absorbing capacity of the animal's respiratory organ. From this hypothesis, a power law for the dependence of exchange intensity on mass is obtained by dimensional analysis. The known empirical values of the exponent of this power law are interpreted as an indication that the oxygen-absorbing organs of the animals can be represented as so-called fractal surfaces. In conclusion, the biological principle of the decrease in specific exchange intensity with increase in animal mass is discussed. PMID:16593327
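Estimating such a power-law exponent from data reduces to a straight-line fit in log-log space; the synthetic masses, noise level, and exponent 0.75 below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic allometric data: exchange intensity ~ a * mass^b with b = 0.75
true_b = 0.75
mass = np.exp(rng.uniform(np.log(1e-3), np.log(1e3), size=200))  # arbitrary units
rate = 2.0 * mass**true_b * np.exp(rng.normal(0.0, 0.1, size=200))

# A power law y = a * m^b is linear in log-log space: log y = log a + b log m
b_hat, log_a_hat = np.polyfit(np.log(mass), np.log(rate), deg=1)
print(b_hat)
```

With a wide mass range, the fitted slope recovers the exponent closely, which is how empirical exponents like those discussed above are typically extracted.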
The calculation of aquifer chemistry in hot-water geothermal systems
Truesdell, Alfred H.; Singers, Wendy
1974-01-01
The temperature and chemical conditions (pH, gas pressure, and ion activities) in a geothermal aquifer supplying a producing bore can be calculated from the enthalpy of the total fluid (liquid + vapor) produced and chemical analyses of water and steam separated and collected at known pressures. Alternatively, if a single water phase exists in the aquifer, the complete analysis (including gases) of a sample collected from the aquifer by a downhole sampler is sufficient to determine the aquifer chemistry without a measured value of the enthalpy. The assumptions made are that the fluid is produced from a single aquifer and is homogeneous in enthalpy and chemical composition. These calculations of aquifer chemistry involving large amounts of ancillary information and many iterations require computer methods. A computer program in PL-1 to perform these calculations is available from the National Technical Information Service as document PB-219 376.
NASA Astrophysics Data System (ADS)
Syrunin, M. A.; Fedorenko, A. G.
2006-08-01
We have shown experimentally that, for cylindrical shells made of oriented fiberglass plastic and basalt plastic, there exists a critical level of deformation at which a structure sustains a given number of explosions from the inside. The magnitude of the critical deformation for cylindrical fiberglass shells depends linearly on the logarithm of the number of loads that cause failure. For a given type of fiberglass, there is a limiting level of explosive action at which the number of loads that do not lead to failure can be sufficiently large (more than ~10^2). This level is attained under loads an order of magnitude lower than the limiting loads under a single explosive action. Basalt plastic shells can be repeatedly used even at loads causing deformation ~30-50% lower than the safe value of ~3-3.5% at single loading.
Comparative performance evaluation of transform coding in image pre-processing
NASA Astrophysics Data System (ADS)
Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha
2017-07-01
We are in the midst of a communication transformation that drives both the development and the dissemination of new communication systems with ever-increasing fidelity and resolution. Much research in image processing has been motivated by the demand for faster and easier encoding, storage, and transmission of visual information. In this paper, the researchers examine techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. They investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Motivated by prior advances in image processing, the researchers compare the performance of several contemporary image pre-processing frameworks: compressed sensing, singular value decomposition, and the integer wavelet transform. The paper shows the potential of the integer wavelet transform as an efficient pre-processing scheme.
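Of the compared frameworks, the SVD-based one is the simplest to sketch: a low-rank approximation discards small singular values before transmission. The synthetic image and rank choices below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 64x64 "image": smooth structure plus a little noise
x = np.linspace(0.0, 1.0, 64)
img = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
img += 0.05 * rng.normal(size=(64, 64))

U, s, Vt = np.linalg.svd(img, full_matrices=False)

def rank_k(k):
    """Best rank-k approximation in the Frobenius norm (Eckart-Young)."""
    return (U[:, :k] * s[:k]) @ Vt[:k]

# Reconstruction error drops as more singular values are kept
errors = {k: float(np.linalg.norm(img - rank_k(k))) for k in (1, 4, 16)}
print(errors)
```

Transmitting only the k leading singular triplets trades reconstruction error for payload size, which is the pre-processing trade-off the paper evaluates.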
NASA Technical Reports Server (NTRS)
Lyttleton, R. A.
1973-01-01
The terrestrial planets aggregated essentially from small particles, to begin as solid cool bodies with the same general compositions, and there is no possibility of an iron-core developing within any of them at any stage. Their differing internal and surface properties receive ready explanation from their different masses which determine whether the pressures within are sufficient to bring about phase-changes. The claim that the terrestrial core can be identified by means of shock-wave data as nickel-iron is based on theoretical misconception, whereas the actual seismic data establish an uncompressed-density value much lower than any such mixture could have. The onset of the Ramsey phase-change in the earth takes the form of a rapid initial collapse to produce a large core in metallic state which thereafter continues to grow secularly as a result of radioactive heating and leads to reduction of surface-area at long last adequate to account for folded and thrusted mountain-building.
Dense motion estimation using regularization constraints on local parametric models.
Patras, Ioannis; Worring, Marcel; van den Boomgaard, Rein
2004-11-01
This paper presents a method for dense optical flow estimation in which the motion field within patches that result from an initial intensity segmentation is parametrized with models of different order. We propose a novel formulation which introduces regularization constraints between the model parameters of neighboring patches. In this way, we provide the additional constraints for very small patches and for patches whose intensity variation cannot sufficiently constrain the estimation of their motion parameters. In order to preserve motion discontinuities, we use robust functions as a regularization mean. We adopt a three-frame approach and control the balance between the backward and forward constraints by a real-valued direction field on which regularization constraints are applied. An iterative deterministic relaxation method is employed in order to solve the corresponding optimization problem. Experimental results show that the proposed method deals successfully with motions large in magnitude, motion discontinuities, and produces accurate piecewise-smooth motion fields.
Seismogenic width controls aspect ratios of earthquake ruptures
NASA Astrophysics Data System (ADS)
Weng, Huihui; Yang, Hongfeng
2017-03-01
We investigate the effect of seismogenic width on aspect ratios of earthquake ruptures by using numerical simulations of strike-slip faulting and an energy balance criterion near rupture tips. If the seismogenic width is smaller than a critical value, then ruptures cannot break the entire fault, regardless of the size of the nucleation zone. The seismic moments of these self-arresting ruptures increase with the nucleation size, forming nucleation-related events. The aspect ratios increase with the seismogenic width but are smaller than 8. In contrast, ruptures become breakaway and tend to have high aspect ratios (>8) if the seismogenic width is sufficiently large. But the critical nucleation size is larger than the theoretical estimate for an unbounded fault. The eventual seismic moments of breakaway ruptures do not depend on the nucleation size. Our results suggest that estimating final earthquake magnitude from the nucleation phase may only be plausible on faults with small seismogenic width.
Modeling Tool for Decision Support during Early Days of an Anthrax Event.
Rainisch, Gabriel; Meltzer, Martin I; Shadomy, Sean; Bower, William A; Hupert, Nathaniel
2017-01-01
Health officials lack field-implementable tools for forecasting the effects that a large-scale release of Bacillus anthracis spores would have on public health and hospitals. We created a modeling tool (combining inhalational anthrax caseload projections based on initial case reports, effects of variable postexposure prophylaxis campaigns, and healthcare facility surge capacity requirements) to project hospitalizations and casualties from a newly detected inhalation anthrax event, and we examined the consequences of intervention choices. With only 3 days of case counts, the model can predict final attack sizes for simulated Sverdlovsk-like events (1979 USSR) with sufficient accuracy for decision making and confirms the value of early postexposure prophylaxis initiation. According to a baseline scenario, hospital treatment volume peaks 15 days after exposure, deaths peak earlier (day 5), and recovery peaks later (day 23). This tool gives public health, hospital, and emergency planners scenario-specific information for developing quantitative response plans for this threat.
Automated Analysis of Fluorescence Microscopy Images to Identify Protein-Protein Interactions
Venkatraman, S.; Doktycz, M. J.; Qi, H.; ...
2006-01-01
The identification of protein interactions is important for elucidating biological networks. One obstacle in comprehensive interaction studies is the analysis of large datasets, particularly those containing images. Development of an automated system to analyze an image-based protein interaction dataset is needed. Such an analysis system is described here, to automatically extract features from fluorescence microscopy images obtained from a bacterial protein interaction assay. These features are used to relay quantitative values that aid in the automated scoring of positive interactions. Experimental observations indicate that identifying at least 50% positive cells in an image is sufficient to detect a protein interaction. Based on this criterion, the automated system presents 100% accuracy in detecting positive interactions for a dataset of 16 images. Algorithms were implemented using MATLAB and the software developed is available on request from the authors.
Optical Properties of Aerosol Types from Satellite and Ground-based Observations
NASA Astrophysics Data System (ADS)
Lin, Tang-Huang; Liu, Gin-Rong; Liu, Chian-Yi
2014-05-01
In this study, the properties of aerosol types are characterized through both remote sensing and in situ measurements. Particles of dust, smoke, and anthropogenic pollutants are selected as the principal types. Measurements from AERONET sites and MODIS data during the dust storm and biomass burning events of 2002 to 2008 suggest that aerosol species can be discriminated sufficiently well based on the dissimilarity of their AE (Ångström exponent) and SSA (single scattering albedo) properties. However, the physicochemical characteristics of source aerosols can be altered by external/internal mixing along the transport pathway, thus inducing errors in satellite retrievals. To eliminate such errors, the optical properties of externally mixed aerosols are also simulated in this study using a database of dust and soot aggregates. Preliminary results show that the SSA value (at 470 nm) of mineral dust may decrease by 5-11% when externally mixed with 15-30% soot aggregates, resulting in an 11-22% variation in the reflectance observed from satellite, which could lead to large uncertainty in the retrieval of aerosol optical thickness. As a result, the effect of heterogeneous mixtures should be taken into account for more accurate retrieval of aerosol properties, especially after long-range transport. Keywords: Aerosol type, Ångström exponent, Single scattering albedo, AERONET, MODIS, External mixture
CaFe2O4 as a self-sufficient solar energy converter
NASA Astrophysics Data System (ADS)
Tablero, C.
2017-10-01
An ideal converter of solar energy to electricity or fuel should work without any external bias potential. An analysis of self-sufficiency when CaFe2O4 is used to absorb sunlight is carried out based on the CaFe2O4 absorption coefficient. We first obtain this coefficient theoretically within the experimental bandgap range in order to bound the possible values of photocurrent, maximum absorption efficiency, and photovoltage, and thus of self-sufficiency, considering only radiative processes. For single-gap CaFe2O4 we also evaluate an alternative for increasing the photocurrent and maximum absorption efficiency, based on inserting an intermediate band through heavy doping or alloying.
Diversity and Community Can Coexist.
Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael
2016-03-01
We examine the (in)compatibility of diversity and sense of community by means of agent-based models based on the well-known Schelling model of residential segregation and Axelrod model of cultural dissemination. We find that diversity and highly clustered social networks, on the assumptions of social tie formation based on spatial proximity and homophily, are incompatible when agent features are immutable, and this holds even for multiple independent features. We include both mutable and immutable features into a model that integrates Schelling and Axelrod models, and we find that even for multiple independent features, diversity and highly clustered social networks can be incompatible on the assumptions of social tie formation based on spatial proximity and homophily. However, this incompatibility breaks down when cultural diversity can be sufficiently large, at which point diversity and clustering need not be negatively correlated. This implies that segregation based on immutable characteristics such as race can possibly be overcome by sufficient similarity on mutable characteristics based on culture, which are subject to a process of social influence, provided a sufficiently large "scope of cultural possibilities" exists. © Society for Community Research and Action 2016.
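The Schelling dynamics underlying the model can be sketched in a few lines; the grid size, tolerance threshold, vacancy rate, and random-relocation rule below are illustrative assumptions, not the paper's integrated Schelling-Axelrod model:

```python
import numpy as np

rng = np.random.default_rng(11)
N, threshold, sweeps = 30, 0.5, 40

# Grid of -1/+1 agents with roughly 10% empty cells (coded 0)
grid = rng.choice([-1, 1], size=(N, N))
grid[rng.random((N, N)) < 0.10] = 0

def like_fraction(g, i, j):
    """Fraction of occupied Moore neighbors sharing cell (i, j)'s type."""
    nb = g[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
    same = np.sum(nb == g[i, j]) - 1        # exclude the cell itself
    occupied = np.sum(nb != 0) - 1
    return same / occupied if occupied else 1.0

def mean_like(g):
    vals = [like_fraction(g, i, j)
            for i in range(N) for j in range(N) if g[i, j] != 0]
    return float(np.mean(vals))

before = mean_like(grid)
for _ in range(sweeps):
    movers = [(i, j) for i in range(N) for j in range(N)
              if grid[i, j] != 0 and like_fraction(grid, i, j) < threshold]
    empties = list(zip(*np.where(grid == 0)))
    for i, j in movers:                     # unhappy agents jump to random empty cells
        if not empties:
            break
        k = int(rng.integers(len(empties)))
        ei, ej = empties.pop(k)
        grid[ei, ej], grid[i, j] = grid[i, j], 0
        empties.append((i, j))
after = mean_like(grid)
print(before, after)
```

Even with a mild 50% tolerance, the mean like-neighbor fraction rises over the sweeps, which is the segregation effect of immutable features the paper builds on.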
TCR-engineered, customized, antitumor T cells for cancer immunotherapy: advantages and limitations.
Chhabra, Arvind
2011-01-05
The clinical outcome of the traditional adoptive cancer immunotherapy approaches involving the administration of donor-derived immune effectors, expanded ex vivo, has not met expectations. This could be attributed, in part, to the lack of sufficient high-avidity antitumor T-cell precursors in most cancer patients, poor immunogenicity of cancer cells, and the technological limitations to generate a sufficiently large number of tumor antigen-specific T cells. In addition, the host immune regulatory mechanisms and immune homeostasis mechanisms, such as activation-induced cell death (AICD), could further limit the clinical efficacy of the adoptively administered antitumor T cells. Since generation of a sufficiently large number of potent antitumor immune effectors for adoptive administration is critical for the clinical success of this approach, recent advances towards generating customized donor-specific antitumor-effector T cells by engrafting human peripheral blood-derived T cells with a tumor-associated antigen-specific transgenic T-cell receptor (TCR) are quite interesting. This manuscript provides a brief overview of the TCR engineering-based cancer immunotherapy approach, its advantages, and the current limitations.
On the theory of electric double layer with explicit account of a polarizable co-solvent.
Budkov, Yu A; Kolesnikov, A L; Kiselev, M G
2016-05-14
We present a continuation of our theoretical research into the influence of co-solvent polarizability on a differential capacitance of the electric double layer. We formulate a modified Poisson-Boltzmann theory, using the formalism of density functional approach on the level of local density approximation taking into account the electrostatic interactions of ions and co-solvent molecules as well as their excluded volume. We derive the modified Poisson-Boltzmann equation, considering the three-component symmetric lattice gas model as a reference system and minimizing the grand thermodynamic potential with respect to the electrostatic potential. We apply present modified Poisson-Boltzmann equation to the electric double layer theory, showing that accounting for the excluded volume of co-solvent molecules and ions slightly changes the main result of our previous simplified theory. Namely, in the case of small co-solvent polarizability with its increase under the enough small surface potentials of electrode, the differential capacitance undergoes the significant growth. Oppositely, when the surface potential exceeds some threshold value (which is slightly smaller than the saturation potential), the increase in the co-solvent polarizability results in a differential capacitance decrease. However, when the co-solvent polarizability exceeds some threshold value, its increase generates a considerable enhancement of the differential capacitance in a wide range of surface potentials. We demonstrate that two qualitatively different behaviors of the differential capacitance are related to the depletion and adsorption of co-solvent molecules at the charged electrode. We show that an additive of the strongly polarizable co-solvent to an electrolyte solution can shift significantly the saturation potential in two qualitatively different manners. Namely, a small additive of strongly polarizable co-solvent results in a shift of saturation potential to higher surface potentials. 
On the contrary, a sufficiently large additive of co-solvent shifts the saturation potential to lower surface potentials. We find that an increase in the co-solvent polarizability makes the electrostatic potential profile longer-ranged, whereas an increase in the bulk co-solvent concentration leads to non-monotonic behavior of the profile. At sufficiently small bulk concentrations, increasing the co-solvent concentration makes the electrostatic potential profile longer-ranged; conversely, once the bulk concentration exceeds a threshold value, a further increase decreases the electrostatic potential at all distances from the electrode.
BBD Reference Set Application: Jeffery Marks-Duke (2015) — EDRN Public Portal
We propose a pre-validation study for markers that could predict the likelihood of invasive breast cancer following a tissue diagnosis of benign breast pathology (any diagnosis that is less severe than carcinoma in situ). The study is designed to test the utility of a series of markers that were shown to have some predictive value by immunohistochemical staining in other cohorts. These markers include the proliferation associated antigen KI-67, EZH2, PTGS2 (COX2), ALDH1, CDKN2A (p16), HYAL1, MMP1, CEACAM6, and TP53. In addition, we propose analyzing two markers that comprise part of the DCIS Oncotype panel, GSTM1 and progesterone receptor (PR). The study will occur in two EDRN clinical validation center (CVC) laboratories, namely Duke and University of Kansas, and utilize specimens from Northwestern University and Geisinger Health System that have been identified and are either already sectioned or waiting to be sectioned. Results will be scored and returned to the DMCC to determine whether any of the markers or combinations of these markers may have sufficient value to proceed to a second stage validation with large numbers of samples from Geisinger Health System and the Henry Ford Hospital.
Mates but not sexes differ in migratory niche in a monogamous penguin species.
Thiebot, Jean-Baptiste; Bost, Charles-André; Dehnhard, Nina; Demongin, Laurent; Eens, Marcel; Lepoint, Gilles; Cherel, Yves; Poisbleau, Maud
2015-09-01
Strong pair bonds generally increase fitness in monogamous organisms, but may also carry the risk of hampering it when re-pairing fails after the winter season. We investigated whether partners would either maintain contact or offset this risk by exploiting sex-specific favourable niches during winter in a migratory monogamous seabird, the southern rockhopper penguin Eudyptes chrysocome. Using light-based geolocation, we show that although the spatial distributions of the two sexes largely overlapped, pair-wise mates were located on average 595 ± 260 km (and up to 2500 km) apart during winter. Stable isotope data also indicated a marked overlap between sex-specific isotopic niches (δ¹³C and δ¹⁵N values) but a segregation of feeding habitats (δ¹³C values) within pairs. Importantly, the tracked females remained longer (12 days) at sea than males, but all re-mated with their previous partners after winter. Our study provides multiple lines of evidence that migratory species may demonstrate pair-wise segregation even in the absence of sex-specific winter niches (spatial and isotopic). We suggest that dispersive migration patterns with sex-biased timings may be a sufficient proximate cause for generating such a situation in migratory animals.
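The pair-wise separations reported above (595 ± 260 km on average) are great-circle distances between the geolocation fixes of mates. As a minimal illustration of how such a separation can be computed from latitude/longitude estimates, here is the standard haversine formula (the function name and mean Earth radius are our choices, not taken from the paper):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon fixes given in degrees."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

One degree of longitude at the equator is roughly 111 km, which gives a quick sanity check on the implementation.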
A Fast Variant of 1H Spectroscopic U-FLARE Imaging Using Adjusted Chemical Shift Phase Encoding
NASA Astrophysics Data System (ADS)
Ebel, Andreas; Dreher, Wolfgang; Leibfritz, Dieter
2000-02-01
So far, fast spectroscopic imaging (SI) using the U-FLARE sequence has provided metabolic maps indirectly via Fourier transformation (FT) along the chemical shift (CS) dimension and subsequent peak integration. However, a large number of CS encoding steps Nω is needed to cover the spectral bandwidth and to achieve sufficient spectral resolution for peak integration even if the number of resonance lines is small compared to Nω and even if only metabolic images are of interest and not the spectra in each voxel. Other reconstruction algorithms require extensive prior knowledge, starting values, and/or model functions. An adjusted CS phase encoding scheme (APE) can be used to overcome these drawbacks. It incorporates prior knowledge only about the resonance frequencies present in the sample. Thus, Nω can be reduced by a factor of 4 for many 1H in vivo studies while no spectra have to be reconstructed, and no additional user interaction, prior knowledge, starting values, or model function are required. Phantom measurements and in vivo experiments on rat brain have been performed at 4.7 T to test the feasibility of the method for proton SI.
Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach
NASA Astrophysics Data System (ADS)
Alba, Vincenzo
By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix approach (QTM). Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) roots distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE, in which, besides the local integral of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
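The key idea above is to sample states with the GGE probability measure. The following toy Metropolis sampler illustrates that idea on a generic discrete state space with weight exp(-Σ_m β_m Q_m(s)); it is a schematic sketch only, not the paper's Bethe-ansatz construction, and all names and parameters are our own:

```python
import math
import random

def metropolis_gge(states, charges, betas, n_steps=20000, seed=1):
    """Toy Metropolis sampler with GGE-like weight
    w(s) ∝ exp(-sum_m betas[m] * charges[m](s))."""
    rng = random.Random(seed)

    def log_weight(s):
        return -sum(b * q(s) for b, q in zip(betas, charges))

    s = rng.choice(states)
    counts = {x: 0 for x in states}
    for _ in range(n_steps):
        t = rng.choice(states)  # propose a uniformly random state
        # Metropolis acceptance with the ratio of GGE weights
        if math.log(rng.random()) < log_weight(t) - log_weight(s):
            s = t
        counts[s] += 1
    return counts
```

For a two-state toy system with a single conserved charge Q(s) = s and β = 2, the sampler visits the low-charge state far more often, as the GGE weight dictates.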
Stellar winds in binary X-ray systems
NASA Technical Reports Server (NTRS)
Macgregor, K. B.; Vitello, P. A. J.
1982-01-01
It is thought that accretion from a strong stellar wind by a compact object may be responsible for the X-ray emission from binary systems containing a massive early-type primary. To investigate the effect of X-ray heating and ionization on the mass transfer process in systems of this type, an idealized model is constructed for the flow of a radiation-driven wind in the presence of an X-ray source of specified luminosity, L sub x. It is noted that for low values of L sub x, X-ray photoionization gives rise to additional ions having spectral lines with wavelengths situated near the peak of the primary continuum flux distribution. As a consequence, the radiation force acting on the gas increases in relation to its value in the absence of X-rays, and the wind is accelerated to higher velocities. As L sub x is increased, the degree of ionization of the wind increases, and the magnitude of the radiation force is diminished in comparison with the case in which L sub x = 0. This reduction leads at first to a decrease in the wind velocity and ultimately (for L sub x sufficiently large) to the termination of radiatively driven mass loss.
Quantum dynamics of the Einstein-Rosen wormhole throat
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunstatter, Gabor; Peltola, Ari; Louko, Jorma
2011-02-15
We consider the polymer quantization of the Einstein wormhole throat theory for an eternal Schwarzschild black hole. We numerically solve the difference equation describing the quantum evolution of an initially Gaussian, semiclassical wave packet. As expected from previous work on loop quantum cosmology, the wave packet remains semiclassical until it nears the classical singularity, at which point it enters a quantum regime in which the fluctuations become large. The expectation value of the radius reaches a minimum as the wave packet is reflected from the origin and emerges to form a near-Gaussian but asymmetrical semiclassical state at late times. The value of the minimum depends in a nontrivial way on the initial mass/energy of the pulse, its width, and the polymerization scale. For wave packets that are sufficiently narrow near the bounce, the semiclassical bounce radius is obtained. Although the numerics become difficult to control in this limit, we argue that for pulses of finite width the bounce persists as the polymerization scale goes to zero, suggesting that in this model the loop quantum gravity effects mimicked by polymer quantization do not play a crucial role in the quantum bounce.
Compensating the intensity fall-off effect in cone-beam tomography by an empirical weight formula.
Chen, Zikuan; Calhoun, Vince D; Chang, Shengjiang
2008-11-10
The Feldkamp-Davis-Kress (FDK) algorithm is widely adopted for cone-beam reconstruction due to its one-dimensional filtered backprojection structure and parallel implementation. In a reconstruction volume, the conspicuous cone-beam artifact manifests as intensity fall-off along the longitudinal direction (the gantry rotation axis). This effect is inherent to circular cone-beam tomography because a cone-beam dataset acquired from circular scanning fails to meet the data sufficiency condition for volume reconstruction. Based on observations of the intensity fall-off phenomenon in FDK reconstructions of a ball phantom, we propose an empirical weight formula to compensate for the fall-off degradation. Specifically, a reciprocal cosine can be used to compensate the voxel values along the longitudinal direction during three-dimensional backprojection reconstruction, in particular to boost the values of voxels at positions with large cone angles. The intensity degradation within the z plane, albeit insignificant, can also be compensated using the same weight formula through a parameter for radial distance dependence. Computer simulations and phantom experiments are presented to demonstrate the effectiveness of the compensation for the fall-off effect inherent in circular cone-beam tomography.
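The abstract does not give the exact empirical formula, but the reciprocal-cosine idea can be sketched as follows: leave in-plane voxels unchanged and multiply off-plane voxels by a weight that grows with the cone angle. The function name, the exponent parameter p, and the geometry are illustrative assumptions, not the paper's published formula:

```python
import math

def fall_off_weight(z, dist_source_axis, p=1.0):
    """Illustrative reciprocal-cosine compensation weight for a voxel at
    height z (along the rotation axis) in a geometry with the given
    source-to-axis distance. p tunes the compensation strength; both the
    form and the parameter are assumptions, not the paper's exact formula."""
    cone = math.atan2(abs(z), dist_source_axis)  # approximate cone angle of the voxel
    return (1.0 / math.cos(cone)) ** p

# In-plane voxels (z = 0) are unweighted; the weight grows monotonically with |z|.
```

Applied during backprojection, such a weight boosts voxel values at large cone angles, counteracting the longitudinal intensity fall-off.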
Ilesanmi, A O; Adeleye, J A; Osotimehin, B O
1995-03-01
Infertility remains a medico-social problem in Nigeria and accounts for a large percentage of outpatient gynecological consultations. The evaluation of the infertile couple remains a continuing challenge to the practising doctor in this part of the world, indicating the need to evaluate the two methods commonly used for determining ovulation in these patients. An endometrial biopsy specimen and a single sample for serum progesterone estimation were obtained simultaneously in the luteal phase from 50 normally menstruating infertile Nigerian women. Subsequent analysis showed that a serum progesterone value of 6.6 nmol/l (2.2 ng/ml) or above was always associated with a secretory endometrium. Forty-six cycles yielded sufficient information to compare the two methods for confirmation of ovulation: 91.3% (42/46) of patients showed ovulation by a progesterone value of 6.6 nmol/l (2.2 ng/ml) or above, while 89.1% (41/46) showed a secretory endometrium. In 86.9% (40/46) of cases ovulation was confirmed by both parameters, while 6.5% demonstrated an anovulatory cycle by both criteria. From the study, a significant correlation was obtained between the endometrial biopsy and progesterone assay methods in confirming ovulation.
Schur Stability Regions for Complex Quadratic Polynomials
ERIC Educational Resources Information Center
Cheng, Sui Sun; Huang, Shao Yuan
2010-01-01
Given a quadratic polynomial with complex coefficients, necessary and sufficient conditions are found in terms of the coefficients such that all its roots have absolute values less than 1. (Contains 3 figures.)
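The paper derives coefficient conditions; for intuition, Schur stability of a monic complex quadratic can also be verified numerically by computing the roots directly. This sketch uses the quadratic formula rather than the paper's coefficient criterion (the function name is ours):

```python
import cmath

def schur_stable(b, c):
    """True iff both roots of z**2 + b*z + c (complex b, c) lie strictly
    inside the unit disk. Checked via the quadratic formula, not via the
    coefficient conditions derived in the paper."""
    d = cmath.sqrt(b * b - 4 * c)
    r1, r2 = (-b + d) / 2, (-b - d) / 2
    return abs(r1) < 1 and abs(r2) < 1
```

For example, z² + 0.25 has roots ±0.5i (Schur stable), while z² + 3z + 2 has roots -1 and -2 (not Schur stable, since |-1| is not strictly less than 1).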
Value function in economic growth model
NASA Astrophysics Data System (ADS)
Bagno, Alexander; Tarasyev, Alexandr A.; Tarasyev, Alexander M.
2017-11-01
Properties of the value function are examined in an infinite horizon optimal control problem with an unbounded integrand appearing in the quality functional with a discount factor. Optimal control problems of this type describe solutions in models of economic growth. Necessary and sufficient conditions are derived to ensure that the value function satisfies the infinitesimal stability properties. It is proved that the value function coincides with the minimax solution of the Hamilton-Jacobi equation. A description of the asymptotic growth behavior of the value function is provided for logarithmic, power and exponential quality functionals, and an example is given to illustrate construction of the value function in economic growth models.
NASA Astrophysics Data System (ADS)
de la Beaujardiere, J.
2014-12-01
In February 2014, the US National Oceanic and Atmospheric Administration (NOAA) issued a Big Data Request for Information (RFI) from industry and other organizations (e.g., non-profits, research laboratories, and universities) to assess capability and interest in establishing partnerships to position a copy of NOAA's vast data holdings in the Cloud, co-located with easy and affordable access to analytical capabilities. This RFI was motivated by a number of concerns. First, NOAA's data facilities do not necessarily have sufficient network infrastructure to transmit all available observations and numerical model outputs to all potential users, or sufficient infrastructure to support simultaneous computation by many users. Second, the available data are distributed across multiple services and data facilities, making it difficult to find and integrate data for cross-domain analysis and decision-making. Third, large datasets require users to have substantial network, storage, and computing capabilities of their own in order to fully interact with and exploit the latent value of the data. Finally, there may be commercial opportunities for value-added products and services derived from our data. Putting a working copy of data in the Cloud outside of NOAA's internal networks and infrastructures should reduce demands and risks on our systems, and should enable users to interact with multiple datasets and create new lines of business (much like the industries built on government-furnished weather or GPS data). The NOAA Big Data RFI therefore solicited information on technical and business approaches regarding possible partnership(s) that -- at no net cost to the government and minimum impact on existing data facilities -- would unleash the commercial potential of its environmental observations and model outputs. NOAA would retain the master archival copy of its data. 
Commercial partners would not be permitted to charge fees for access to the NOAA data they receive, but would be able to develop and sell value-added products and services. This effort is still very much in the initial market research phase and involves complexity in both the technical and business domains. This paper will discuss the current status of the activity and potential next steps.
Assays for the activities of polyamine biosynthetic enzymes using intact tissues
Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha
1999-01-01
Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...
Monitoring conservation success in a large oak woodland landscape
Rich Reiner; Emma Underwood; John-O Niles
2002-01-01
Monitoring is essential in understanding the success or failure of a conservation project and provides the information needed to conduct adaptive management. Although there is a large body of literature on monitoring design, it fails to provide sufficient information to practitioners on how to organize and apply monitoring when implementing landscape-scale conservation...
Solving the critical thermal bowing in 3C-SiC/Si(111) by a tilting Si pillar architecture
NASA Astrophysics Data System (ADS)
Albani, Marco; Marzegalli, Anna; Bergamaschini, Roberto; Mauceri, Marco; Crippa, Danilo; La Via, Francesco; von Känel, Hans; Miglio, Leo
2018-05-01
The exceptionally large thermal strain in few-micrometers-thick 3C-SiC films on Si(111), causing severe wafer bending and cracking, is demonstrated to be elastically quenched by substrate patterning in finite arrays of Si micro-pillars, sufficiently large in aspect ratio to allow for lateral pillar tilting, both by simulations and by preliminary experiments. In suspended SiC patches, the mechanical problem is addressed by finite element method: both the strain relaxation and the wafer curvature are calculated at different pillar height, array size, and film thickness. Patches as large as required by power electronic devices (500-1000 μm in size) show a remarkable residual strain in the central area, unless the pillar aspect ratio is made sufficiently large to allow peripheral pillars to accommodate the full film retraction. A sublinear relationship between the pillar aspect ratio and the patch size, guaranteeing a minimal curvature radius, as required for wafer processing and micro-crack prevention, is shown to be valid for any heteroepitaxial system.
Large Angle Transient Dynamics (LATDYN) user's manual
NASA Technical Reports Server (NTRS)
Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.
1991-01-01
A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.
System for producing a uniform rubble bed for in situ processes
Galloway, T.R.
1983-07-05
A method and a cutter are disclosed for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head has a hollow body with a generally circular base and sloping upper surface. A hollow shaft extends from the hollow body. Cutter teeth are mounted on the upper surface of the body and relatively small holes are formed in the body between the cutter teeth. Relatively large peripheral flutes around the body allow material to drop below the drill head. A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale. 4 figs.
Questions Arising from the Assessment of EFL Narrative Writing
ERIC Educational Resources Information Center
Yi, Yong
2013-01-01
This article questions how narrative writing is assessed, seeking to understand what we test, what we value, and why. It uses a single anomalous case that arose in the course of my recent PhD thesis to highlight the issues, asking if sufficient attention is being given to the value of emotional content in a piece of writing in comparison to its…
ERIC Educational Resources Information Center
Topuzova, Lazarina N.
2009-01-01
Because child welfare workers serve the most vulnerable children and families, it is necessary that they have sufficient knowledge, skills, and values (competencies) to provide quality services. This study focuses on competencies that the Division of Child and Family Services, Utah (DCFS) views as essential for entry-level child welfare work, and…
Principles of Air Defense and Air Vehicle Penetration
2000-03-01
Range: For reliable detection, the target signal must reach some minimum or threshold value called S… When internal noise is the only interfer… analyze air defense and air vehicle penetration. Unique expected value models are developed with frequent numerical examples. Radar… penetrator in the presence of spurious returns from internal and external noise will be discussed. Tracking: With sufficient sensor information to determine
Outterson, Kevin; McDonnell, Anthony
2016-05-01
A serious need to spur antibiotic innovation has arisen because of the lack of antibiotics to combat certain conditions and the overuse of other antibiotics leading to greater antibiotic resistance. In response to this need, proposals have been made to Congress to fund antibiotic research through a voucher program for new antibiotics, which would delay generic entry for any drug, even potential blockbuster lifesaving generics. We find this proposal to be inefficient, in part because of the mismatch between the private value of the voucher and the public value of the antibiotic innovation. However, vouchers have the political advantage in the United States of being able to raise sufficient amounts of money without annual appropriations from Congress. We propose that if antibiotic vouchers are to be considered, the design should include dollar and time caps to limit their volatility, sufficient advance notice to protect generic manufacturers, and market-based linkages between the value of the voucher and the value of the antibiotic innovation. We also explore a second option: The federal government could auction vouchers to the highest bidders and use the money to create an antibiotics innovation fund. Project HOPE—The People-to-People Health Foundation, Inc.
[Criterion Validity of the German Version of the CES-D in the General Population].
Jahn, Rebecca; Baumgartner, Josef S; van den Nest, Miriam; Friedrich, Fabian; Alexandrowicz, Rainer W; Wancata, Johannes
2018-04-17
The "Center of Epidemiologic Studies - Depression scale" (CES-D) is a well-known screening tool for depression. Until now, the criterion validity of the German version of the CES-D had not been investigated in a sample of the adult general population. 508 study participants from the Austrian general population completed the CES-D. ICD-10 diagnoses were established using the Schedules for Clinical Assessment in Neuropsychiatry (SCAN). Receiver Operating Characteristic (ROC) analysis was conducted and possible gender differences were explored. Overall discriminating performance of the CES-D was sufficient (ROC-AUC 0.836). Using the traditional cut-off values of 15/16 and 21/22, the sensitivity was 43.2 % and 32.4 %, respectively. The cut-off value developed on the basis of our sample was 9/10, with a sensitivity of 81.1 % and a specificity of 74.3 %. There were no significant gender differences. This is the first study investigating the criterion validity of the German version of the CES-D in the general population. The optimal cut-off values yielded sufficient sensitivity and specificity, comparable to the values of other screening tools. © Georg Thieme Verlag KG Stuttgart · New York.
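The reported sensitivity and specificity at a cut-off follow directly from cross-tabulating screening scores against the gold-standard diagnosis. A minimal sketch (the function name is ours, and any data passed in would be toy values, not the study's):

```python
def sens_spec(scores, depressed, cutoff):
    """Sensitivity and specificity of the rule 'score > cutoff' against a
    gold-standard diagnosis (e.g. CES-D score vs. SCAN diagnosis).
    A cut-off of 9/10 corresponds to cutoff=9, i.e. scores >= 10 screen positive."""
    tp = sum(s > cutoff for s, d in zip(scores, depressed) if d)
    fn = sum(s <= cutoff for s, d in zip(scores, depressed) if d)
    fp = sum(s > cutoff for s, d in zip(scores, depressed) if not d)
    tn = sum(s <= cutoff for s, d in zip(scores, depressed) if not d)
    return tp / (tp + fn), tn / (tn + fp)
```

Sweeping the cut-off over the observed score range and plotting sensitivity against 1 - specificity yields the ROC curve whose area is reported above.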
Location-Control of Large Si Grains by Dual-Beam Excimer-Laser and Thick Oxide Portion
NASA Astrophysics Data System (ADS)
Ishihara, Ryoichi; Burtsev, Artyom; Alkemade, Paul F. A.
2000-07-01
An array of large Si grains was placed at a predetermined position by dual excimer-laser irradiation of a multi-layer structure of silicon (Si), silicon dioxide (SiO2) with an array of bumps and metal on a glass substrate. We have investigated the effects of irradiating energy density and the topology of the structure on the grain size and crystallographic structure by scanning electron microscopy (SEM) and electron back-scattering pattern (EBSP) analysis. In the low-energy-density regime, numerous small grains and petal shaped grains formed on top of the SiO2 bumps. The number of small grains on the bumps decreased with increasing irradiating energy density. At sufficiently high energy densities, one single Si grain as large as 3.5 μm was positioned at the center of the bumps. Although most of the area of the large Si grain has a single crystallographic orientation, twins and low-angle grain boundaries are often formed at the periphery of the grain. There was no preferred crystallographic orientation in the center of the location-controlled Si grain. Numerical analysis of the temperature profile showed that a temperature drop occurs at the center of the bump, during and immediately after laser irradiation. The diameter of the location-controlled Si grain increased with total thickness of the intermediate SiO2 layer, and took the maximum value of 6.2 μm.
36 CFR 218.21 - Emergency situations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... (NFS) lands for which immediate implementation of a decision is necessary to achieve one or more of the... resources on NFS or adjacent lands; avoiding a loss of commodity value sufficient to jeopardize the agency's...
36 CFR 218.21 - Emergency situations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... (NFS) lands for which immediate implementation of a decision is necessary to achieve one or more of the... resources on NFS or adjacent lands; avoiding a loss of commodity value sufficient to jeopardize the agency's...
Kiraz, Nuri; Oz, Yasemin; Aslan, Huseyin; Erturan, Zayre; Ener, Beyza; Akdagli, Sevtap Arikan; Muslumanoglu, Hamza; Cetinkaya, Zafer
2015-10-01
Although conventional identification of pathogenic fungi is based on a combination of tests evaluating their morphological and biochemical characteristics, these tests can fail to identify less common species or to differentiate closely related species. In addition, they are time consuming, labour-intensive and require experienced personnel. We evaluated the feasibility and sufficiency of DNA extraction by Whatman FTA filter matrix technology and DNA sequencing of the D1-D2 region of the large ribosomal subunit gene for identification of 21 yeast and 160 mould clinical isolates in our clinical mycology laboratory. While the yeast isolates were identified at species level with 100% homology, 102 (63.75%) clinically important mould isolates were identified at species level and 56 (35%) isolates at genus level against fungal sequences in DNA databases, while two (1.25%) isolates could not be identified. Consequently, Whatman FTA filter matrix technology was a useful method for extraction of fungal DNA: extremely rapid, practical and successful. The sequence analysis strategy for the D1-D2 region of the large ribosomal subunit gene was found to be sufficient for genus-level identification of most clinical fungi. However, identification to species level, and especially discrimination of closely related species, may require additional analysis. © 2015 Blackwell Verlag GmbH.
Proton velocity ring-driven instabilities and their dependence on the ring speed: Linear theory
NASA Astrophysics Data System (ADS)
Min, Kyungguk; Liu, Kaijun; Gary, S. Peter
2017-08-01
Linear dispersion theory is used to study the Alfvén-cyclotron, mirror and ion Bernstein instabilities driven by a tenuous (1%) warm proton ring velocity distribution with a ring speed, vr, varying between 2vA and 10vA, where vA is the Alfvén speed. Relatively cool background protons and electrons are assumed. The modeled ring velocity distributions are unstable to both the Alfvén-cyclotron and ion Bernstein instabilities whose maximum growth rates are roughly a linear function of the ring speed. The mirror mode, which has real frequency ωr=0, becomes the fastest growing mode for sufficiently large vr/vA. The mirror and Bernstein instabilities have maximum growth at propagation oblique to the background magnetic field and become more field-aligned with an increasing ring speed. Considering its largest growth rate, the mirror mode, in addition to the Alfvén-cyclotron mode, can cause pitch angle diffusion of the ring protons when the ring speed becomes sufficiently large. Moreover, because the parallel phase speed, v∥ph, becomes sufficiently small relative to vr, the low-frequency Bernstein waves can also aid the pitch angle scattering of the ring protons for large vr. Potential implications of including these two instabilities at oblique propagation on heliospheric pickup ion dynamics are discussed.
Chen, Xiaofeng; Song, Qiankun; Li, Zhongshan; Zhao, Zhenjiang; Liu, Yurong
2018-07-01
This paper addresses the problem of stability for continuous-time and discrete-time quaternion-valued neural networks (QVNNs) with linear threshold neurons. Applying the semidiscretization technique to the continuous-time QVNNs, the discrete-time analogs are obtained, which preserve the dynamical characteristics of their continuous-time counterparts. Via the plural decomposition method of quaternion, homeomorphic mapping theorem, as well as Lyapunov theorem, some sufficient conditions on the existence, uniqueness, and global asymptotical stability of the equilibrium point are derived for the continuous-time QVNNs and their discrete-time analogs, respectively. Furthermore, a uniform sufficient condition on the existence, uniqueness, and global asymptotical stability of the equilibrium point is obtained for both continuous-time QVNNs and their discrete-time version. Finally, two numerical examples are provided to substantiate the effectiveness of the proposed results.
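The quaternion-valued states in QVNNs multiply via the non-commutative Hamilton product, which underlies decomposition techniques such as the one used above. A minimal tuple-based sketch (the representation and function name are our choices, not the paper's):

```python
def qmul(a, b):
    """Hamilton product of quaternions a = (w, x, y, z) and b = (w, x, y, z).
    Non-commutative: qmul(a, b) != qmul(b, a) in general."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)
```

The defining identities i·j = k but j·i = -k make the non-commutativity concrete, which is why quaternion-valued stability analysis cannot simply reuse complex-valued arguments.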
Gonzales, Gustavo F.; Tapia, Vilma; Fort, Alfredo L.
2012-01-01
Objective. To determine changes in hemoglobin concentration at second measurements after a normal hemoglobin concentration was detected at first booking during pregnancy at low and at high altitudes. Methods. This is a secondary analysis of a large database obtained from the Perinatal Information System in Peru which includes 379,816 pregnant women and their babies from 43 maternity units in Peru. Results. Most women remained with normal hemoglobin values at second measurement (75.1%). However, 21.4% of women became anemic at the second measurement. In all, 2.8% developed moderate/severe anemia and 3.5% erythrocytosis (Hb>14.5 g/dL). In all cases Hb was higher as altitude increased. Risk for moderate/severe anemia increased in association with higher gestational age at second measurement of hemoglobin, BMI <19.9 kg/m2, living without partner, <5 antenatal care visits, first parity, multiparity, and preeclampsia. Lower risk for moderate/severe anemia was observed with a normal high Hb level at first booking, living at moderate and high altitude, and high BMI. Conclusion. Prevalence of anemia increases as pregnancy progresses, and a normal value at first booking may not be considered sufficient, as Hb values should be monitored throughout pregnancy. BMI was a risk for anemia at a second measurement. PMID:22577573
Thermodynamic scaling of the shear viscosity of Mie n-6 fluids and their binary mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delage-Santacreu, Stephanie; Galliero, Guillaume, E-mail: guillaume.galliero@univ-pau.fr; Hoang, Hai
2015-05-07
In this work, we have evaluated the applicability of the so-called thermodynamic scaling and the isomorph frame to describe the shear viscosity of Mie n-6 fluids of varying repulsive exponents (n = 8, 12, 18, 24, and 36). Furthermore, the effectiveness of the thermodynamic scaling to deal with binary mixtures of Mie n-6 fluids has been explored as well. To generate the viscosity database of these fluids, extensive non-equilibrium molecular dynamics simulations have been performed for various thermodynamic conditions. Then, a systematic approach has been used to determine the gamma exponent value (γ) characteristic of the thermodynamic scaling approach for each system. In addition, the applicability of the isomorph theory with a density-dependent gamma has been confirmed in pure fluids. In both pure fluids and mixtures, it has been found that the thermodynamic scaling with a constant gamma is sufficient to correlate the viscosity data over a large range of thermodynamic conditions covering liquid and supercritical states, as long as the density is not too high. Interestingly, it has been obtained that, in pure fluids, the value of γ is directly proportional to the repulsive exponent of the Mie potential. Finally, it has been found that the value of γ in mixtures can be deduced from those of the pure components using a simple logarithmic mixing rule.
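The scaling variable and the mixing rule described above can be sketched in a few lines; the proportionality constant `c`, the function names, and the exact form `ln gamma_mix = sum_i x_i ln gamma_i` of the logarithmic rule are illustrative assumptions, not the paper's fitted values.

```python
import math

def scaling_variable(rho, T, gamma):
    """Thermodynamic-scaling variable rho**gamma / T: state points sharing
    its value are expected to share the same reduced shear viscosity."""
    return rho**gamma / T

def gamma_pure(n, c=0.3):
    """gamma proportional to the repulsive exponent n of the Mie n-6
    potential, as found for pure fluids (c is a hypothetical constant)."""
    return c * n

def gamma_mixture(gammas, mole_fractions):
    """Assumed logarithmic mixing rule: ln(gamma_mix) = sum x_i ln(gamma_i)."""
    return math.exp(sum(x * math.log(g) for g, x in zip(gammas, mole_fractions)))
```

With such functions, viscosity data at different (rho, T) state points can be plotted against the single variable rho**gamma / T to test for collapse onto one curve.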
NASA Astrophysics Data System (ADS)
Melnikov, Andrey; Ogden, Ray W.
2018-06-01
This paper is concerned with the bifurcation analysis of a pressurized electroelastic circular cylindrical tube with closed ends and compliant electrodes on its curved boundaries. The theory of small incremental electroelastic deformations superimposed on a finitely deformed electroelastic tube is used to determine those underlying configurations for which the superimposed deformations do not maintain the perfect cylindrical shape of the tube. First, prismatic bifurcations are examined and solutions are obtained which show that for a neo-Hookean electroelastic material prismatic modes of bifurcation become possible under inflation. This result contrasts with that for the purely elastic case for which prismatic bifurcation modes were found only for an externally pressurized tube. Second, axisymmetric bifurcations are analyzed, and results for both neo-Hookean and Mooney-Rivlin electroelastic energy functions are obtained. The solutions show that in the presence of a moderate electric field the electroelastic tube becomes more susceptible to bifurcation, i.e., for fixed values of the axial stretch axisymmetric bifurcations become possible at lower values of the circumferential stretches than in the corresponding problems in the absence of an electric field. As the magnitude of the electric field increases, however, the possibility of bifurcation under internal pressure becomes restricted to a limited range of values of the axial stretch and is phased out completely for sufficiently large electric fields. Then, axisymmetric bifurcation is only possible under external pressure.
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, removing in this way phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by its application to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
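A toy version of the first (phase-gradient) step might look as follows; the data layout (rows = pulses, columns = range bins) and the single-dominant-scatterer idealization are our assumptions, not the authors' implementation.

```python
import numpy as np

def phase_gradient_correction(phase_history, dominant_bin):
    """Estimate the pulse-to-pulse phase error from the dominant scatterer
    (one range bin) and apply the compensating correction to all bins.
    Rows are pulses, columns are range bins; a toy single-scatterer sketch
    of a phase-gradient-type step, not the authors' full algorithm."""
    dominant = phase_history[:, dominant_bin]
    phase_error = np.unwrap(np.angle(dominant))   # trajectory-perturbation estimate
    return np.exp(-1j * phase_error)[:, None] * phase_history
```

Note that this step only removes relative phase errors; resolving the absolute phase (range) ambiguity is what the second step of the scheme addresses.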
Statistical aspects of modeling the labor curve.
Zhang, Jun; Troendle, James; Grantz, Katherine L; Reddy, Uma M
2015-06-01
In a recent review by Cohen and Friedman, several statistical questions on modeling labor curves were raised. This article illustrates that asking data to fit a preconceived model or letting a sufficiently flexible model fit observed data is the main difference in principles of statistical modeling between the original Friedman curve and our average labor curve. An evidence-based approach to construct a labor curve and establish normal values should allow the statistical model to fit observed data. In addition, the presence of the deceleration phase in the active phase of an average labor curve was questioned. Forcing a deceleration phase to be part of the labor curve may have artificially raised the speed of progression in the active phase with a particularly large impact on earlier labor between 4 and 6 cm. Finally, any labor curve is illustrative and may not be instructive in managing labor because of variations in individual labor pattern and large errors in measuring cervical dilation. With the tools commonly available, it may be more productive to establish a new partogram that takes the physiology of labor and contemporary obstetric population into account. Copyright © 2015 Elsevier Inc. All rights reserved.
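The modeling principle at stake, letting a flexible model follow the data rather than forcing a preconceived shape, can be illustrated with synthetic dilation-time values; the data and polynomial degrees below are invented for illustration and are not the authors' repeated-measures method.

```python
import numpy as np

# Synthetic dilation-time data with an accelerating active phase
# (invented values for illustration only).
hours = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
dilation_cm = np.array([3.0, 3.4, 4.0, 4.9, 6.2, 8.0, 10.0])

# Preconceived shape: force a straight line onto the data.
rigid = np.polyfit(hours, dilation_cm, 1)
rigid_rss = float(np.sum((np.polyval(rigid, hours) - dilation_cm) ** 2))

# Flexible model: let a low-order polynomial follow the observed curvature.
flexible = np.polyfit(hours, dilation_cm, 3)
flex_rss = float(np.sum((np.polyval(flexible, hours) - dilation_cm) ** 2))
```

The residual sum of squares of the flexible fit is smaller because it tracks the acceleration in the active phase instead of averaging it away, which is the statistical point the article makes against imposing a fixed curve shape.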
Particle number dependence in the non-linear evolution of N-body self-gravitating systems
NASA Astrophysics Data System (ADS)
Benhaiem, D.; Joyce, M.; Sylos Labini, F.; Worrakitpoonpon, T.
2018-01-01
Simulations of purely self-gravitating N-body systems are often used in astrophysics and cosmology to study the collisionless limit of such systems. Their results for macroscopic quantities should then converge well for sufficiently large N. Using a study of the evolution from a simple space of spherical initial conditions - including a region characterized by the so-called 'radial orbit instability' - we illustrate that the values of N at which such convergence is obtained can vary enormously. In the family of initial conditions we study, good convergence can be obtained up to a few dynamical times with N ∼ 10³ - just large enough to suppress two-body relaxation - for certain initial conditions, while in other cases such convergence is not attained at this time even in our largest simulations with N ∼ 10⁵. The qualitative difference is due to the stability properties of fluctuations introduced by the N-body discretisation, whose initial amplitude depends on N. We discuss briefly why the crucial role which such fluctuations can potentially play in the evolution of the N-body system could, in particular, constitute a serious problem in cosmological simulations of dark matter.
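The claim that N ∼ 10³ is "just large enough to suppress two-body relaxation" can be checked against the standard textbook estimate t_relax ≈ [N / (8 ln N)] t_cross (e.g. Binney & Tremaine); the sketch below applies that formula, which is background material rather than the paper's own analysis.

```python
import math

def relaxation_time(N, t_cross=1.0):
    """Textbook two-body relaxation estimate for an N-body system:
    t_relax ~ N / (8 ln N) crossing (dynamical) times."""
    return N / (8.0 * math.log(N)) * t_cross
```

For N = 1000 this gives roughly 18 crossing times, so a run lasting only a few dynamical times is marginally collisionless, consistent with the abstract's statement.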
NASA Astrophysics Data System (ADS)
Matin, M.; Mondal, Rajib; Barman, N.; Thamizhavel, A.; Dhar, S. K.
2018-05-01
Here, we report an extremely large positive magnetoresistance (XMR) in a single-crystal sample of MoSi2, approaching 10⁷% at 2 K in a 14-T magnetic field without appreciable saturation. Hall resistivity data reveal the uncompensated nature of MoSi2, with a degree of electron-hole imbalance sufficient to expect strong saturation of the magnetoresistance in the high-field regime. Magnetotransport and complementary de Haas-van Alphen (dHvA) oscillation results, however, suggest that a strong Zeeman effect causes a magnetic field-induced modulation of the Fermi pockets and drives the system towards the perfect electron-hole compensation condition in the high-field regime. Thus, the nonsaturating XMR of this semimetal arises under the unconventional situation of Zeeman-effect-driven electron-hole compensation, whereas its huge magnitude is decided solely by the ultralarge value of the carrier mobility. Intrinsic ultralarge carrier mobility, strong suppression of backward scattering of the charge carriers, and a nontrivial Berry phase in the dHvA oscillations attest to the topological character of MoSi2. Therefore, this semimetal represents another material hosting a combination of topological and conventional electronic phases.
Adding the missing piece: Spitzer imaging of the HSC-Deep/PFS fields
NASA Astrophysics Data System (ADS)
Sajina, Anna; Bezanson, Rachel; Capak, Peter; Egami, Eiichi; Fan, Xiaohui; Farrah, Duncan; Greene, Jenny; Goulding, Andy; Lacy, Mark; Lin, Yen-Ting; Liu, Xin; Marchesini, Danilo; Moutard, Thibaud; Ono, Yoshiaki; Ouchi, Masami; Sawicki, Marcin; Strauss, Michael; Surace, Jason; Whitaker, Katherine
2018-05-01
We propose to observe a total of 7 sq. deg. to complete the Spitzer-IRAC coverage of the HSC-Deep survey fields. These fields are the sites of the Prime Focus Spectrograph (PFS) galaxy evolution survey, which will provide spectra of wide wavelength range and resolution for almost all M* galaxies at z ≈ 0.7-1.7, extending out to z ≈ 7 for targeted samples. Our fields already have deep broadband and narrowband photometry in 12 bands spanning from u through K and a wealth of other ancillary data. We propose completing the matching-depth IRAC observations in the extended COSMOS, ELAIS-N1 and Deep2-3 fields. By complementing existing Spitzer coverage, this program will lead to a dataset of unprecedented spectro-photometric coverage across a total of 15 sq. deg. This dataset will have significant legacy value, as it samples a cosmic volume large enough to be representative of the full range of environments while providing sufficient information content per galaxy to confidently derive stellar population characteristics. This enables detailed studies of the growth and quenching of galaxies and their supermassive black holes in the context of a galaxy's local and large-scale environment.
Interaction of monopoles, dipoles, and turbulence with a shear flow
NASA Astrophysics Data System (ADS)
Marques Rosas Fernandes, V. H.; Kamp, L. P. J.; van Heijst, G. J. F.; Clercx, H. J. H.
2016-09-01
Direct numerical simulations have been conducted to examine the evolution of eddies in the presence of large-scale shear flows. The numerical experiments consist of initial-value problems in which monopolar and dipolar vortices as well as driven turbulence are superposed on a plane Couette or Poiseuille flow in a periodic two-dimensional channel. The evolution of the flow has been examined for different shear rates of the background flow and different widths of the channel. Results found for retrograde and prograde monopolar vortices are consistent with those found in the literature. Boundary layer vorticity, however, can significantly modify the straining and erosion of monopolar vortices normally seen in unbounded domains. Dipolar vortices are shown to be much more robust coherent structures in a large-scale shear flow than monopolar eddies. An analytical model for their trajectories, which are determined by self-advection and by advection and rotation by the shear flow, is presented. Turbulent kinetic energy is effectively suppressed by the shearing action of the background flow provided that the shear is linear (Couette flow) and of sufficient strength. Nonlinear shear, as present in the Poiseuille flow, seems instead to increase the turbulence strength, especially at high shear rates.
Spicher, G; Peters, J
1997-02-01
Biological indicators used to test sterilisation procedures for their efficacy consist of a so-called germ carrier to which the microorganisms used as test organisms adhere. In previous papers we demonstrated that carriers made of filter paper show superheating on contact with saturated steam, while carriers made of glass fibre fleece, as well as wetted filter paper, do not. Using spores of Bacillus subtilis and Bacillus stearothermophilus as test organisms, we have now investigated whether and to what extent carrier superheating affects the characteristic values (t50%) of these biological indicators. The indicators were exposed to saturated steam at 100 degrees C (B. subtilis) or 120 degrees C (B. stearothermophilus) under three different exposure conditions: 1. dry (i.e. conditioned to 45% relative humidity before introduction into the sterilising chamber), freely accessible; 2. dry, with a substratum and a cover of filter cardboard; 3. wet (moistened with twice-distilled water before introduction into the sterilising chamber), freely accessible. For previously selected exposure periods, the incidence of indicators with surviving test organisms was determined. The reaction pattern of bioindicators with spores of B. stearothermophilus was different from that of bioindicators with spores of B. subtilis. For B. subtilis, the incidence of bioindicators exhibiting surviving test organisms depended on the nature of the carriers as well as on the exposure conditions. On filter paper carriers, t50% increased in the order "wet, freely accessible", "dry, freely accessible", "dry, between filter cardboard". On dry and wetted glass fibre fleece, resistance was approximately the same; when the indicators were sandwiched between layers of filter cardboard, t50% increased. For B. stearothermophilus, t50% depended largely on the carrier material alone. The values obtained for filter paper were invariably much lower than those for glass fibre fleece.
As the results show, spores of B. subtilis make it possible to detect superheating, but their steam resistance is relatively low. Spores of B. stearothermophilus have high steam resistance but are practically unsuitable for detecting superheating. It is therefore imperative to search for a test organism whose steam resistance is sufficiently high and which at the same time reacts to superheating (equivalent to reduced humidity) with a sufficiently large increase in resistance.
Yap, Choon-Kong; Eisenhaber, Birgit; Eisenhaber, Frank; Wong, Wing-Cheong
2016-11-29
While the local-mode HMMER3 is notable for its massive speed improvement, the slower glocal-mode HMMER2 is more exact for domain annotation by enforcing full domain-to-sequence alignments. Since a unit of domain necessarily implies a unit of function, local-mode HMMER3 alone remains insufficient for precise function annotation tasks. In addition, the incomparable E-values for the same domain model by different HMMER builds create difficulty when checking for domain annotation consistency on a large-scale basis. In this work, both the speed of HMMER3 and glocal-mode alignment of HMMER2 are combined within the xHMMER3x2 framework for tackling the large-scale domain annotation task. Briefly, HMMER3 is utilized for initial domain detection so that HMMER2 can subsequently perform the glocal-mode, sequence-to-full-domain alignments for the detected HMMER3 hits. An E-value calibration procedure is required to ensure that the search space by HMMER2 is sufficiently replicated by HMMER3. We find that the latter is straightforwardly possible for ~80% of the models in the Pfam domain library (release 29). However in the case of the remaining ~20% of HMMER3 domain models, the respective HMMER2 counterparts are more sensitive. Thus, HMMER3 searches alone are insufficient to ensure sensitivity and a HMMER2-based search needs to be initiated. When tested on the set of UniProt human sequences, xHMMER3x2 can be configured to be between 7× and 201× faster than HMMER2, but with descending domain detection sensitivity from 99.8 to 95.7% with respect to HMMER2 alone; HMMER3's sensitivity was 95.7%. At extremes, xHMMER3x2 is either the slow glocal-mode HMMER2 or the fast HMMER3 with glocal-mode. Finally, the E-values to false-positive rates (FPR) mapping by xHMMER3x2 allows E-values of different model builds to be compared, so that any annotation discrepancies in a large-scale annotation exercise can be flagged for further examination by dissectHMMER. 
The xHMMER3x2 workflow allows large-scale domain annotation speed to be drastically improved over HMMER2 without compromising domain-detection sensitivity or sequence-to-domain alignment completeness. The xHMMER3x2 code and its webserver (for Pfam releases 27, 28 and 29) are freely available at http://xhmmer3x2.bii.a-star.edu.sg/ .
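The two-stage decision logic described above can be sketched as plain orchestration code; the hit dictionaries and the `hmmer2_search` callback are stand-ins for real HMMER invocations, and all names here are ours, not the xHMMER3x2 API.

```python
def xhmmer3x2(seq_ids, hmmer3_hits, fallback_models, hmmer2_search):
    """Sketch of the two-stage workflow.
    hmmer3_hits: {model: set of sequence ids detected by fast HMMER3}.
    fallback_models: models whose HMMER2 build is more sensitive, so the
    HMMER3 search space cannot replicate HMMER2 and a direct HMMER2
    search must be initiated.
    hmmer2_search(model, seqs) -> ids confirmed by glocal HMMER2
    (full domain-to-sequence) alignment."""
    annotations = {}
    for model, hits in hmmer3_hits.items():
        # Glocal realignment of the HMMER3 pre-detected hits.
        annotations[model] = hmmer2_search(model, set(hits))
    for model in fallback_models:
        # Sensitivity fallback: HMMER2 searches every sequence directly.
        annotations[model] = hmmer2_search(model, set(seq_ids))
    return annotations
```

The speed/sensitivity trade-off reported in the abstract corresponds to how many models end up in `fallback_models` after E-value calibration.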
Weber, Frank; Geerts, Noortje J E; Roeleveld, Hilde G; Warmenhoven, Annejet T; Liebrand, Chantal A
2018-05-13
The heart rate variability (HRV) derived Analgesia Nociception Index (ANI™) is a continuous non-invasive tool to assess the nociception/anti-nociception balance in unconscious patients. It has been shown to be superior to hemodynamic variables in detecting insufficient anti-nociception in children, while little is known about its predictive value. The primary objective of this prospective observational pilot study in paediatric surgical patients under sevoflurane anaesthesia was to compare the predictive value of the ANI and heart rate for the decision to give additional opioids. The paediatric anaesthesiologist in charge was blinded to ANI values. In patients with an ANI value <50 (indicating insufficient anti-nociception) at the moment of decision, ANI values dropped from ±55 (indicating sufficient anti-nociception) to ±35, starting 60 sec before the decision. Within 120 sec after administration of fentanyl (1 mcg/kg), ANI values returned to ±60. This phenomenon was only observed in the ANI values derived from HRV data averaged over 2 min. Heart rate remained unchanged. In patients with ANI values ≥50 at the time of decision, opioid administration had no effect on ANI or heart rate. The same applies to morphine for postoperative analgesia and to fentanyl in case of intraoperative movement. This study provides evidence that the ANI has a better predictive value than heart rate in detecting insufficient anti-nociception in paediatric surgical patients, and likewise in depicting re-establishment of sufficient anti-nociception after opioid administration. This article is protected by copyright. All rights reserved.
Zimmermann, Michael B; Hess, Sonja Y; Molinari, Luciano; De Benoist, Bruno; Delange, François; Braverman, Lewis E; Fujieda, Kenji; Ito, Yoshiya; Jooste, Pieter L; Moosa, Khairya; Pearce, Elizabeth N; Pretell, Eduardo A; Shishiba, Yoshimasa
2004-02-01
Goiter prevalence in school-age children is an indicator of the severity of iodine deficiency disorders (IDDs) in a population. In areas of mild-to-moderate IDDs, measurement of thyroid volume (Tvol) by ultrasound is preferable to palpation for grading goiter, but interpretation requires reference criteria from iodine-sufficient children. The study aim was to establish international reference values for Tvol by ultrasound in 6-12-y-old children that could be used to define goiter in the context of IDD monitoring. Tvol was measured by ultrasound in 6-12-y-old children living in areas of long-term iodine sufficiency in North and South America, central Europe, the eastern Mediterranean, Africa, and the western Pacific. Measurements were made by 2 experienced examiners using validated techniques. Data were log transformed, used to calculate percentiles on the basis of the Gaussian distribution, and then transformed back to the linear scale. Age- and body surface area (BSA)-specific 97th percentiles for Tvol were calculated for boys and girls. The sample included 3529 children evenly divided between boys and girls at each year of age (mean ± SD age: 9.3 ± 1.9 y). The range of median urinary iodine concentrations for the 6 study sites was 118-288 µg/L. There were significant differences in age- and BSA-adjusted mean Tvols between sites, which suggests that population-specific references in countries with long-standing iodine sufficiency may be more accurate than a single international reference. However, overall differences in age- and BSA-adjusted Tvols between sites were modest relative to the population and measurement variability, which supports the use of a single, site-independent set of references. These new international reference values for Tvol by ultrasound can be used for goiter screening in the context of IDD monitoring.
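The reference-value construction described above (log transform, Gaussian percentile, back-transform) can be written directly; a minimal sketch, assuming a simple unadjusted sample rather than the paper's age- and BSA-specific modeling.

```python
import math

def p97_lognormal(tvols):
    """97th-percentile reference value: log-transform the thyroid volumes,
    take the Gaussian percentile mean + z97 * SD on the log scale, and
    transform back to the linear scale."""
    logs = [math.log(v) for v in tvols]
    n = len(logs)
    mean = sum(logs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
    z97 = 1.8808  # 97th percentile of the standard normal distribution
    return math.exp(mean + z97 * sd)
```

In the study this calculation is carried out separately per sex and per age (or BSA) stratum, so that a child's measured Tvol can be compared against the matching 97th-percentile cutoff.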
Evaluation of Existing Methods for Human Blood mRNA Isolation and Analysis for Large Studies
Meyer, Anke; Paroni, Federico; Günther, Kathrin; Dharmadhikari, Gitanjali; Ahrens, Wolfgang; Kelm, Sørge; Maedler, Kathrin
2016-01-01
Aims. Prior to implementing gene expression analyses from blood in a larger cohort study, an evaluation to set up a reliable and reproducible method is mandatory but challenging due to the specific characteristics of the samples as well as their collection methods. In this pilot study we optimized a combination of blood sampling and RNA isolation methods and present reproducible gene expression results from human blood samples. Methods. The established PAXgene™ blood collection method (Qiagen) was compared with the more recent Tempus™ collection and storage system. RNA from blood samples collected by both systems was extracted on columns with the corresponding Norgen and PAX RNA extraction kits. RNA quantity and quality were compared photometrically, with Ribogreen, and by real-time PCR analyses of various reference genes (PPIA, β-ACTIN and TUBULIN) and, as an example, of SIGLEC-7. Results. Combining different sampling methods and extraction kits caused strong variations in gene expression. The use of the PAXgene™ and Tempus™ collection systems resulted in RNA of good quality and quantity for the respective RNA isolation system. No large inter-donor variations could be detected for either system. However, it was not possible to extract sufficient RNA of good quality with the PAXgene™ RNA extraction system from samples collected in Tempus™ collection tubes. Comparing only the Norgen RNA extraction methods, RNA from blood collected by either the Tempus™ or the PAXgene™ collection system was of sufficient amount and quality, but the Tempus™ collection delivered a higher RNA concentration than the PAXgene™ collection system. The established PreAnalytiX PAXgene™ RNA extraction system together with the PAXgene™ blood collection system showed the lowest Ct values, i.e. the highest concentration of good-quality RNA. Expression levels of all tested genes were stable and reproducible.
Conclusions. This study confirms that it is not possible to mix or change sampling or extraction strategies during the same study because of large variations in RNA yield and expression levels.
Fast sparsely synchronized brain rhythms in a scale-free neural network
NASA Astrophysics Data System (ADS)
Kim, Sang-Yoon; Lim, Woochang
2015-08-01
We consider a directed version of the Barabási-Albert scale-free network model with symmetric preferential attachment with the same in- and out-degrees and study the emergence of sparsely synchronized rhythms for a fixed attachment degree in an inhibitory population of fast-spiking Izhikevich interneurons. Fast sparsely synchronized rhythms with stochastic and intermittent neuronal discharges are found to appear for large values of J (synaptic inhibition strength) and D (noise intensity). For an intensive study we fix J at a sufficiently large value and investigate the population states by increasing D . For small D , full synchronization with the same population-rhythm frequency fp and mean firing rate (MFR) fi of individual neurons occurs, while for large D partial synchronization with fp>
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of the nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest-neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies.
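For the nearest-neighbour networks mentioned above, the Laplacian eigenvalues entering the sufficient condition are easy to compute; the sketch below builds a ring of n nodes with k neighbours on each side and uses the algebraic connectivity as a rough proxy for synchronizability. The proxy is our simplification and not the paper's exact size-independent condition.

```python
import numpy as np

def ring_laplacian(n, k):
    """Graph Laplacian of a ring of n nodes, each coupled to its k nearest
    neighbours on either side (requires 2*k < n)."""
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(L):
    """Second-smallest Laplacian eigenvalue; larger values generally favour
    synchronization (a hypothetical proxy for the paper's margin)."""
    return float(np.sort(np.linalg.eigvalsh(L))[1])
```

Sweeping k for fixed n with such a helper is the kind of experiment behind the paper's observation that an optimal number of neighbours exists once stochastic link uncertainty is taken into account.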
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
NASA Astrophysics Data System (ADS)
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The FDS model and computer code, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, thereby excessively focusing the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase is shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.
Towards Stability Analysis of Jump Linear Systems with State-Dependent and Stochastic Switching
NASA Technical Reports Server (NTRS)
Tejada, Arturo; Gonzalez, Oscar R.; Gray, W. Steven
2004-01-01
This paper analyzes the stability of hierarchical jump linear systems where the supervisor is driven by a Markovian stochastic process and by the values of the supervised jump linear system's states. The stability framework for this class of systems is developed over infinite and finite time horizons. The framework is then used to derive sufficient stability conditions for a specific class of hybrid jump linear systems with performance supervision. New sufficient stochastic stability conditions for discrete-time jump linear systems are also presented.
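As background to this setting, the classical mean-square stability test for a discrete-time Markov jump linear system x_{k+1} = A_{theta(k)} x_k can be coded in a few lines: stability holds iff the spectral radius of the second-moment transition matrix is below one (a standard result, e.g. Costa and Fragoso, stated here as context rather than the paper's new hierarchical conditions).

```python
import numpy as np

def mjls_mss_matrix(A_list, P):
    """Second-moment transition matrix of x_{k+1} = A_{theta(k)} x_k,
    where theta is a Markov chain with transition matrix P
    (P[i, j] = Pr(next mode j | current mode i)).  Its block (j, i)
    is P[i, j] * kron(A_i, A_i)."""
    n2 = A_list[0].shape[0] ** 2
    D = np.zeros((len(A_list) * n2, len(A_list) * n2))
    for i, Ai in enumerate(A_list):
        D[i * n2:(i + 1) * n2, i * n2:(i + 1) * n2] = np.kron(Ai, Ai)
    return np.kron(P.T, np.eye(n2)) @ D

def is_mean_square_stable(A_list, P):
    """Mean-square stability iff spectral radius < 1."""
    return bool(max(abs(np.linalg.eigvals(mjls_mss_matrix(A_list, P)))) < 1.0)
```

The hierarchical systems of the paper add state-dependent supervisory switching on top of this purely Markovian structure, which is why new sufficient conditions are needed there.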
NASA Technical Reports Server (NTRS)
Jensen, M. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. A significant and possible major economic example of the practical value of Skylab photographs was provided by locating on Skylab Camera Station Number 4, frame 010, SL-2, an area of exposures of limestone rocks which were thought to be completely covered by volcanic rocks based upon prior mapping. The area is located less than 12 miles north of the Ruth porphyry copper deposit, White Pine County, Nevada. This is a major copper producing open pit mine owned by Kennecott Copper Corporation. Geophysical maps consisting of gravity and aeromagnetic studies have been published indicating three large positive magnetic anomalies located at the Ruth ore deposits, the Ward Mountain, not a mineralized area, and in the area previously thought to be completely covered by post-ore volcanics. Skylab photos indicate, however, that erosion has removed volcanic cover in specific sites sufficient to expose the underlying older rocks suggesting, therefore, that the volcanic rocks may not be the cause of the aeromagnetic anomaly. Field studies have verified the initial interpretations made from the Skylab photos. The potential significance of this study is that the large positive aeromagnetic anomaly suggests the presence of cooled and solidified magma below the anomalies, in which ore-bearing solutions may have been derived forming possible large ore deposits.
Brader, J M; Siebenbürger, M; Ballauff, M; Reinheimer, K; Wilhelm, M; Frey, S J; Weysser, F; Fuchs, M
2010-12-01
Using a combination of theory, experiment, and simulation we investigate the nonlinear response of dense colloidal suspensions to large-amplitude oscillatory shear flow. The time-dependent stress response is calculated using a recently developed schematic mode-coupling-type theory describing colloidal suspensions under externally applied flow. For finite strain amplitudes the theory generates a nonlinear response, characterized by significant higher-harmonic contributions. An important feature of the theory is the prediction of an ideal glass transition at sufficiently strong coupling, which is accompanied by the discontinuous appearance of a dynamic yield stress. For the oscillatory shear flow under consideration we find that the yield stress plays an important role in determining the nonlinearity of the time-dependent stress response. Our theoretical findings are strongly supported by both large-amplitude oscillatory experiments (with Fourier-transform rheology analysis) on suspensions of thermosensitive core-shell particles dispersed in water and Brownian dynamics simulations performed on a two-dimensional binary hard-disk mixture. In particular, the theory predicts nontrivial values of the exponents governing the final decay of the storage and loss moduli as a function of strain amplitude which are in good agreement with both simulation and experiment. A consistent set of parameters in the presented schematic model jointly describes linear moduli, nonlinear flow curves, and large-amplitude oscillatory spectroscopy.
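The Fourier-transform rheology analysis mentioned above reduces, in its simplest form, to reading harmonic intensities off the stress spectrum; a minimal sketch, assuming a leakage-free record spanning an integer number of oscillation cycles.

```python
import numpy as np

def harmonic_ratio(stress, periods, n=3):
    """Relative intensity I_n/I_1 of the n-th stress harmonic for a stress
    signal sampled uniformly over `periods` full oscillation cycles.
    A nonzero odd-harmonic ratio (typically I_3/I_1) is the standard
    signature of a nonlinear LAOS response."""
    spectrum = np.abs(np.fft.rfft(stress))
    # With an integer number of cycles, the fundamental sits in bin
    # `periods` and the n-th harmonic in bin `n * periods`.
    return float(spectrum[n * periods] / spectrum[periods])
```

In the linear regime this ratio vanishes; its growth with strain amplitude is what the higher-harmonic analysis in the study quantifies.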
NASA Astrophysics Data System (ADS)
Lopez, Ana; Fung, Fai; New, Mark; Watts, Glenn; Weston, Alan; Wilby, Robert L.
2009-08-01
The majority of climate change impacts and adaptation studies so far have been based on at most a few deterministic realizations of future climate, usually representing different emissions scenarios. Large ensembles of climate models are increasingly available either as ensembles of opportunity or perturbed physics ensembles, providing a wealth of additional data that is potentially useful for improving adaptation strategies to climate change. Because of the novelty of this ensemble information, there is little previous experience of practical applications or of the added value of this information for impacts and adaptation decision making. This paper evaluates the value of perturbed physics ensembles of climate models for understanding and planning public water supply under climate change. We deliberately select water resource models that are already used by water supply companies and regulators on the assumption that uptake of information from large ensembles of climate models will be more likely if it does not involve significant investment in new modeling tools and methods. We illustrate the methods with a case study on the Wimbleball water resource zone in the southwest of England. This zone is sufficiently simple to demonstrate the utility of the approach but with enough complexity to allow a variety of different decisions to be made. Our research shows that the additional information contained in the climate model ensemble provides a better understanding of the possible ranges of future conditions, compared to the use of single-model scenarios. Furthermore, with careful presentation, decision makers will find the results from large ensembles of models more accessible and be able to more easily compare the merits of different management options and the timing of different adaptation measures. The overhead in additional time and expertise for carrying out the impacts analysis will be justified by the increased quality of the decision-making process.
We remark that even though we have focused our study on a water resource system in the United Kingdom, our conclusions about the added value of climate model ensembles in guiding adaptation decisions can be generalized to other sectors and geographical regions.
NASA Astrophysics Data System (ADS)
Zhuravlev, V. V.; Ivanov, P. B.
2011-08-01
In this paper we derive equations describing the dynamics and stationary configurations of a twisted fully relativistic thin accretion disc around a slowly rotating black hole. We assume that the inclination angle of the disc is small and that the standard relativistic generalization of the α model of accretion discs is valid when the disc is flat. We find that, similar to the case of non-relativistic twisted discs, the disc dynamics and stationary shapes can be determined by a pair of equations formulated for two complex variables describing the orientation of the disc rings and velocity perturbations induced by the twist. We analyse analytically and numerically the shapes of stationary twisted configurations of accretion discs having non-zero inclinations with respect to the black hole equatorial plane at large distances r from the black hole. It is shown that the stationary configurations depend on two parameters - the viscosity parameter α and the parameter ?, where δ* is the opening angle (δ* ~ h/r, where h is the disc half-thickness and r is large) of a flat disc and a is the black hole rotational parameter. When a > 0 and ? the shapes depend drastically on the value of α. When α is small the disc inclination angle oscillates with radius, with the amplitude and radial frequency of the oscillations increasing dramatically towards the last stable orbit, Rms. When α has a moderately small value the oscillations do not take place but the disc does not align with the equatorial plane at small radii. The disc inclination angle either increases towards Rms or exhibits a non-monotonic dependence on the radial coordinate. Finally, when α is sufficiently large the disc aligns with the equatorial plane at small radii. When a < 0 the disc aligns with the equatorial plane for all values of α.
The results reported here may have implications for determining the structure and variability of accretion discs close to Rms as well as for modelling of emission spectra coming from different sources, which are supposed to contain black holes.
NASA Astrophysics Data System (ADS)
Qin, Jianqi; Celestin, Sebastien; Pasko, Victor P.
2013-05-01
Carrot sprites, exhibiting both upward and downward propagating streamers, and columniform sprites, characterized by predominantly vertical downward streamers, represent two distinct morphological classes of lightning-driven transient luminous events in the upper atmosphere. It is found that positive cloud-to-ground lightning discharges (+CGs) associated with large charge moment changes (Qh_Q) tend to produce carrot sprites with the presence of a mesospheric region where the electric field exceeds the value 0.8E_k (E_k being the conventional breakdown threshold field) and persists for
The microwave radiometer spacecraft: A design study
NASA Technical Reports Server (NTRS)
Wright, R. L. (Editor)
1981-01-01
A large passive microwave radiometer spacecraft with near all weather capability of monitoring soil moisture for global crop forecasting was designed. The design, emphasizing large space structures technology, characterized the mission hardware at the conceptual level in sufficient detail to identify enabling and pacing technologies. Mission and spacecraft requirements, design and structural concepts, electromagnetic concepts, and control concepts are addressed.
ERIC Educational Resources Information Center
Bowman, Thomas G.
2012-01-01
The athletic training profession is in the midst of a large increase in demand for health care professionals for the physically active. In order to meet demand, directors of athletic training education programs (ATEPs) are challenged with providing sufficient graduates. There has been a large increase in ATEPs nationwide since educational reform…
Entropy production during an isothermal phase transition in the early universe
NASA Astrophysics Data System (ADS)
Kaempfer, B.
The analytical model of Lodenquai and Dixit (1983) and of Bonometto and Matarrese (1983) of an isothermal era in the early universe is extended here to arbitrary temperatures. It is found that a sufficiently large supercooling gives rise to a large entropy production which may significantly dilute the primordial monopole or baryon to entropy ratio. Whether such large supercooling can be achieved depends on the characteristics of the nucleation process.
Synchronization of fractional-order complex-valued neural networks with time delay.
Bao, Haibo; Park, Ju H; Cao, Jinde
2016-09-01
This paper deals with the problem of synchronization of fractional-order complex-valued neural networks with time delays. By means of linear delay feedback control and a fractional-order inequality, sufficient conditions are obtained to guarantee the synchronization of the drive-response systems. Numerical simulations are provided to show the effectiveness of the obtained results. Copyright © 2016 Elsevier Ltd. All rights reserved.
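The synchronization scheme described above can be illustrated on a deliberately simplified scalar system rather than the paper's complex-valued networks: a fractional-order drive system, a response system with delayed linear feedback control, and a Grünwald-Letnikov discretization. The dynamics, order, gain, and delay below are all invented for the sketch.

```python
import numpy as np

# Toy drive-response synchronization of a scalar fractional-order system
# D^alpha x = -x + 0.5*tanh(x), with delayed linear feedback
# u(t) = -k*(y(t - tau) - x(t - tau)), via Grünwald-Letnikov discretization.

alpha, h, k, tau = 0.9, 0.01, 4.0, 0.1
steps, d = 4000, int(tau / h)            # d = delay expressed in steps

# GL coefficients c_j = (-1)^j * C(alpha, j), computed recursively
c = np.empty(steps + 1)
c[0] = 1.0
for j in range(1, steps + 1):
    c[j] = (1.0 - (1.0 + alpha) / j) * c[j - 1]

f = lambda v: -v + 0.5 * np.tanh(v)

x = np.zeros(steps + 1); y = np.zeros(steps + 1)
x[0], y[0] = 1.0, -0.5                   # mismatched initial conditions
for n in range(1, steps + 1):
    xd = x[n - 1 - d] if n - 1 - d >= 0 else x[0]
    yd = y[n - 1 - d] if n - 1 - d >= 0 else y[0]
    # GL update: x_n = h^alpha * f(x_{n-1}) - sum_{j=1}^{n} c_j x_{n-j}
    x[n] = h**alpha * f(x[n - 1]) - np.dot(c[1:n + 1], x[n - 1::-1])
    u = -k * (yd - xd)                   # delayed linear feedback control
    y[n] = h**alpha * (f(y[n - 1]) + u) - np.dot(c[1:n + 1], y[n - 1::-1])

print(abs(x[-1] - y[-1]))  # synchronization error shrinks toward zero
```

The full memory term in the GL sum is what distinguishes the fractional dynamics from an ordinary ODE integration.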
ERIC Educational Resources Information Center
Arthur, James; Carr, David
2013-01-01
This article has three broad aims. The first is to draw attention to what is probably the largest empirical study of moral, values and character education in the United Kingdom to the present date. The second is to outline--sufficient for present purposes--a plausible conceptual or theoretical case for placing a particular virtue-ethical concept of…
Handheld Synthetic Array Final Report, Part A
2014-12-01
Measurement Unit; IEEE Institute of Electrical and Electronics Engineers; KF Kalman Filter; KL Kullback-Leibler; LAMBDA Least-squares... testing the algorithms for the LOS AN wireless beamforming. Given a good set of feature points, the ego-motion is sufficiently accurate to... of little value to the overall SLAM, and the RSS observables are used instead. While individual RSS measurements are low in information value, the
Implications for the Daily Variation and the Low Value of Thermal Inertia at Arabia Terra on Mars
NASA Astrophysics Data System (ADS)
Toyota, T.; Saruya, T.; Kurita, K.
2010-12-01
The active nature of the Martian surface is considered to be responsible for various styles of atmosphere-surface interaction. Here, we propose an idea to interpret the daily variation and the low value of thermal inertia at Arabia Terra on Mars. Thermal inertia calculated from the surface temperature obtained by remote sensing exhibits daily and seasonal variation. Putzig and Mellon [1] suggested that horizontal or vertical heterogeneity may yield an apparent thermal inertia which varies with time of day and season. However, their interpretation could not completely explain the extent and the phase of the temporal variation of thermal inertia at Arabia Terra. We propose another possibility to explain these characteristics. In addition, the value of thermal inertia at Arabia Terra is extremely low: daytime thermal inertia is as low as 20 tiu [1,2], lower than the thermal inertia of aggregates of 1-micron dust (~61 tiu [3]). To explain these characteristics, we propose that condensation and sublimation of water ice at the granular surface cause the daily variation and the low value of the thermal inertia at Arabia Terra. At nighttime, water vapor condenses at the surface; immediately after sunrise, the surface water ice sublimates. Electrostatic forces and sublimating gas pressure could affect the porosity of the surface. We suppose that the daily variation of the thermal inertia is caused by the deposition and removal of water ice, and that the low value of the thermal inertia is caused by a bulk porosity higher than that of random close packing. To substantiate this model, four main questions remain to be answered: 1) Is there sufficient water vapor in the atmosphere above Arabia Terra? 2) Does a sufficient amount of water condense at the surface during the night? 3) Can water vapor and other factors make the surface porosity higher?
and 4) How much does the higher porosity lower the bulk thermal inertia? We surveyed previous studies for question 1) and performed a numerical simulation of the sublimation/condensation of water ice for question 2). We also performed laboratory experiments to investigate questions 3) and 4). Our results showed that 1) there is sufficient water vapor in the atmosphere above Arabia Terra; 2) in our numerical model with limited parameters, it is difficult for a sufficient amount of water vapor to condense at the surface during the night; 3) condensation/sublimation of water ice and other mechanical effects could affect the bulk porosity at the surface; and 4) the high porosity lowers the bulk thermal inertia by a factor of two. References: [1] N. E. Putzig and M. T. Mellon, Icarus 191, 68 (2007). [2] T. Saruya, T. Toyota, D. Baratoux, and K. Kurita, 41st LPSC, 1306 (2010). [3] M. T. Mellon, R. L. Fergason, and N. E. Putzig, The Martian Surface, Cambridge University Press (2008). [4] M. A. Presley and R. A. Craddock, J. Geophys. Res. 111, E09013 (2006).
Detached Bridgman Growth of Germanium and Germanium-Silicon Alloy Crystals
NASA Technical Reports Server (NTRS)
Szofran, F. R.; Volz, M. P.; Schweizer, M.; Cobb, S. D.; Motakef, S.; Croell, A.; Dold, P.; Curreri, Peter A. (Technical Monitor)
2002-01-01
Earth-based experiments on the science of detached crystal growth are being conducted on germanium and germanium-silicon alloys (2 at% Si average composition) in preparation for a series of experiments aboard the International Space Station (ISS). The purpose of the microgravity experiments includes differentiating among proposed mechanisms contributing to detachment, and confirming or refining our understanding of the detachment mechanism. Because large contact angles are critical to detachment, sessile drop measurements were used to determine the contact angles as a function of temperature and composition for a large number of substrates made of potential ampoule materials. Growth experiments have used pyrolytic boron nitride (pBN) and fused silica ampoules, with the majority of the detached results occurring predictably in the pBN. The contact angles were 173 deg (Ge) and 165 deg (GeSi) for pBN. For fused silica, the contact angle decreases from 150 deg to an equilibrium value of 117 deg (Ge) or from 129 deg to an equilibrium value of 100 deg (GeSi) over the duration of the experiment. The nature and extent of detachment is determined by using profilometry in conjunction with optical and electron microscopy. The stability of detachment has been analyzed, and an empirical model for the conditions necessary to achieve sufficient stability to maintain detached growth for extended periods has been developed. Results in this presentation will show that we have established the effects on detachment of ampoule material, pressure difference above and below the melt, and silicon concentration; samples that are nearly completely detached can be grown repeatedly in pBN.
Fractal analysis of lateral movement in biomembranes.
Gmachowski, Lech
2018-04-01
Lateral movement of a molecule in a biomembrane containing small compartments (0.23-μm diameter) and large ones (0.75 μm) is analyzed using a fractal description of its walk. The early-time dependence of the mean square displacement deviates from linearity owing to the contribution of ballistic motion. In small compartments, walking molecules do not have sufficient time or space to develop an asymptotic relation and the diffusion coefficient deduced from the experimental records is lower than that measured without restrictions. The model makes it possible to deduce the molecule step parameters, namely the step length and time, from data concerning confined and unrestricted diffusion coefficients. This is also possible using experimental results for sub-diffusive transport. The transition from normal to anomalous diffusion does not affect the molecule step parameters. The experimental literature data on molecular trajectories recorded at a high time resolution appear to confirm the modeled value of the mean free path length of DOPE for Brownian and anomalous diffusion. Although the step length and time give the proper values of the diffusion coefficient, the DOPE speed calculated as their quotient is several orders of magnitude lower than the thermal speed. This is interpreted as a result of intermolecular interactions, as confirmed by lateral diffusion of other molecules in different membranes. The molecule step parameters are then utilized to analyze the problem of multiple visits in small compartments. The modeling of the diffusion exponent results in a smooth transition to normal diffusion on entering a large compartment, as observed in experiments.
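The basic relation underlying such analyses, MSD(t) = 4Dt for unrestricted two-dimensional diffusion, can be sketched generically (this is not the paper's fractal model): simulate a Brownian trajectory, compute the time-averaged mean square displacement, and recover the diffusion coefficient from the slope. All numeric parameters are invented.

```python
import numpy as np

# Estimate a lateral diffusion coefficient from a simulated 2-D Brownian walk.
rng = np.random.default_rng(42)
D_true, dt, n_steps = 0.5, 0.01, 200_000          # um^2/s and s, made-up values
steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), size=(n_steps, 2))
traj = np.cumsum(steps, axis=0)

def msd(trajectory, max_lag):
    """Time-averaged mean square displacement for lags 1..max_lag."""
    return np.array([np.mean(np.sum((trajectory[lag:] - trajectory[:-lag]) ** 2,
                                    axis=1)) for lag in range(1, max_lag + 1)])

lags = np.arange(1, 51) * dt
D_est = np.polyfit(lags, msd(traj, 50), 1)[0] / 4.0   # slope / 4 in two dimensions
print(D_est)
```

Restricting the fit to short lags is what a confined-compartment analysis would have to avoid, since there the MSD saturates instead of growing linearly.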
Process for extracting technetium from alkaline solutions
Moyer, Bruce A.; Sachleben, Richard A.; Bonnesen, Peter V.
1995-01-01
A process for extracting technetium values from an aqueous alkaline solution containing at least one alkali metal hydroxide and at least one alkali metal nitrate, the at least one alkali metal nitrate having a concentration of from about 0.1 to 6 molar. The solution is contacted with a solvent consisting of a crown ether in a diluent for a period of time sufficient to selectively extract the technetium values from the aqueous alkaline solution. The solvent containing the technetium values is separated from the aqueous alkaline solution and the technetium values are stripped from the solvent.
Velmurugan, G; Rakkiyappan, R; Vembarasan, V; Cao, Jinde; Alsaedi, Ahmed
2017-02-01
As we know, the notion of dissipativity is an important dynamical property of neural networks. Thus, the analysis of dissipativity of neural networks with time delay is becoming more and more important in the research field. In this paper, the authors establish a class of fractional-order complex-valued neural networks (FCVNNs) with time delay, and intensively study the problem of dissipativity, as well as global asymptotic stability, of the considered FCVNNs with time delay. Based on the fractional Halanay inequality and suitable Lyapunov functions, some new sufficient conditions are obtained that guarantee the dissipativity of FCVNNs with time delay. Moreover, some sufficient conditions are derived in order to ensure the global asymptotic stability of the addressed FCVNNs with time delay. Finally, two numerical simulations are presented to demonstrate the validity of the main results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress by estimating the SPAD value of the canola from canopy reflectance sensed by the three channels (green, red, near-infrared) of the multi-spectral camera. The core of this investigation is the calibration method relating the multi-spectral readings to crop nitrogen levels measured using a SPAD 502 chlorophyll meter. Based on the results obtained from this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
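A calibration of the kind described above can be sketched as a least-squares regression from the three reflectance channels to SPAD readings. The coefficients, reflectance ranges, and noise level below are invented; the paper's actual calibration may use a different functional form.

```python
import numpy as np

# Hypothetical calibration: map (green, red, NIR) canopy reflectance to SPAD.
rng = np.random.default_rng(0)
n = 40
refl = rng.uniform(0.05, 0.6, size=(n, 3))             # columns: G, R, NIR
true_w = np.array([-20.0, -45.0, 30.0]); true_b = 35.0 # assumed "field" relation
spad = refl @ true_w + true_b + rng.normal(0, 0.5, n)  # synthetic SPAD readings

# least-squares fit: SPAD ≈ w·[G, R, NIR] + b
X = np.column_stack([refl, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, spad, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - spad) ** 2))
print(coef, rmse)
```

Once fitted, the same linear map would be applied to each image pixel or plot to produce a SPAD estimate for variable-rate nitrogen decisions.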
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, T.
Depletion of the earth's natural resources is rapidly forcing society to choose values and life styles that will enable survival and well-being for all. Fundamental changes in institutions can be accomplished by developing the self-discipline to limit population and demands. These new values must be adopted: stewardship for growth; austerity for excessive consumption; permanence for profit; responsibilities for rights; people for professions; quality for quantity; sufficiency for affluence; localization for centralization; equity for urbanization; work for leisure; and jobs for machines. People must develop both the capacity for self-sufficiency and the ability to develop interdependent relationships. By studying successful, but less consuming, countries, the U.S. can develop technologies that are fundamentally better and more responsive to scarcity. Evidence exists that smaller scales of operation are better. To accomplish this change, responsibility must be assumed by individuals, communities, governments, and all professional and industrial groups. (17 references) (DCK)
Thouand, Gérald; Durand, Marie-José; Maul, Armand; Gancet, Christian; Blok, Han
2011-01-01
The European REACH Regulation (Registration, Evaluation, Authorization of CHemical substances) implies, among other things, the evaluation of the biodegradability of chemical substances produced by industry. A large set of test methods is available, including detailed information on the appropriate conditions for testing. However, the inoculum used for these tests constitutes a “black box.” If biodegradation is achievable from the growth of a small group of specific microbial species with the substance as the only carbon source, the result of the test depends largely on the cell density of this group at “time zero.” If these species are relatively rare in an inoculum that is normally used, the likelihood of inoculating a test with sufficient specific cells becomes a matter of probability. Normally this probability increases with total cell density and with the diversity of species in the inoculum. Furthermore, the history of the inoculum, e.g., a possible pre-exposure to the test substance or similar substances, will have a significant influence on the probability. A high probability can be expected for substances that are widely used and regularly released into the environment, whereas a low probability can be expected for new xenobiotic substances that have not yet been released into the environment. Be that as it may, once the inoculum sample contains sufficient specific degraders, the performance of the biodegradation will follow a typical S-shaped growth curve which depends on the specific growth rate under laboratory conditions, the so-called F/M ratio (ratio between food and biomass), and possible more or less toxic or recalcitrant metabolites. Normally regulators require the evaluation of the growth curve using a simple approach such as half-time. Unfortunately probability and biodegradation half-time are very often confused.
As the half-time values reflect laboratory conditions which are quite different from environmental conditions (after a substance is released), these values should not be used to quantify and predict environmental behavior. The probability value could be of much greater benefit for predictions under realistic conditions. The main issue in the evaluation of probability is that the result is not based on a single inoculum from an environmental sample, but on a variety of samples. These samples can be representative of regional or local areas, climate regions, water types, and history, e.g., pristine or polluted. The above concept has provided us with a new approach, namely “Probabio.” With this approach, persistence is not only regarded as a simple intrinsic property of a substance, but also as the capability of various environmental samples to degrade a substance under realistic exposure conditions and F/M ratio. PMID:21863143
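The probability argument above has a simple quantitative core: if specific degraders are present at a given concentration and the test is inoculated with a fixed volume, the number of degraders captured is approximately Poisson distributed, so the chance that at least one is present is 1 - exp(-c·v). The concentrations and volume below are invented for illustration.

```python
import math

# P(inoculum contains at least one specific degrader), Poisson sampling model:
# degrader concentration c (cells/mL), inoculated volume v (mL).
def p_degrader_present(conc_per_ml, volume_ml):
    return 1.0 - math.exp(-conc_per_ml * volume_ml)

for conc in (0.001, 0.01, 0.1, 1.0):          # rare -> common degraders
    print(conc, round(p_degrader_present(conc, volume_ml=100.0), 3))
```

This makes explicit why the test outcome for rare degraders is a matter of probability rather than an intrinsic property of the substance: doubling the inoculum volume or cell density directly raises the capture probability.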
Value of Adaptive Drought Forecasting and Management for the ACF River Basin in the Southeast U.S.
NASA Astrophysics Data System (ADS)
Georgakakos, A. P.; Kistenmacher, M.
2016-12-01
In recent times, severe droughts in the southeast U.S. occur every 6 to 10 years and last for up to 4 years. During such drought episodes, ACF River Basin supplies decline by up to 50% of their normal levels, and water stresses increase rather markedly, exacerbating stakeholder anxiety and conflicts. As part of the ACF Stakeholder planning process, GWRI has developed new tools and carried out comprehensive assessments to provide quantitative answers to several important questions related to drought prediction and management: (i) Can dry and wet climatic periods be reliably anticipated with sufficiently long lead times? What drought indices can support reliable, skillful, and long-lead forecasts? (ii) What management objectives can seasonal climate forecasts benefit? How should benefits/impacts be shared? (iii) What operational adjustments are likely to mitigate stakeholder impacts or increase benefits consistent with stakeholder expectations? Regarding drought prediction, a large number of indices were defined and tested at different basin locations and lag times. These included local/cumulative unimpaired flows (UIFs) at 10 river nodes; Mean Areal Precipitation (MAP); Standard Precipitation Index (SPI); Palmer Drought Severity Index; Palmer Modified Drought Index; Palmer Z-Index; Palmer Hydrologic Drought Severity Index; and Soil Moisture (from the GWRI watershed model). Our findings show that all ACF sub-basins exhibit good forecast skill throughout the year and with sufficient lead time. Index variables with high explanatory value include previous UIFs, soil moisture states (generated by the GWRI watershed model), and PDSI. Regarding drought management, assessments with coupled forecast-management schemes demonstrate that the use of adaptive forecast-management procedures improves reservoir operations and meets basin demands more reliably.
Such improvements can support better management of lake levels, higher environmental and navigation flows, higher dependable power generation hours, and better management of consumptive uses without adverse impacts on other stakeholder interests. However, realizing these improvements requires (1) usage of adaptive reservoir management procedures (incorporating forecasts), and (2) stakeholder agreement on equitable benefit sharing.
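One of the indices in the list above, the Standardized Precipitation Index, has a compact standard recipe: fit a gamma distribution to accumulated precipitation and map each total to a standard-normal quantile. The sketch below uses a synthetic rainfall series, not ACF basin data, and omits refinements such as handling zero-precipitation months.

```python
import numpy as np
from scipy import stats

# SPI sketch: gamma fit to monthly precipitation, then normal-quantile transform.
rng = np.random.default_rng(7)
precip = rng.gamma(shape=2.0, scale=50.0, size=600)    # synthetic monthly totals (mm)

# fit a gamma distribution (location fixed at zero) and transform to z-scores
shape, loc, scale = stats.gamma.fit(precip, floc=0.0)
spi = stats.norm.ppf(stats.gamma.cdf(precip, shape, loc=loc, scale=scale))
print(spi.mean(), spi.std())   # roughly 0 and 1 by construction
```

Negative SPI values then flag drier-than-normal periods on a common scale, which is what makes the index comparable across sub-basins and seasons.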
Spectral analysis of shielded gamma ray sources using precalculated library data
NASA Astrophysics Data System (ADS)
Holmes, Thomas Wesley; Gardner, Robin P.
2015-11-01
In this work, an approach has been developed for determining the intensity of a shielded source by first determining, from a passively collected gamma-ray spectrum, the thicknesses of three different shielding materials through comparison with predetermined shielded spectra. These evaluations depend on the accuracy and validity of the predetermined library spectra, which were created by varying the thicknesses of the three chosen materials (lead, aluminum, and wood) used to simulate any actual shielding. Each of the spectra was generated using MCNP5 with a sufficiently large number of histories to ensure a low relative error in each channel. The materials were held in the same respective order from source to detector, and each material took three individual thicknesses plus a null condition. This produced two separate data sets of 27 total shielding material situations, and subsequent predetermined libraries were created for each radionuclide source used. The technique used to calculate the thicknesses of the materials implements a Levenberg-Marquardt nonlinear search employing tri-linear interpolation within the respective predetermined libraries, channel by channel, for the supplied unknown input spectrum. Given that the nonlinear parameters require an initial guess, the approach demonstrates first that when the correct values are input, the correct thicknesses are found. It then demonstrates that when multiple trials of random values are input for each of the nonlinear parameters, the average of the calculated solutions that successfully converge also produces the correct thicknesses. When sufficient information is known about the detection situation at hand, the method was shown to produce reasonable results and can serve as a good preliminary solution.
This technique has the capability to be used in a variety of full spectrum inverse analysis problems including homeland security issues.
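The library-comparison step can be sketched as follows. The exponential-attenuation model, coefficients, and thickness grid are invented stand-ins for the MCNP5 libraries, and scipy's `least_squares` with `method='lm'` (MINPACK's Levenberg-Marquardt) plays the role of the nonlinear search; interpolation is trilinear in the three thicknesses because the channel axis is only ever evaluated at its grid nodes.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import least_squares

channels = np.linspace(0.1, 2.0, 64)              # pseudo energy axis (MeV)
mu = np.array([[0.40, 0.05, 0.01],                # invented attenuation terms:
               [0.05, 0.30, 0.02],                # each row gives one material's
               [0.02, 0.05, 0.20]])               # energy dependence
base = np.exp(-channels)                          # unshielded pseudo-spectrum

def spectrum(t):                                  # t = (t_pb, t_al, t_wood)
    att = sum(t[m] * (mu[m, 0] * np.exp(-channels)
                      + mu[m, 1] + mu[m, 2] * channels) for m in range(3))
    return base * np.exp(-att)

thick = np.linspace(0.0, 3.0, 7)                  # thickness grid per material
library = np.array([[[spectrum((a, b, c)) for c in thick]
                     for b in thick] for a in thick])
interp = RegularGridInterpolator((thick, thick, thick, channels), library,
                                 bounds_error=False, fill_value=None)

def model(t):
    pts = np.column_stack([np.tile(t, (channels.size, 1)), channels])
    return interp(pts)

unknown = spectrum((1.3, 0.4, 2.1))               # pretend this was measured
res = least_squares(lambda t: model(t) - unknown,
                    x0=[1.5, 1.5, 1.5], method='lm')
print(res.x)                                      # fitted thicknesses
```

The residual between the interpolated library and the "measured" spectrum is what the Levenberg-Marquardt search minimizes; with a real MCNP library the only change would be loading the grid instead of synthesizing it.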
van den Berg, Joyce; Gordon, Bernardus B M; Snijders, Marcus P M L; Vandenbussche, Frank P H A; Coppus, Sjors F P J
2015-12-01
Early pregnancy failure (EPF) is a common complication of pregnancy. Surgical intervention carries a risk of complications and, therefore, medical treatment appears to be a safe alternative. Unfortunately, the current medical treatment with misoprostol alone has complete evacuation rates between 53% and 87%. Some reports suggest that sequential treatment with mifepristone and misoprostol leads to higher success rates than misoprostol alone. To evaluate the added value of mifepristone to current non-surgical treatment regimens in women with EPF, we performed a systematic literature search. Electronic databases were searched: PubMed, Cochrane Library, Current Controlled Trials, and ClinicalTrials.gov. Clinical studies, both randomised and non-randomised trials, reporting on the added value of mifepristone to current non-surgical treatment regimens in women with EPF were included. Data from sixteen studies were extracted using a data extraction sheet (based on the Cochrane Consumers and Communication Review Group's data extraction template). The methodological quality was assessed using the Cochrane Collaboration Risk of Bias tool. In five randomised and eleven non-randomised trials, success rates of sequential treatment with mifepristone and misoprostol in case of EPF varied between 52% and 95%. Large heterogeneity existed in treatment regimens and comparators between studies. The existing evidence is insufficient to draw firm conclusions about the added value of mifepristone to misoprostol alone. A sufficiently powered randomised, double blinded placebo-controlled trial is urgently required to test whether, in EPF, the sequential combination of mifepristone with misoprostol is superior to misoprostol only. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
How the reference values for serum parathyroid hormone concentration are (or should be) established?
Souberbielle, J-C; Brazier, F; Piketty, M-L; Cormier, C; Minisola, S; Cavalier, E
2017-03-01
Well-validated reference values are necessary for a correct interpretation of a serum PTH concentration. Establishing PTH reference values requires recruiting a large reference population. Exclusion criteria for this population can be defined as any situation possibly inducing an increase or a decrease in PTH concentration. As recommended in the recent guidelines on the diagnosis and management of asymptomatic primary hyperparathyroidism, PTH reference values should be established in vitamin D-replete subjects with a normal renal function, with possible stratification according to various factors such as age, gender, menopausal status, body mass index, and race. A consensus about analytical/pre-analytical aspects of PTH measurement is also needed, with special emphasis on the nature of the sample (plasma or serum), the time, and the fasting/non-fasting status of the blood sample. Our opinion is that the blood sample for PTH measurement should be obtained in the morning after an overnight fast. Furthermore, despite the longer stability of the PTH molecule in EDTA plasma, we prefer serum as it allows measuring calcium, a prerequisite for a correct interpretation of a PTH concentration, on the same sample. Once a consensus is reached, we believe an important international multicentre study should be performed to recruit a very extensive reference population of apparently healthy vitamin D-replete subjects with a normal renal function in order to establish the PTH normative data. Due to the huge inter-method variability in PTH measurement, a sufficient quantity of blood sample should be obtained to allow measurement with as many PTH kits as possible.
Quantitative framework for preferential flow initiation and partitioning
Nimmo, John R.
2016-01-01
A model for preferential flow in macropores is based on the short-range spatial distribution of soil matrix infiltrability. It uses elementary areas at two different scales. One is the traditional representative elementary area (REA), which includes sufficient heterogeneity to typify larger areas, as for measuring field-scale infiltrability. The other, called an elementary matrix area (EMA), is smaller, but large enough to represent the local infiltrability of the soil matrix material between macropores. When water is applied to the land surface, each EMA absorbs water up to the rate of its matrix infiltrability. Excess water flows into a macropore, becoming preferential flow. The land surface then can be represented by a mesoscale (EMA-scale) distribution of matrix infiltrabilities. Total preferential flow at a given depth is the sum of contributions from all EMAs. To apply the model, one case study with multi-year field measurements of both preferential and diffuse fluxes at a specific depth was used to obtain parameter values by inverse calculation. The results quantify the preferential-diffuse partition of flow from individual storms that differed in rainfall amount, intensity, antecedent soil water, and other factors. Another case study provided measured values of matrix infiltrability to estimate parameter values for comparison and illustrative predictions. These examples give a self-consistent picture from the combination of parameter values, directions of sensitivities, and magnitudes of differences caused by different variables. One major practical use of this model is to calculate the dependence of preferential flow on climate-related factors, such as varying soil wetness and rainfall intensity.
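The partitioning rule stated above, each EMA absorbs up to its own matrix infiltrability and routes the excess to macropores, can be sketched directly. The lognormal spread of infiltrabilities and the applied rates are illustrative numbers, not the paper's calibrated parameters.

```python
import numpy as np

# EMA-scale partitioning of an applied water flux into diffuse (matrix)
# and preferential (macropore) flow.
rng = np.random.default_rng(1)
n_ema = 1000
# assumed lognormal spread of matrix infiltrabilities across EMAs (cm/h)
infiltrability = rng.lognormal(mean=0.0, sigma=0.8, size=n_ema)

def partition(applied_rate):
    """Split an applied flux into mean matrix and mean preferential flow."""
    matrix = np.minimum(applied_rate, infiltrability)  # each EMA absorbs its limit
    preferential = applied_rate - matrix               # excess enters macropores
    return matrix.mean(), preferential.mean()

for rate in (0.5, 2.0, 8.0):
    m, p = partition(rate)
    print(f"rate={rate}: matrix={m:.2f}, preferential={p:.2f}")
```

Because the excess is summed over the whole EMA distribution, the preferential fraction grows with rainfall intensity, which is the climate dependence the model is meant to capture.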
Measuring and Validating the Levels of Brain-Derived Neurotrophic Factor in Human Serum
Naegelin, Yvonne; Dingsdale, Hayley; Säuberli, Katharina; Schädelin, Sabine; Kappos, Ludwig
2018-01-01
Brain-derived neurotrophic factor (BDNF) secreted by neurons is a significant component of synaptic plasticity. In humans, it is also present in blood platelets where it accumulates following its biosynthesis in megakaryocytes. BDNF levels are thus readily detectable in human serum and it has been abundantly speculated that they may somehow serve as an indicator of brain function. However, there is a great deal of uncertainty with regard to the range of BDNF levels that can be considered normal, how stable these values are over time and even whether BDNF levels can be reliably measured in serum. Using monoclonal antibodies and a sandwich ELISA, this study reports on BDNF levels in the serum of 259 volunteers with a mean value of 32.69 ± 8.33 ng/ml (SD). The mean value for the same cohort after 12 months was not significantly different (N = 226, 32.97 ± 8.36 ng/ml SD, p = 0.19). Power analysis of these values indicates that relatively large cohorts are necessary to identify significant differences, requiring a group size of 60 to detect a 20% change. The levels determined by ELISA could be validated by Western blot analyses using a BDNF monoclonal antibody. While no association was observed with gender, a weak, positive correlation was found with age. The overall conclusions are that BDNF levels can be reliably measured in human serum, that these levels are quite stable over one year, and that comparisons between two populations may only be meaningful if cohorts of sufficient sizes are assembled. PMID:29662942
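The power-analysis claim above can be illustrated with the standard two-sample z-approximation for the required per-group size. This simple formula, applied to the reported mean and SD, does not necessarily reproduce the authors' figure of 60, which depends on their exact design and power assumptions:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.8):
    """Two-sample z-approximation for the group size needed to detect
    a mean difference `delta` given a common standard deviation `sd`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)
    z_beta = z(power)
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

mean, sd = 32.69, 8.33     # serum BDNF values reported in the abstract (ng/ml)
delta = 0.20 * mean        # a 20% change
print(n_per_group(sd, delta))
```

With these inputs the approximation gives 26 per group at 80% power and 35 at 90%; the larger cohort of 60 cited in the abstract presumably reflects the authors' stricter design assumptions.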
Strong pathways for incorporation of terrestrially derived organic matter into benthic communities
NASA Astrophysics Data System (ADS)
McLeod, Rebecca J.; Wing, Stephen R.
2009-05-01
In Fiordland, New Zealand, large volumes of organic matter are deposited into the marine environment from pristine forested catchments. Analyses of δ15N, δ13C and δ34S were employed to determine whether these inputs were contributing to marine food webs via assimilation by common macroinvertebrates inhabiting the inner reaches of the fjords. Terrestrially derived organic matter (TOM) had values of δ15N, δ13C and δ34S that were distinct from other carbon source pools, providing sufficient power to quantify the contribution of TOM to the benthic food web. Isotopic values among macroinvertebrates varied significantly, with consistently low values of δ15N, δ13C and δ34S for the abundant deposit feeders Echinocardium cordatum (Echinodermata) and Pectinaria australis (Annelida), indicating assimilation of TOM. High concentrations of bacterial fatty acid biomarkers in E. cordatum, and values of δ13C of these biomarkers similar to TOM (-27 to -30‰) confirmed that TOM is indirectly assimilated by these sea urchins via heterotrophic bacteria. TOM was also found to enter the infaunal food web via chemoautotrophic bacteria that live symbiotically within Solemya parkinsonii (Bivalvia). Echinocardium cordatum, Pectinaria australis and S. parkinsonii comprised up to 33.5% of the biomass of the macroinfaunal community, and thus represent strong pathways for movement of organic matter from the forested catchments into the benthic food web. This demonstration of connectivity among adjacent marine and terrestrial habitats has important implications for coastal land management, and highlights the importance of intact coastal forests to marine ecosystem function.
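A minimal version of the kind of isotope mass-balance mixing model implied above, with three sources, two tracers, and a sum-to-one constraint on the source fractions. The end-member and consumer signatures are hypothetical, not the study's values, and a real analysis would also correct for trophic fractionation:

```python
import numpy as np

# Hypothetical end-member signatures: columns are TOM, phytoplankton
# and macroalgae; rows are d13C, d34S and a mass-balance constraint.
A = np.array([
    [-28.5, -21.0, -17.0],   # d13C of the three sources (permil)
    [  5.0,  19.0,  17.0],   # d34S of the three sources (permil)
    [  1.0,   1.0,   1.0],   # source fractions must sum to 1
])
b = np.array([-25.0, 10.0, 1.0])   # hypothetical consumer signature, plus 1

# Solve the square linear system for the source contributions.
fractions = np.linalg.solve(A, b)
print(dict(zip(["TOM", "phytoplankton", "macroalgae"], fractions.round(3))))
```

With three isotopes (as in the study) the system is overdetermined and would be solved by least squares or a Bayesian mixing model rather than exactly.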
Ohta, Y; Chiba, S; Imai, Y; Kamiya, Y; Arisawa, T; Kitagawa, A
2006-12-01
We examined whether ascorbic acid (AA) deficiency aggravates water immersion restraint stress (WIRS)-induced gastric mucosal lesions in genetically scorbutic ODS rats. ODS rats received a scorbutic diet with either distilled water containing AA (1 g/l) or distilled water alone for 2 weeks. AA-deficient rats had 12% of the gastric mucosal AA content of AA-sufficient rats. AA-deficient rats showed more severe gastric mucosal lesions than AA-sufficient rats at 1, 3 or 6 h after the onset of WIRS, although AA-deficient rats had only a slight decrease in gastric mucosal AA content, while AA-sufficient rats had a large decrease in that content. AA-deficient rats showed larger decreases in gastric mucosal nonprotein SH and vitamin E contents and a larger increase in gastric mucosal lipid peroxide content than AA-sufficient rats at 1, 3 or 6 h of WIRS. These results indicate that AA deficiency aggravates WIRS-induced gastric mucosal lesions in ODS rats by enhancing oxidative damage in the gastric mucosa.
14 CFR 23.773 - Pilot compartment view.
Code of Federal Regulations, 2010 CFR
2010-01-01
... side windows sufficiently large to provide the view specified in paragraph (a)(1) of this section... be shown that the windshield and side windows can be easily cleared by the pilot without interruption...
Energy-Dependent Ionization States of Shock-Accelerated Particles in the Solar Corona
NASA Technical Reports Server (NTRS)
Reames, Donald V.; Ng, C. K.; Tylka, A. J.
2000-01-01
We examine the range of possible energy dependence of the ionization states of ions that are shock-accelerated from the ambient plasma of the solar corona. If acceleration begins in a region of moderate density, sufficiently low in the corona, ions above about 0.1 MeV/amu approach an equilibrium charge state that depends primarily upon their speed and only weakly on the plasma temperature. We suggest that the large variations of the charge states with energy for ions such as Si and Fe observed in the 1997 November 6 event are consistent with stripping in moderately dense coronal plasma during shock acceleration. In the large solar-particle events studied previously, acceleration occurs sufficiently high in the corona that even Fe ions up to 600 MeV/amu are not stripped of electrons.
Front propagation and clustering in the stochastic nonlocal Fisher equation
NASA Astrophysics Data System (ADS)
Ganan, Yehuda A.; Kessler, David A.
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the processes begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
System for producing a uniform rubble bed for in situ processes
Galloway, Terry R.
1983-01-01
A method and a cutter for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head (72) has a hollow body (76) with a generally circular base and sloping upper surface. A hollow shaft (74) extends from the hollow body (76). Cutter teeth (78) are mounted on the upper surface of the body (76) and relatively small holes (77) are formed in the body (76) between the cutter teeth (78). Relatively large peripheral flutes (80) around the body (76) allow material to drop below the drill head (72). A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale.
ERIC Educational Resources Information Center
Figgis, Jane; Alderson, Anna; Blackwell, Anna; Butorac, Anne; Mitchell, Keith; Zubrick, Ann
A study examined the feasibility of using case studies to convince enterprises to value training and learning. First, 10 Australian enterprises were studied in sufficient depth to construct a comprehensive picture of each enterprise, its culture, and the strategies it uses to develop the skills and knowledge of individual employees and the…
NASA Astrophysics Data System (ADS)
Lepore, Simone; Polkowski, Marcin; Grad, Marek
2018-02-01
The P-wave velocities (Vp) within the East European Craton in Poland are well known through several seismic experiments, which permitted building a high-resolution 3D model down to 60 km depth. However, these seismic data do not provide sufficient information about the S-wave velocities (Vs). For this reason, this paper presents the values of lithospheric Vs and P-wave-to-S-wave velocity ratios (Vp/Vs) calculated from the ambient noise recorded during 2014 at the "13 BB star" seismic array (13 stations, 78 midpoints) located in northern Poland. The 3D Vp model in the area of the array consists of six sedimentary layers having total thickness within 3-7 km and Vp in the range 1.8-5.3 km/s, a three-layer crystalline crust of total thickness 40 km and Vp within 6.15-7.15 km/s, and the uppermost mantle, where Vp is about 8.25 km/s. The Vs and Vp/Vs values are calculated by the inversion of the surface-wave dispersion curves extracted from the noise cross correlation between all the station pairs. Due to the strong velocity differences among the layers, several modes are recognized in the 0.02-1 Hz frequency band; therefore, multimodal Monte Carlo inversions are applied. The calculated Vs and Vp/Vs values in the sedimentary cover range within 0.99-2.66 km/s and 1.75-1.97, as expected. In the upper crust, the Vs value (3.48 ± 0.10 km/s) is very low compared to the starting value of 3.75 ± 0.10 km/s. Consequently, the Vp/Vs value is very large (1.81 ± 0.03). To explain this, the calculated values are compared with those for other old cratonic areas.
Scaling in two-fluid pinch-off
NASA Astrophysics Data System (ADS)
Pommer, Chris; Harris, Michael; Basaran, Osman
2010-11-01
The physics of two-fluid pinch-off, which arises whenever drops, bubbles, or jets of one fluid are ejected from a nozzle into another fluid, is scientifically important and technologically relevant. While the breakup of a drop in a passive environment is well understood, the physics of pinch-off when both the inner and outer fluids are dynamically active remains inadequately understood. Here, the breakup of a compound jet whose core and shell are incompressible Newtonian fluids is analyzed computationally when the interior is a "bubble" and the exterior is a liquid. The numerical method employed is an implicit method of lines ALE algorithm which uses finite elements with elliptic mesh generation and adaptive finite differences for time integration. Thus, the new approach neither starts with a priori idealizations, as has been the case with previous computations, nor is limited to length scales above that set by the wavelength of visible light as in any experimental study. In particular, three distinct responses are identified as the ratio m of the outer fluid's viscosity to the inner fluid's viscosity is varied. For small m, simulations show that the minimum neck radius r initially scales with time τ before breakup as r ∼ τ^0.58 (in accord with previous experiments and inviscid fluid models) but that r ∼ τ once r becomes sufficiently small. For intermediate and large values of m, r ∼ τ^α, where the exponent α may not equal one, once again as r becomes sufficiently small.
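Scaling exponents of this kind are typically extracted as the slope of the neck radius against time to breakup in log-log coordinates. A minimal sketch on synthetic data, with the prefactor and exponent invented to mimic the small-m regime:

```python
import numpy as np

# Synthetic minimum-neck-radius data r = C * tau**alpha, standing in
# for a measured r(tau) series near breakup; C and alpha are invented.
tau = np.logspace(-6, -2, 50)   # time before breakup (arbitrary units)
r = 0.7 * tau**0.58

# The scaling exponent is the slope of the log-log relation.
alpha_est, _ = np.polyfit(np.log(tau), np.log(r), 1)
print(round(alpha_est, 3))
```

In practice a crossover such as the reported transition from τ^0.58 to τ would show up as a change of slope, so the fit would be restricted to one scaling window at a time.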
Code of Federal Regulations, 2013 CFR
2013-01-01
... economic planning activities to determine the viability of a potential value-added venture, and... and sufficient to evidence the viability of the venture. It may also contain background information... analysis by a qualified consultant of the economic, market, technical, financial, and management...
Code of Federal Regulations, 2014 CFR
2014-01-01
... economic planning activities to determine the viability of a potential value-added venture, and... and sufficient to evidence the viability of the venture. It may also contain background information... analysis by a qualified consultant of the economic, market, technical, financial, and management...
Code of Federal Regulations, 2012 CFR
2012-01-01
... economic planning activities to determine the viability of a potential value-added venture, and... and sufficient to evidence the viability of the venture. It may also contain background information... analysis by a qualified consultant of the economic, market, technical, financial, and management...
Taguchi, A; Asano, A; Ohtsuka, M; Nakamoto, T; Suei, Y; Tsuda, M; Kudo, Y; Inagaki, K; Noguchi, T; Tanimoto, K; Jacobs, R; Klemetti, E; White, S C; Horner, K
2008-07-01
Mandibular cortical erosion detected on dental panoramic radiographs (DPRs) may be useful for identifying women with osteoporosis, but little is known about the variation in diagnostic efficacy of observers worldwide. The purpose of this study was to measure the accuracy in identifying women at risk for osteoporosis in a worldwide group of observers using DPRs. We constructed a website that included background information about osteoporosis screening and instructions regarding the interpretation of mandibular cortical erosion. DPRs of 100 Japanese postmenopausal women aged 50 years or older who had completed skeletal bone mineral measurements by dual energy X-ray absorptiometry were digitized at 300 dpi. These were displayed on the website and used for the evaluation of diagnostic efficacy. Sixty observers aged 25 to 66 years recruited from 16 countries participated in this study. These observers classified cortical erosion into one of three groups (none, mild to moderate, and severe) on the website via the Internet, twice with an approximately 2-week interval. The diagnostic efficacy of the Osteoporosis Self-Assessment Tool (OST), a simple clinical decision rule based on age and weight, was also calculated and compared with that of cortical erosion. The overall mean sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the 60 observers in identifying women with osteoporosis by cortical erosion on DPRs were 82.5, 46.2, 46.7, and 84.0%, respectively. Those same values by the OST index were 82.9, 43.1, 43.9, and 82.4%, respectively. The intra-observer agreement in classifying cortical erosion on DPRs was sufficient (weighted kappa values>0.6) in 36 (60%) observers. This was significantly increased in observers who specialized in oral radiology (P<0.05). 
In the 36 observers with sufficient intra-observer agreement, the overall mean sensitivity, specificity, PPV, and NPV in identifying women with osteoporosis by any cortical erosion were 83.5, 48.7, 48.3, and 85.7%, respectively. The mean PPV and NPV were significantly higher in the 36 observers with sufficient intra-observer agreement than in the 24 observers with insufficient intra-observer agreement. Our results reconfirm the efficacy of cortical erosion findings in identifying postmenopausal women at risk for osteoporosis, among observers with sufficient intra-observer agreement. Information gathered from radiographic examination is at least as useful as that gathered from the OST index.
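The screening metrics quoted above follow from confusion-matrix counts in the usual way. A small sketch with illustrative counts only (not the study's raw data), chosen to land near the reported means:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts for one hypothetical observer reading 111 radiographs.
m = diagnostic_metrics(tp=33, fp=38, fn=7, tn=33)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of osteoporosis in the study group, which is why cross-study comparisons usually quote all four.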
[Human nutrition with reference to animals as sources of protein (author's transl)].
de Wijn, J F
1981-03-01
In achieving adequate nutrition for all people in the world, foods of animal origin are indispensable to supply sufficient protein and essential nutrients. All foods of animal origin have a number of characteristics in common, in view of which they should be regarded as highly valuable human food because of the considerable biological value of the proteins, their ready digestibility and their palatability. A number of nutritional features of animal versus vegetable protein are discussed. Several queries have to be raised regarding the health aspects of the copious consumption of animal protein, as has increasingly become the practice in Europe. The consumption of dishes prepared from food of animal origin high in protein will inevitably be associated with a high fat content. It is not likely that the incidence of human cancer will be increased by allegedly carcinogenic effects of meat per se; however, the use of nitrite in meats may be hazardous when meat consumption is considerable, because of the carcinogenic effects of nitrosamines. In addition, there are drawbacks to the copious consumption of food of animal origin as part of the daily diet because of its high fat content and low dietary fibre content. A conference of managers in the animal-food industry and experts from the professional medical and dietetic organizations would be a desirable step towards achieving an optimum situation. Sufficient production and distribution will not fully ensure adequate nutrition of animal origin. Its valuable nutrients must be available from food which is acceptable to the individual consumer. The factors which decide what is eaten, and why, are not known to a sufficient extent. Cultural and environmental factors also play a highly decisive role in the matter. There are religious rules regarding food of animal origin, which obtain for large sections of the population all over the world.
Other practices concerning the consumption of food of animal origin are also determined by the pressure of society regarding the attitude towards animals.
Zhang, Yun; Okubo, Ryuhi; Hirano, Mayumi; Eto, Yujiro; Hirano, Takuya
2015-01-01
Spatially separated entanglement is demonstrated by interfering two high-repetition squeezed pulse trains. The entanglement correlation of the quadrature amplitudes between individual pulses is interrogated. It is characterized in terms of the sufficient inseparability criterion with an optimum result of in the frequency domain and in the time domain. The quantum correlation is also observed when the two measurement stations are separated by a physical distance of 4.5 m, which is sufficiently large to demonstrate the space-like separation, after accounting for the measurement time. PMID:26278478
NASA Technical Reports Server (NTRS)
2008-01-01
When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.
NASA Astrophysics Data System (ADS)
Kirkil, Gokhan; Constantinescu, George
2009-06-01
Detailed knowledge of the dynamics of large-scale turbulence structures is needed to understand the geomorphodynamic processes around in-stream obstacles present in rivers. Detached Eddy Simulation is used to study the flow past a high-aspect-ratio rectangular cylinder (plate) mounted in a relatively shallow flat-bed channel at a channel Reynolds number of 2.4 × 10^5. Similar to other flows past surface-mounted bluff bodies, the large amplification of the turbulence inside the horseshoe vortex system arises because the core of the main necklace vortex is subject to large-scale bimodal oscillations. The presence of a sharp edge at the flanks of the obstruction fixes the position of the flow separation at all depths and induces the formation and shedding of very strong wake rollers over the whole channel depth. Compared with the case of a circular cylinder, where the intensity of the rollers decays significantly in the near-bed region because the incoming flow velocity is not sufficient to force the wake to transition from the subcritical to the supercritical regime, in the case of a high-aspect-ratio rectangular cylinder the passage of the rollers was found to induce high bed-shear stresses at large distances (6-8 D) behind the obstruction. Also, the nondimensional values of the pressure root-mean-square fluctuations at the bed were found to be about 1 order of magnitude higher than the ones predicted for circular cylinders. Overall, this shows that the shape of the in-stream obstruction can greatly modify the dynamics of the large-scale coherent structures, the nature of their interactions, and ultimately, their capability to entrain and transport sediment particles and the speed at which the scour process evolves during its initial stages.
NASA Astrophysics Data System (ADS)
Nagano, Hirohiko; Iwata, Hiroki
2017-03-01
Alaska wildfires may play an important role in nitrogen (N) dry deposition in Alaskan boreal forests. Here we used annual N dry deposition data measured by CASTNET at Denali National Park (DEN417) during 1999-2013, to evaluate the relationships between wildfire extent and N dry deposition in Alaska. We established six potential factors for multiple regression analysis, including burned area within 100 km of DEN417 (BA100km) and in other distant parts of Alaska (BAAK), the sum of indexes of North Atlantic Oscillation and Arctic Oscillation (OI), number of days with negative OI (OIday), precipitation (PRCP), and number of days with PRCP (PRCPday). Multiple regression analysis was conducted for both time scales, annual (using only annual values of factors) and six-month (using annual values of BAAK and BA100km, and fire and non-fire seasons' values of other four factors) time scales. Together, BAAK, BA100km, and OIday, along with PRCPday in the case of the six-month scale, explained more than 92% of the interannual variation in N dry deposition. The influence of BA100km on N dry deposition was ten-fold greater than from BAAK; the quantitative contribution was almost zero, however, due to the small BA100km. BAAK was the leading explanatory factor, with a 15 ± 14% contribution. We further calculated N dry deposition during 1950-2013 using the obtained regression equation and long-term records for the factors. The N dry deposition calculated for 1950-2013 revealed that an increased occurrence of wildfires during the 2000s led to the maximum N dry deposition exhibited during this decade. As a result, the effect of BAAK on N dry deposition remains sufficiently large, even when large possible uncertainties (>40%) in the measurement of N dry deposition are taken into account for the multiple regression analysis.
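The analysis described above is an ordinary least-squares multiple regression of annual N dry deposition on candidate factors. A minimal sketch on invented data, where the predictors stand in for BAAK, BA100km and OIday and all effect sizes are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented standardized predictors for n = 15 years (1999-2013),
# standing in for BA_AK, BA_100km and OI_day respectively.
n = 15
X = rng.normal(size=(n, 3))
# Invented effect sizes: the second factor's influence is ten-fold
# the first's, echoing the relative influences reported above.
y = X @ np.array([0.15, 1.5, 0.05]) + 0.02 * rng.normal(size=n)

# Ordinary least squares with an intercept column, plus R-squared.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
print(coef.round(2), round(r2, 3))
```

The fitted equation can then be driven by long-term records of the factors, which is how the abstract's 1950-2013 hindcast of N dry deposition is obtained.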
Waterloo Eye Study: data abstraction and population representation.
Machan, Carolyn M; Hrynchak, Patricia K; Irving, Elizabeth L
2011-05-01
To determine data quality in the Waterloo Eye Study (WatES) and compare the WatES age/sex distribution to the general population. Six thousand three hundred ninety-seven clinic files were reviewed at the University of Waterloo, School of Optometry. Abstracted information included patient age, sex, presenting chief complaint, entering spectacle prescription, refraction, binocular vision, and disease data. Mean age and age distributions were determined for the entire study group and both sexes. These results were compared with Statistics Canada (2006) estimates and information on Canadian optometric practices. Inter- and intraabstractor reliability was determined through double entry of 425 and 50 files, respectively; the Cohen kappa statistic (K) was calculated for qualitative data and the intraclass correlation coefficient (ICC) for quantitative data. Availability of data within the files was determined through missing data rates. The age of the patients in the WatES ranged from 0.2 to 93.9 years (mean age, 42.5 years), with all age groups younger than 85 years well represented. Females comprised 54.1% and males 45.9% of the study group. There were more older patients (>65 years) and younger patients (<10 years) than in the population at large. K values were highest for demographic information (e.g., sex, 0.96) and averaged slightly less for most clinical data requiring some abstractor interpretation (0.71 to 1.00). The two lowest interabstractor values, migraine (0.41) and smoking (0.26), had low reporting frequencies and definition ambiguity between abstractors. Intraclass correlation coefficient values were >0.90 for all but one continuous data type. Missing data rates were <2% for all but near phoria, which was 7.4%. The WatES database includes patients from all age groups and both sexes. It provides a fair representation of optometric patients in Canada. 
Its large sample size, good interabstractor repeatability, and low missing data rates demonstrates sufficient data quality for future analysis.
Effect of aggregate graining compositions on skid resistance of Exposed Aggregate Concrete pavement
NASA Astrophysics Data System (ADS)
Wasilewska, Marta; Gardziejczyk, Wladysław; Gierasimiuk, Pawel
2018-05-01
The paper presents the evaluation of skid resistance of EAC (Exposed Aggregate Concrete) pavements which differ in aggregate graining compositions. The tests were carried out on concrete mixes with a maximum aggregate size of 8 mm. Three types of coarse aggregates were selected depending on their resistance to polishing which was determined on the basis of the PSV (Polished Stone Value). Basalt (PSV 48), gabbro (PSV 50) and trachybasalt (PSV 52) aggregates were chosen. For each type of aggregate three graining compositions were designed, which differed in the content of coarse aggregate > 4mm. Their content for each series was as follows: A - 38%, B - 50% and C - 68%. Evaluation of the skid resistance has been performed using the FAP (Friction After Polishing) test equipment also known as the Wehner/Schulze machine. Laboratory method enables to compare the skid resistance of different types of wearing course under specified conditions simulating polishing processes. In addition, macrotexture measurements were made on the surface of each specimen using the Elatexure laser profile. Analysis of variance showed that at significance level α = 0.05, aggregate graining compositions as well as the PSV have a significant influence on the obtained values of the friction coefficient μm of the tested EAC pavements. The highest values of the μm have been obtained for EAC with the lowest amount of coarse aggregates (compositions A). In these cases the resistance to polishing of the aggregate does not significantly affect the friction coefficients. This is related to the large areas of cement mortar between the exposed coarse grains. Based on the analysis of microscope images, it was observed that the coarse aggregates were not sufficiently exposed. It has been proved that PSV significantly affected the coefficient of friction in the case of compositions B and C. This is caused by large areas of exposed coarse aggregate. 
The best parameters were achieved for the EAC pavements with graining composition B and C and trachybasalt aggregate.
Human papillomavirus DNA testing as an adjunct to cytology in cervical screening programs.
Lörincz, Attila T; Richart, Ralph M
2003-08-01
Our objective was to review current large studies of human papillomavirus (HPV) DNA testing as an adjunct to the Papanicolaou test for cervical cancer screening programs. We analyzed 10 large screening studies that used the Hybrid Capture 2 test and 3 studies that used the polymerase chain reaction test in a manner that enabled reliable estimates of accuracy for detecting or predicting high-grade cervical intraepithelial neoplasia (CIN). Most studies allowed comparison of HPV DNA and Papanicolaou testing and estimates of the performance of Papanicolaou and HPV DNA as combined tests. The studies were selected on the basis of a sufficient number of cases of high-grade CIN and cancer to provide meaningful statistical values. Investigators had to demonstrate the ability to generate reasonably reliable Hybrid Capture 2 or polymerase chain reaction data that were either minimally biased by nature of study design or that permitted analytical techniques for addressing issues of study bias to be applied. Studies had to provide data for the calculation of test sensitivity, specificity, predictive values, odds ratios, relative risks, confidence intervals, and other relevant measures. Final data were abstracted directly from published articles or estimated from descriptive statistics presented in the articles. In some studies, new analyses were performed from raw data supplied by the principal investigators. We concluded that HPV DNA testing was a more sensitive indicator for prevalent high-grade CIN than either conventional or liquid cytology. A combination of HPV DNA and Papanicolaou testing had almost 100% sensitivity and negative predictive value. The specificity of the combined tests was slightly lower than the specificity of the Papanicolaou test alone, but this decrease could potentially be offset by greater protection from neoplastic progression and cost savings available from extended screening intervals. 
One "double-negative" HPV DNA and Papanicolaou test indicated better prognostic assurance against risk of future CIN 3 than 3 subsequent negative conventional Papanicolaou tests and may safely allow 3-year screening intervals for such low-risk women.
Dumitrescu, Sorina
2009-01-01
In order to prove a key result for their development (Lemma 2), Taubman and Thie need the assumption that the upper boundary of the convex hull of the channel coding probability-redundancy characteristic is sufficiently dense. Since no floor value is specified for the density level at which the claim holds, it is not clear whether their lemma applies to practical situations. In this correspondence, we show that the constraint of sufficient density can be removed, and thus we validate the conclusion of the lemma for any scenario encountered in practice.
On the linearity of tracer bias around voids
NASA Astrophysics Data System (ADS)
Pollina, Giorgia; Hamaus, Nico; Dolag, Klaus; Weller, Jochen; Baldi, Marco; Moscardini, Lauro
2017-07-01
The large-scale structure of the Universe can be observed only via luminous tracers of the dark matter. However, the clustering statistics of tracers are biased and depend on various properties, such as their host-halo mass and assembly history. On very large scales, this tracer bias results in a constant offset in the clustering amplitude, known as linear bias. Towards smaller non-linear scales, this is no longer the case and tracer bias becomes a complicated function of scale and time. We focus on tracer bias centred on cosmic voids, i.e. depressions of the density field that spatially dominate the Universe. We consider three types of tracers: galaxies, galaxy clusters and active galactic nuclei, extracted from the hydrodynamical simulation Magneticum Pathfinder. In contrast to common clustering statistics that focus on auto-correlations of tracers, we find that void-tracer cross-correlations are successfully described by a linear bias relation. The tracer-density profile of voids can thus be related to their matter-density profile by a single number. We show that it coincides with the linear tracer bias extracted from the large-scale auto-correlation function and expectations from theory, if sufficiently large voids are considered. For smaller voids we observe a shift towards higher values. This has important consequences for cosmological parameter inference, as the problem of unknown tracer bias is alleviated up to a constant number. The smallest scales in existing data sets become accessible to simpler models, providing numerous modes of the density field that have been disregarded so far, but may help to further reduce statistical errors in constraining cosmology.
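The single-number relation described above, a tracer-density profile equal to a constant bias times the matter-density profile, can be estimated as a least-squares ratio of the two void profiles. A hedged sketch with mock profile values (the data and the function are illustrative, not taken from the paper):

```python
import numpy as np

def linear_void_bias(delta_tracer, delta_matter):
    """Best-fit constant b minimizing |delta_tracer - b * delta_matter|^2
    across radial shells, valid where the void-tracer relation is linear."""
    dt = np.asarray(delta_tracer, float)
    dm = np.asarray(delta_matter, float)
    return np.dot(dm, dt) / np.dot(dm, dm)

# Mock radial density contrasts: if the tracers are linearly biased with
# b = 1.5, the estimator recovers that single number exactly.
dm = np.array([-0.8, -0.5, -0.2, 0.05, 0.0])
dt = 1.5 * dm
b = linear_void_bias(dt, dm)
```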
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Duane L; Pouquet, Dr. Annick; Mininni, Dr. Pablo D.
2015-01-01
We report results on rotating stratified turbulence in the absence of forcing, with large-scale isotropic initial conditions, using direct numerical simulations computed on grids of up to 4096^3 points. The Reynolds and Froude numbers are respectively equal to Re = 5.4 × 10^4 and Fr = 0.0242. The ratio of the Brunt-Väisälä to the inertial wave frequency, N/f, is taken to be equal to 5, a choice appropriate to model the dynamics of the southern abyssal ocean at mid latitudes. This gives a global buoyancy Reynolds number R_B = Re Fr^2 = 32, a value sufficient for some isotropy to be recovered in the small scales beyond the Ozmidov scale, but still moderate enough that the intermediate scales where waves are prevalent are well resolved. We concentrate on the large-scale dynamics and confirm that the Froude number based on a typical vertical length scale is of order unity, with strong gradients in the vertical. Two characteristic scales emerge from this computation, and are identified from sharp variations in the spectral distribution of either total energy or helicity. A spectral break is also observed at a scale at which the partition of energy between the kinetic and potential modes changes abruptly, and beyond which a Kolmogorov-like spectrum recovers. Large slanted layers are ubiquitous in the flow in the velocity and temperature fields, and a large-scale enhancement of energy is also observed, directly attributable to the effect of rotation.
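The quoted global buoyancy Reynolds number follows directly from the stated Re and Fr:

```python
# Global buoyancy Reynolds number R_B = Re * Fr^2, using the values
# quoted in the abstract.
Re = 5.4e4
Fr = 0.0242
R_B = Re * Fr ** 2
print(round(R_B))  # → 32
```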
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
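The C = A + B construction can be sketched as follows: the analytic part A is noise-free, and mocks run with A "turned off" give a direct sample estimate of B, so only B carries sampling noise into the precision matrix. A toy illustration (the matrix sizes, kernel, and 5-dimensional data vector are invented for the example; the paper's series expansion of the precision matrix itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_sims = 5, 400  # toy data-vector length and number of mocks

# Analytic contribution A (e.g. shape noise): known exactly, noise-free.
A = np.eye(p)

# "True" B (e.g. cosmic variance). In practice B is unknown; here it is
# only used to draw the mock realizations.
dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
B_true = 0.5 * np.exp(-dist / 2.0)

# With A switched off in the simulations, the sample covariance of the
# mocks is a direct estimate of B alone.
mocks = rng.multivariate_normal(np.zeros(p), B_true, size=n_sims)
B_hat = np.cov(mocks, rowvar=False)

# Hybrid precision matrix: only the B part carries sampling noise.
precision = np.linalg.inv(A + B_hat)
```

Because the noisy estimate enters only through B, far fewer mocks are needed than for a full sample-covariance estimate of C.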
Innovative contracting methods and construction traffic congestion.
DOT National Transportation Integrated Search
2012-01-01
Increasing travel demand and lack of sufficient highway capacity are serious problems in most major metropolitan areas in the United States. Large metropolitan cities have been experiencing increased traffic congestion problems over the past seve...
EVALUATION OF THE SCATS CONTROL SYSTEM
DOT National Transportation Integrated Search
2008-12-01
Increasing travel demand and lack of sufficient highway capacity are serious problems in most major metropolitan areas in the United States. Large metropolitan areas have been experiencing increased traffic congestion problems over the past several y...
NASA Astrophysics Data System (ADS)
Jones, Michael G.; Haynes, Martha P.; Giovanelli, Riccardo; Moorman, Crystal
2018-06-01
We present the most precise measurement of the z = 0 H I mass function (HIMF) to date based on the final catalogue of the ALFALFA (Arecibo Legacy Fast ALFA) blind H I survey of the nearby Universe. The Schechter function fit has a `knee' mass log(M* h_70^2 / M_⊙) = 9.94 ± 0.01 ± 0.05, a low-mass slope parameter α = -1.25 ± 0.02 ± 0.1, and a normalization φ* = (4.5 ± 0.2 ± 0.8) × 10^-3 h_70^3 Mpc^-3 dex^-1, with both random and systematic uncertainties as quoted. Together these give an estimate of the H I content of the z = 0 Universe as Ω_HI = (3.9 ± 0.1 ± 0.6) × 10^-4 h_70^-1 (corrected for H I self-absorption). Our analysis of the uncertainties indicates that the `knee' mass is a cosmologically fair measurement of the z = 0 value, with its largest uncertainty originating from the absolute flux calibration, but that the low-mass slope is only representative of the local Universe. We also explore large-scale trends in α and M* across the ALFALFA volume. Unlike with the 40 per cent sample, there is now sufficient coverage in both of the survey fields to make an independent determination of the HIMF in each. We find a large discrepancy in the low-mass slope (Δα = 0.14 ± 0.03) between the two regions, and argue that this is likely caused by the presence of a deep void in one field and the Virgo cluster in the other. Furthermore, we find that the value of the `knee' mass within the Local Volume appears to be suppressed by 0.18 ± 0.04 dex compared to the global ALFALFA value, which explains the lower value measured by the shallower H I Parkes All Sky Survey (HIPASS). We discuss possible explanations and interpretations of these results and how they can be expanded on with future surveys.
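The quoted Schechter fit can be evaluated directly; in the per-dex convention, φ(M) = ln(10) φ* (M/M*)^(α+1) exp(-M/M*). A sketch using the ALFALFA best-fitting values (the helper name is ours):

```python
import numpy as np

def schechter_per_dex(log_m, log_mstar=9.94, alpha=-1.25, phi_star=4.5e-3):
    """HI mass function in per-dex form:
    phi(M) = ln(10) * phi_star * (M/M*)**(alpha + 1) * exp(-M/M*)."""
    x = 10.0 ** (np.asarray(log_m, float) - log_mstar)
    return np.log(10.0) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Below the knee the shallow power law dominates; at the knee itself the
# exponential cut-off has already reduced phi by a factor of e.
phi_low = schechter_per_dex(8.0)
phi_knee = schechter_per_dex(9.94)
```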
Examination of snowmelt over Western Himalayas using remote sensing data
NASA Astrophysics Data System (ADS)
Tiwari, Sarita; Kar, Sarat C.; Bhatla, R.
2016-07-01
Snowmelt variability in the Western Himalayas has been examined using remotely sensed snow water equivalent (SWE) and snow-covered area (SCA) datasets. Climatological snowfall and snowmelt amounts vary in the Himalayan region from west to east and from month to month. Maximum snowmelt occurs in the elevation zone between 4500 and 5000 m. As spring and summer approach and snowmelt begins, a large amount of snow melts in May. Strengths and weaknesses of temperature-based snowmelt models have been analyzed for this region by computing the snowmelt factor, or degree-day factor (DDF). The average DDF in the Himalayas is higher in April and lower in July. During spring and summer months, the melting rate is higher in areas above 2500 m. The zone between 4500 and 5000 m elevation contributes the most snowmelt, with the highest melting rate. Snowmelt models have been developed to estimate interannual variations of monthly snowmelt amount using the DDF, observed SWE, and surface air temperature from reanalysis datasets. To further improve the snowmelt estimate, regression between observed and modeled snowmelt has been carried out and revised DDF values have been computed. It is found that both models fail to capture the interannual variability of snowmelt in April. The skill of the models is moderate in May and June, and relatively better in July. To explain this skill, the interannual variability (IAV) of surface air temperature has been examined. Compared to July, the IAV of temperature in April is large, indicating that a climatological value of DDF is not sufficient to explain the snowmelt rate in April. Snow area and snow amount depletion curves over the Himalayas indicate that in a small area at high altitude, snow is still observed with large SWE, whereas over most of the region all the snow has melted.
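A degree-day model of the kind analyzed above reduces to a one-line relation: melt equals the degree-day factor times the positive part of the temperature excess over a base temperature. A minimal sketch (the DDF and temperature values are illustrative, not from the study):

```python
def degree_day_melt(ddf, t_mean, t_base=0.0):
    """Daily melt (mm water equivalent) from a degree-day factor
    ddf (mm w.e. per deg C per day) and the daily mean temperature.
    No melt occurs below the base temperature."""
    return ddf * max(t_mean - t_base, 0.0)

# Illustrative values: DDF of 5 mm/(deg C day) on a +3.2 deg C day.
melt = degree_day_melt(ddf=5.0, t_mean=3.2)  # → 16.0 mm w.e.
```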
Development and In Vitro Toxicity Evaluation of Alternative Sustainable Nanomaterials
Novel nanomaterial types are rapidly being developed for the value they may add to consumer products without sufficient evaluation of implications for human health, toxicity, environmental impact and long-term sustainability. Nanomaterials made of metals, semiconductors and vario...
Structure and biochemical functions of four simian virus 40 truncated large-T antigens.
Chaudry, F; Harvey, R; Smith, A E
1982-01-01
The structure of four abnormal T antigens which are present in different simian virus 40 (SV40)-transformed mouse cell lines was studied by tryptic peptide mapping, partial proteolysis fingerprinting, immunoprecipitation with monoclonal antibodies, and in vitro translation. The results obtained allowed us to deduce that these proteins, which have apparent molecular weights of 15,000, 22,000, 33,000 and 45,000, are truncated forms of large-T antigen extending by different amounts into the amino acid sequences unique to large-T. The proteins are all phosphorylated, probably at a site between amino acids 106 and 123. The mRNAs coding for the proteins probably contain the normal large-T splice but are shorter than the normal transcripts of the SV40 early region. The truncated large-Ts were tested for the ability to bind to double-stranded DNA-cellulose. This showed that the 33,000- and 45,000-molecular-weight polypeptides contained sequences sufficient for binding under the conditions used, whereas the 15,000- and 22,000-molecular-weight forms did not. Together with published data, this allows the tentative mapping of a region of SV40 large-T between amino acids 109 and 272 that is necessary and may be sufficient for binding to double-stranded DNA-cellulose in vitro. None of the truncated large-T species formed a stable complex with the host cell protein referred to as nonviral T-antigen or p53, suggesting that the carboxy-terminal sequences of large-T are necessary for complex formation. PMID:6292504
CSHCN in Texas: meeting the need for specialist care.
Young, M Cherilyn; Drayton, Vonna L C; Menon, Ramdas; Walker, Lesa R; Parker, Colleen M; Cooper, Sam B; Bultman, Linda L
2005-06-01
Assuring the sufficiency and suitability of systems of care and services for children with special health care needs (CSHCN) presents a challenge to Texas providers, agencies, and state Title V programs. To meet the need for specialist care, referrals from primary care doctors are often necessary. The objective of this study was to describe the factors associated with the need for specialist care and problems associated with obtaining referrals in Texas. Bivariate and multivariate analyses were performed using the National Survey of Children with Special Health Care Needs (NS-CSHCN) weighted sample for Texas (n = 719,014) to identify variables associated with the need for specialist care and problems obtaining referrals for specialist care. Medical need of the CSHCN and sensitivity to family values/customs were associated with greater need for specialist care, and Hispanic ethnicity and lower maternal education were associated with less need. Medical need, amount of time spent with doctors and sensitivity to values/customs, living in a large metropolitan statistical area, and lack of medical information were associated with problems obtaining a specialist care referral. Findings revealed some similarities and differences with meeting the need for specialist care when comparing Texas results to other studies. In Texas, aspects of customer satisfaction variables, especially doctors' sensitivity to family values/customs and parents' not receiving enough information on medical problems, were significantly associated with problems obtaining specialist referrals. Findings indicate a need to further research relationships and communication among doctors, CSHCN, and their families.
Zero-Point Energy Leakage in Quantum Thermal Bath Molecular Dynamics Simulations.
Brieuc, Fabien; Bronstein, Yael; Dammak, Hichem; Depondt, Philippe; Finocchi, Fabio; Hayoun, Marc
2016-12-13
The quantum thermal bath (QTB) has been presented as an alternative to path-integral-based methods to introduce nuclear quantum effects in molecular dynamics simulations. The method has proved to be efficient, yielding accurate results for various systems. However, the QTB method is prone to zero-point energy leakage (ZPEL) in highly anharmonic systems. This is a well-known problem in methods based on classical trajectories where part of the energy of the high-frequency modes is transferred to the low-frequency modes leading to a wrong energy distribution. In some cases, the ZPEL can have dramatic consequences on the properties of the system. Thus, we investigate the ZPEL by testing the QTB method on selected systems with increasing complexity in order to study the conditions and the parameters that influence the leakage. We also analyze the consequences of the ZPEL on the structural and vibrational properties of the system. We find that the leakage is particularly dependent on the damping coefficient and that increasing its value can reduce and, in some cases, completely remove the ZPEL. When using sufficiently high values for the damping coefficient, the expected energy distribution among the vibrational modes is ensured. In this case, the QTB method gives very encouraging results. In particular, the structural properties are well-reproduced. The dynamical properties should be regarded with caution although valuable information can still be extracted from the vibrational spectrum, even for large values of the damping term.
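For context, the QTB drives each vibrational mode towards the quantum harmonic-oscillator energy θ(ω, T) = ħω/2 + ħω/(exp(ħω/k_BT) − 1) rather than the classical k_BT; ZPEL is precisely the anharmonic flow of the ħω/2 term from the hard modes into the soft ones. A sketch of this target energy (standard physics of the method, not code from the paper):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def qtb_target_energy(omega, temperature):
    """Energy the quantum thermal bath injects per mode of angular
    frequency omega: zero-point term plus Bose-Einstein thermal term."""
    x = HBAR * omega / (KB * temperature)
    return HBAR * omega * (0.5 + 1.0 / math.expm1(x))

# Low frequencies recover classical equipartition k_B*T; at high
# frequencies the zero-point term hbar*omega/2 dominates, and it is this
# excess over k_B*T that can leak towards the soft modes.
e_soft = qtb_target_energy(1.0e11, 300.0)
e_hard = qtb_target_energy(1.0e15, 300.0)
```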
Setaria viridis as a Model System to Advance Millet Genetics and Genomics
Huang, Pu; Shyu, Christine; Coelho, Carla P.; Cao, Yingying; Brutnell, Thomas P.
2016-01-01
Millet is a common name for a group of polyphyletic, small-seeded cereal crops that include pearl, finger and foxtail millet. Millet species are an important source of calories for many societies, often in developing countries. Compared to major cereal crops such as rice and maize, millets are generally better adapted to dry and hot environments. Despite their food security value, the genetic architecture of agronomically important traits in millets, including both morphological traits and climate resilience remains poorly studied. These complex traits have been challenging to dissect in large part because of the lack of sufficient genetic tools and resources. In this article, we review the phylogenetic relationship among various millet species and discuss the value of a genetic model system for millet research. We propose that a broader adoption of green foxtail (Setaria viridis) as a model system for millets could greatly accelerate the pace of gene discovery in the millets, and summarize available and emerging resources in S. viridis and its domesticated relative S. italica. These resources have value in forward genetics, reverse genetics and high throughput phenotyping. We describe methods and strategies to best utilize these resources to facilitate the genetic dissection of complex traits. We envision that coupling cutting-edge technologies and the use of S. viridis for gene discovery will accelerate genetic research in millets in general. This will enable strategies and provide opportunities to increase productivity, especially in the semi-arid tropics of Asia and Africa where millets are staple food crops. PMID:27965689
NASA Astrophysics Data System (ADS)
Ando, Shin'ichiro; Sato, Katsuhiko
2003-10-01
Resonant spin-flavour (RSF) conversions of supernova neutrinos, which are induced by the interaction between the nonzero neutrino magnetic moment and supernova magnetic fields, are studied for both normal and inverted mass hierarchy. As in the case of the pure matter-induced neutrino oscillation (the Mikheyev–Smirnov–Wolfenstein (MSW) effect), we find that the RSF transitions are strongly dependent on the neutrino mass hierarchy as well as the value of θ13. Flavour conversions are solved numerically for various neutrino parameter sets, with the presupernova profile calculated by Woosley and Weaver. In particular, it is very interesting that the RSF-induced ν_e → ν̄_e transition occurs if the following conditions are all satisfied: the value of μ_ν B (μ_ν is the neutrino magnetic moment and B is the magnetic field strength) is sufficiently large, the neutrino mass hierarchy is inverted, and the value of θ13 is large enough to induce adiabatic MSW resonance. In this case, a strong peak due to the original ν_e emitted from the neutronization burst would exist in the time profile of the neutrino events detected at the Super-Kamiokande detector. If this peak were observed, it would provide fruitful information on the neutrino properties. On the other hand, the characteristics of the neutrino spectra also differ between the neutrino models, but we find that there remains degeneracy among several models. Dependence on presupernova models is also discussed.
Ochs, M.; Davis, J.A.; Olin, M.; Payne, T.E.; Tweed, C.J.; Askarieh, M.M.; Altmann, S.
2006-01-01
For the safe final disposal and/or long-term storage of radioactive wastes, deep or near-surface underground repositories are being considered world-wide. A central safety feature is the prevention, or sufficient retardation, of radionuclide (RN) migration to the biosphere. To this end, radionuclide sorption is one of the most important processes. Decreasing the uncertainty in radionuclide sorption may contribute significantly to reducing the overall uncertainty of a performance assessment (PA). For PA, sorption is typically characterised by distribution coefficients (Kd values). The conditional nature of Kd requires different estimates of this parameter for each set of geochemical conditions of potential relevance in an RN's migration pathway. As it is not feasible to measure sorption for every set of conditions, the derivation of Kd for PA must rely on data derived from representative model systems. As a result, uncertainty in Kd is largely caused by the need to derive values for conditions not explicitly addressed in experiments. The recently concluded NEA Sorption Project [1] showed that thermodynamic sorption models (TSMs) are uniquely suited to derive Kd as a function of conditions, because they allow a direct coupling of sorption with variable solution chemistry and mineralogy in a thermodynamic framework. The results of the project enable assessment of the suitability of various TSM approaches for PA-relevant applications as well as of the potential and limitations of TSMs to model RN sorption in complex systems. © Oldenbourg Wissenschaftsverlag.
Measurement of oxygen tension within mesenchymal stem cell spheroids.
Murphy, Kaitlin C; Hung, Ben P; Browne-Bourne, Stephen; Zhou, Dejie; Yeung, Jessica; Genetos, Damian C; Leach, J Kent
2017-02-01
Spheroids formed of mesenchymal stem cells (MSCs) exhibit increased cell survival and trophic factor secretion compared with dissociated MSCs, making them therapeutically advantageous for cell therapy. Presently, there is no consensus for the mechanism of action. Many hypothesize that spheroid formation potentiates cell function by generating a hypoxic core within spheroids of sufficiently large diameters. The purpose of this study was to experimentally determine whether a hypoxic core is generated in MSC spheroids by measuring oxygen tension in aggregates of increasing diameter and correlating oxygen tension values with cell function. MSC spheroids were formed with 15 000, 30 000 or 60 000 cells per spheroid, resulting in radii of 176 ± 8 µm, 251 ± 12 µm and 353 ± 18 µm, respectively. Oxygen tension values coupled with mathematical modelling revealed a gradient that varied less than 10% from the outer diameter within the largest spheroids. Despite the modest radial variance in oxygen tension, cellular metabolism from spheroids significantly decreased as the number of cells and resultant spheroid size increased. This may be due to adaptive reductions in matrix deposition and packing density with increases in spheroid diameter, enabling spheroids to avoid the formation of a hypoxic core. Overall, these data provide evidence that the enhanced function of MSC spheroids is not oxygen mediated. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Miatto, F. M.; Brougham, T.; Yao, A. M.
2012-07-01
We derive an analytical form of the Schmidt modes of spontaneous parametric down-conversion (SPDC) biphotons in both Cartesian and polar coordinates. We show that these correspond to Hermite-Gauss (HG) or Laguerre-Gauss (LG) modes only for a specific value of their width, and we show how such value depends on the experimental parameters. The Schmidt modes that we explicitly derive allow one to set up an optimised projection basis that maximises the mutual information gained from a joint measurement. The possibility of doing so with LG modes makes it possible to take advantage of the properties of orbital angular momentum eigenmodes. We derive a general entropic entanglement measure using the Rényi entropy as a function of the Schmidt number, K, and then retrieve the von Neumann entropy, S. Using the relation between S and K we show that, for highly entangled states, a non-ideal measurement basis does not degrade the number of shared bits by a large extent. More specifically, given a non-ideal measurement which corresponds to the loss of a fraction of the total number of modes, we can quantify the experimental parameters needed to generate an entangled SPDC state with a sufficiently high dimensionality to retain any given fraction of shared bits.
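The entanglement measures used above are simple functions of the Schmidt weights λ_i (which sum to one): the Schmidt number is K = 1/Σλ_i² and the von Neumann entropy is S = −Σλ_i log₂ λ_i. A sketch (uniform weights give the maximally entangled case, where K = d and S = log₂ d bits):

```python
import numpy as np

def schmidt_number(weights):
    """K = 1 / sum(lambda_i^2) for normalized Schmidt weights lambda_i."""
    lam = np.asarray(weights, float)
    return 1.0 / np.sum(lam ** 2)

def von_neumann_entropy(weights):
    """S = -sum(lambda_i * log2(lambda_i)), in bits."""
    lam = np.asarray(weights, float)
    lam = lam[lam > 0]  # 0 * log(0) contributes nothing
    return -np.sum(lam * np.log2(lam))

# d = 8 equally weighted Schmidt modes: K = 8 and S = 3 bits.
lam = np.full(8, 1.0 / 8.0)
K = schmidt_number(lam)
S = von_neumann_entropy(lam)
```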
NASA Astrophysics Data System (ADS)
Sanchez del Rio, Manuel; Pareschi, Giovanni
2001-01-01
The x-ray reflectivity of a multilayer is a non-linear function of many parameters (materials, layer thicknesses, densities, roughness). Non-linear fitting of experimental data with simulations requires initial values sufficiently close to the optimum value. This is a difficult task when the space topology of the variables is highly structured, as in our case. The application of global optimization methods to fit multilayer reflectivity data is presented. Genetic algorithms are stochastic methods based on the model of natural evolution: the improvement of a population along successive generations. A complete set of initial parameters constitutes an individual. The population is a collection of individuals. Each generation is built from the parent generation by applying some operators (e.g. selection, crossover, mutation) to the members of the parent generation. The pressure of selection drives the population to include 'good' individuals. For a large number of generations, the best individuals will approximate the optimum parameters. Some results on fitting experimental hard x-ray reflectivity data for Ni/C multilayers recorded at the ESRF BM5 are presented. This method could also be applied to help design multilayers optimized for a target application, such as astronomical grazing-incidence hard X-ray telescopes.
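The evolutionary loop described above (selection, crossover, mutation over successive generations) can be sketched generically; the operators, population size, and the quadratic toy "residual" below are our own illustrative choices, not the authors' implementation:

```python
import random

def genetic_minimize(loss, bounds, pop_size=40, generations=60,
                     mutation=0.1, seed=0):
    """Minimal real-coded genetic algorithm: truncation selection,
    blend crossover, Gaussian mutation. Returns the best individual."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=loss)[: pop_size // 2]  # selection pressure
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # crossover
            child = [min(max(x + rng.gauss(0, mutation), lo), hi)  # mutation
                     for x, (lo, hi) in zip(child, bounds)]
            children.append(child)
        pop = elite + children  # elitism keeps the best-so-far
    return min(pop, key=loss)

# Toy "reflectivity fit": recover two parameters (e.g. thickness and
# density, optimum at 3.0 and 0.7) from a quadratic residual.
best = genetic_minimize(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 0.7) ** 2,
                        bounds=[(0.0, 10.0), (0.0, 1.0)])
```

In a real fit the loss would be the misfit between measured and simulated reflectivity curves rather than this toy quadratic.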
Statistical tests to compare motif count exceptionalities
Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent
2007-01-01
Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
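The exact binomial test works by conditioning on the total count: under the Poisson model with expected counts e₁ and e₂, if the motif is equally exceptional in both sequences then n₁ given n₁ + n₂ follows Binomial(n₁ + n₂, e₁/(e₁ + e₂)). A self-contained sketch (function names are ours, and overlap corrections are omitted):

```python
from math import comb

def binomial_pvalue(k, n, p):
    """Two-sided exact binomial p-value: sum the probabilities of all
    outcomes no more likely than the observed count k."""
    probs = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    obs = probs[k]
    return min(1.0, sum(q for q in probs if q <= obs * (1 + 1e-12)))

def compare_motif_counts(n1, n2, e1, e2):
    """Conditionally on n1 + n2, test n1 ~ Binomial(n1 + n2, e1/(e1 + e2)),
    i.e. equal exceptionality of the motif in the two sequences
    (e1, e2 are the expected counts under the sequence models)."""
    return binomial_pvalue(n1, n1 + n2, e1 / (e1 + e2))

# Equal expectations but very unbalanced observed counts: the motif is
# significantly more exceptional in sequence 1.
p = compare_motif_counts(n1=30, n2=5, e1=10.0, e2=10.0)
```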
Caggiano, Alessandra
2018-03-09
Machining of titanium alloys is characterised by extremely rapid tool wear due to the high cutting temperature and the strong adhesion at the tool-chip and tool-workpiece interface, caused by the low thermal conductivity and high chemical reactivity of Ti alloys. With the aim to monitor the tool conditions during dry turning of Ti-6Al-4V alloy, a machine learning procedure based on the acquisition and processing of cutting force, acoustic emission and vibration sensor signals during turning is implemented. A number of sensorial features are extracted from the acquired sensor signals in order to feed machine learning paradigms based on artificial neural networks. To reduce the large dimensionality of the sensorial features, an advanced feature extraction methodology based on Principal Component Analysis (PCA) is proposed. PCA allowed to identify a smaller number of features (k = 2 features), the principal component scores, obtained through linear projection of the original d features into a new space with reduced dimensionality k = 2, sufficient to describe the variance of the data. By feeding artificial neural networks with the PCA features, an accurate diagnosis of tool flank wear (VB_max) was achieved, with predicted values very close to the measured tool wear values.
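The PCA step described above, a linear projection of the d original features onto k = 2 principal-component scores, can be sketched via an SVD of the centred feature matrix (the mock sensor data below are invented for illustration):

```python
import numpy as np

def pca_scores(X, k=2):
    """Project d-dimensional feature vectors onto the top-k principal
    components (the scores), via SVD of the centred data matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T  # (n_samples, k) score matrix

# Mock feature matrix: 100 signal windows x 12 sensorial features that
# actually vary along only 2 latent directions plus small noise, so
# k = 2 scores capture almost all of the variance.
rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 12))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 12))
scores = pca_scores(X, k=2)
```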
NASA Astrophysics Data System (ADS)
Fukuchi, Rina; Yamaguchi, Asuka; Yamamoto, Yuzuru; Ashi, Juichiro
2017-08-01
The paleothermal structure and tectonic evolution of an accretionary prism is basic information for understanding subduction zone seismogenesis. To evaluate the entire paleotemperature profile of the Integrated Ocean Drilling Program (IODP) Site C0002 located in the off-Kumano region of the Nankai Trough and penetrate the inner accretionary wedge down to 3058.5 m below the seafloor (mbsf), we performed a vitrinite reflectance analysis for cuttings and core samples during IODP expeditions 338 and 348: Nankai Trough seismogenic zone experiment. Although vitrinite reflectance values (Ro) tend to increase with depth, two reversals of these values suggested the existence of thrust fault zones with sufficient displacements to offset the paleothermal structure. The estimated maximum paleotemperatures are 42-70°C at 1200-1300 mbsf, 44-100°C at 1600-2400 mbsf, and 56-115°C at 2600-3000 mbsf, respectively. These temperatures roughly coincide with estimated modern temperatures; however, at a smaller scale, the reconstructed partial paleogeothermal gradient (~60-150°C/km) recorded at the hanging- and footwall of the presumed thrust fault zone is higher than the modern geothermal gradient (~30-40°C/km). This high paleogeothermal gradient was possibly obtained prior to subduction, reflecting the large heat flow of the young Philippine Sea Plate.
Self-sufficiency, free trade and safety.
Rautonen, Jukka
2010-01-01
The relationship between free trade, self-sufficiency and safety of blood and blood components has been a perennial discussion topic in the blood service community. Traditionally, national self-sufficiency has been perceived as the ultimate goal that would also maximize safety. However, very few countries are, or can be, truly self-sufficient when self-sufficiency is understood correctly to encompass the whole value chain from the blood donor to the finished product. This is most striking when plasma derived medicines are considered. Free trade of blood products, or competition, as such can have a negative or positive effect on blood safety. Further, free trade of equipment and reagents and several plasma medicines is actually necessary to meet the domestic demand for blood and blood derivatives in most countries. Opposing free trade due to dogmatic reasons is not in the best interest of any country and will be especially harmful for the developing world. Competition between blood services in the USA has been present for decades. The more than threefold differences in blood product prices between European blood services indicate that competition is long overdue in Europe, too. This competition should be welcomed but carefully and proactively regulated to avoid putting safe and secure blood supply at risk. Copyright 2009 The International Association for Biologicals. Published by Elsevier Ltd. All rights reserved.
Recovery of aluminum and other metal values from fly ash
McDowell, William J.; Seeley, Forest G.
1981-01-01
The invention described herein relates to a method for improving the acid leachability of aluminum and other metal values found in fly ash which comprises sintering the fly ash, prior to acid leaching, with a calcium sulfate-containing composition at a temperature at which the calcium sulfate is retained in said composition during sintering and for a time sufficient to quantitatively convert the aluminum in said fly ash into an acid-leachable form.
Numerical Grid Generation and Potential Airfoil Analysis and Design
1988-01-01
Gauss-Seidel, SOR and ADI iterative methods. JACOBI METHOD: In the Jacobi method each new value of a function is computed entirely from old values... preceding iteration and adding the inhomogeneous (boundary condition) term. GAUSS-SEIDEL METHOD: When we compute in a Jacobi method, we have already... Gauss-Seidel method. A sufficient condition for convergence of the Gauss-Seidel method is diagonal dominance of [A]. SUCCESSIVE OVER-RELAXATION (SOR
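The Jacobi and Gauss-Seidel methods named in this fragment can be illustrated on a small, made-up diagonally dominant system, for which both iterations are guaranteed to converge; the matrix and iteration counts below are arbitrary examples:

```python
import numpy as np

# A small diagonally dominant system A x = b (illustrative example).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([15.0, 10.0, 10.0])

def jacobi(A, b, iters=50):
    # Each new component is computed entirely from the previous sweep.
    x = np.zeros_like(b)
    D = np.diag(A)              # diagonal entries
    R = A - np.diagflat(D)      # off-diagonal part
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def gauss_seidel(A, b, iters=50):
    # Each new component immediately reuses values already updated
    # in the current sweep, typically converging faster than Jacobi.
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
    return x

exact = np.linalg.solve(A, b)
print(jacobi(A, b), gauss_seidel(A, b), exact)
```

Diagonal dominance of [A] is the sufficient convergence condition the fragment mentions; both solvers reach the exact solution here.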
Recovery of aluminum and other metal values from fly ash
McDowell, W.J.; Seeley, F.G.
1979-11-01
The invention relates to a method for improving the acid leachability of aluminum and other metal values found in fly ash which comprises sintering the fly ash, prior to acid leaching, with a calcium sulfate-containing composition at a temperature at which the calcium sulfate is retained in said composition during sintering and for a time sufficient to quantitatively convert the aluminum in said fly ash into an acid-leachable form.
ERIC Educational Resources Information Center
Colman, Rosalie Marson; And Others
The Connecticut Haitian American community has recently become large enough and sufficiently well established to develop programs to assist economic and educational development in the Republic of Haiti. Southern Connecticut became a destination for large numbers of Haitian emigrants and political refugees in the 1950s, in 1964, and again in 1971…
Ethnic use of the Tonto: geographic extension of the recreation knowledge base
Denver Hospodarsky; Martha Lee
1995-01-01
The recreational use of the Tonto National Forest, Arizona was investigated by using data on ethnic and racial subgroups. The Tonto is a Class 1 urban proximate forest adjoining the large, culturally diverse population of Phoenix. An on-site survey of 524 recreating groups found sufficiently large numbers of Anglos (n=425) and Hispanics (n=82) who participated in...
Is There a Maximum Size of Water Drops in Nature?
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2013-01-01
In nature, water drops can have a large variety of sizes and shapes. Small droplets with diameters of the order of 5 to 10 µm are present in fog and clouds. This is not sufficiently large for gravity to dominate their behavior. In contrast, raindrops typically have sizes of the order of 1 mm, with observed maximum sizes in nature of around 5 mm in…
Observation of Planetary Motion Using a Digital Camera
ERIC Educational Resources Information Center
Meyn, Jan-Peter
2008-01-01
A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8ᵐ apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…
Static versus dynamic sampling for data mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
John, G.H.; Langley, P.
1996-12-31
As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
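The dynamic scheme described above, growing the training sample until the mining tool's performance stops improving, can be sketched as follows; the synthetic "database", the nearest-centroid stand-in for the mining tool, and the doubling-with-tolerance stopping rule are all illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "large database": two Gaussian classes in 2-D.
N = 20000
y_all = rng.integers(0, 2, size=N)
X_all = rng.normal(size=(N, 2)) + y_all[:, None] * 2.0

def accuracy(n_train):
    # Train a nearest-centroid "mining tool" on the first n_train rows
    # and evaluate it on a fixed held-out slice of the database.
    Xtr, ytr = X_all[:n_train], y_all[:n_train]
    Xte, yte = X_all[-2000:], y_all[-2000:]
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

# Dynamic sampling: double the sample until held-out performance stops
# improving by more than a tolerance; the sample is then taken to be
# "probably close enough" for this particular mining tool.
n, tol = 100, 0.002
prev = accuracy(n)
while n < N // 2:
    n *= 2
    cur = accuracy(n)
    if cur - prev <= tol:
        break
    prev = cur
print(n, round(prev, 3))
```

The key point is that the stopping decision observes the mining tool itself, rather than a static statistical comparison between sample and database.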
An analysis of the lithology to resistivity relationships using airborne EM and boreholes
NASA Astrophysics Data System (ADS)
Barfod, Adrian A. S.; Christiansen, Anders V.; Møller, Ingelise
2014-05-01
We present a study of the relationship between dense airborne SkyTEM resistivity data and sparse lithological borehole data. Understanding the geological structures of the subsurface is of great importance to hydrogeological surveys. Large-scale geological information can be gathered directly from boreholes or indirectly from large geophysical surveys. Borehole data provide detailed lithological information only at the position of the borehole and, due to the sparse nature of boreholes, they rarely provide sufficient information for high-accuracy groundwater models. Airborne geophysical data, on the other hand, provide dense spatial coverage, but bear only indirect information on lithology through the resistivity models. Hitherto, the integration of geophysical data into geological and hydrogeological models has often been subjective, largely undocumented and painstakingly manual. This project presents a detailed study of the relationships between resistivity data and lithological borehole data. The purpose is to objectively describe the relationships between lithology and geophysical parameters and to document these relationships. This project has focused on utilizing preexisting datasets from the Danish national borehole database (JUPITER) and national geophysical database (GERDA). The study presented here is from the Norsminde catchment area (208 sq. km), situated in the municipality of Odder, Denmark. The Norsminde area contains a total of 758 boreholes and 106,770 SkyTEM soundings. The large amounts of data make the Norsminde area ideal for studying the relationship between geophysical data and lithological data. The subsurface is discretized into 20 cm horizontal sampling intervals from the highest elevation point to the depth of the deepest borehole. For each of these intervals a resistivity value is calculated at the position of the boreholes using a kriging formulation.
The lithology data from the boreholes are then used to categorize the interpolated resistivity values according to lithology. The end result of this comparison is resistivity distributions for different lithology categories. The distributions provide detailed objective information on the resistivity properties of the subsurface and document the resistivity imaging of the geological lithologies. We show that different lithologies are mapped at distinctively different resistivities, but also that the geophysical inversion strategies influence the resulting distributions significantly.
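The categorization step, pairing each interpolated resistivity value with the logged lithology at the same depth interval and summarising values per lithology class, can be sketched as follows; the lithology classes, log-normal parameters and sample counts are synthetic stand-ins, not JUPITER/GERDA data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for kriged resistivity values (ohm-m) at 20 cm
# depth intervals, each labelled with the borehole lithology.
n = 5000
lith = rng.choice(["clay", "sand"], size=n, p=[0.6, 0.4])
# Assumed behaviour: clays conductive, sands more resistive (log-normal).
res = np.where(lith == "clay",
               rng.lognormal(mean=np.log(20.0), sigma=0.3, size=n),
               rng.lognormal(mean=np.log(80.0), sigma=0.3, size=n))

# Resistivity distribution summary per lithology category.
for cat in ("clay", "sand"):
    vals = res[lith == cat]
    print(cat, len(vals), round(np.median(vals), 1))
```

In the real workflow the histograms of `vals` per category are the documented lithology-resistivity relationships; here the medians simply separate the two synthetic classes.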
The character of scaling earthquake source spectra for Kamchatka in the 3.5-6.5 magnitude range
NASA Astrophysics Data System (ADS)
Gusev, A. A.; Guseva, E. M.
2017-02-01
The properties of the source spectra of local shallow-focus earthquakes on Kamchatka in the magnitude range Mw = 3.5-6.5 are studied using 460 records of S-waves obtained at the PET station. The family of average source spectra is constructed; the spectra are used to study the relationship between Mw and the key quasi-dimensionless source parameters: stress drop Δσ and apparent stress σa. It is found that the parameter Δσ is almost stable, while σa grows steadily as the magnitude Mw increases, indicating that the similarity is violated. It is known that at sufficiently large Mw the similarity hypothesis is approximately valid: both parameters Δσ and σa do not show any noticeable magnitude dependence. It has been established that Mw ≈ 5.7 is the threshold magnitude at which the described change of regime occurs for the conditions on Kamchatka.
Gaussian mixed model in support of semiglobal matching leveraged by ground control points
NASA Astrophysics Data System (ADS)
Ma, Hao; Zheng, Shunyi; Li, Chang; Li, Yingsong; Gui, Li
2017-04-01
Semiglobal matching (SGM) has been widely applied to large aerial images because of its good tradeoff between complexity and robustness. The concept of ground control points (GCPs) is adopted to make SGM more robust. We model the effect of GCPs as two data terms for stereo matching between high-resolution aerial epipolar images in an iterative scheme. One term based on GCPs is formulated by a Gaussian mixture model, which strengthens the relation between GCPs and the pixels to be estimated and encodes some degree of consistency between them with respect to disparity values. Another term depends on pixel-wise confidence, and we further design a confidence updating equation based on three rules. With this confidence-based term, the assignment of disparity can be heuristically selected among disparity search ranges during the iteration process. Several iterations are sufficient to bring out satisfactory results according to our experiments. Experimental results validate that the proposed method outperforms surface reconstruction, which is a representative variant of SGM and behaves excellently on aerial images.
Solar coronal loop heating by cross-field wave transport
NASA Technical Reports Server (NTRS)
Amendt, Peter; Benford, Gregory
1989-01-01
Solar coronal arches heated by turbulent ion-cyclotron waves may suffer significant cross-field transport by these waves. Nonlinear processes fix the wave-propagation speed at about a tenth of the ion thermal velocity, which seems sufficient to spread heat from a central core into a large cool surrounding cocoon. Waves heat cocoon ions both through classical ion-electron collisions and by turbulent stochastic ion motions. Plausible cocoon sizes set by wave damping are of the order of kilometers, although the wave-emitting core may be only 100 m wide. Detailed study of nonlinear stabilization and energy-deposition rates predicts that nearby regions can heat to temperatures intermediate between the foot-point temperatures of roughly one electron volt and the core temperature of about 100 eV, the core being heated by anomalous Ohmic losses. A volume of 100 times the core volume may be affected. This qualitative result may solve a persistent problem with current-driven coronal heating: that it affects only small volumes and provides no way to produce the extended warm structures perceptible to existing instruments.
Kinetics of transient electroluminescence in organic light emitting diodes
NASA Astrophysics Data System (ADS)
Shukla, Manju; Kumar, Pankaj; Chand, Suresh; Brahme, Nameeta; Kher, R. S.; Khokhar, M. S. K.
2008-08-01
Mathematical simulation of the rise and decay kinetics of transient electroluminescence (EL) in organic light emitting diodes (OLEDs) is presented. The transient EL is studied with respect to a step voltage pulse. During the rise, at early times, the EL intensity shows a quadratic dependence on (t - tdel), where tdel is the time delay observed in the onset of EL, and finally attains saturation at sufficiently large time. When the applied voltage is switched off, the initial EL decay shows an exponential dependence on (t - tdec), where tdec is the time at which the voltage is switched off. The simulated results are compared with the transient EL performance of a bilayer OLED based on small molecular bis(2-methyl 8-hydroxyquinoline)(triphenyl siloxy) aluminium (SAlq). Transient EL studies have been carried out at different voltage pulse amplitudes. The simulated results show good agreement with experimental data. Using these simulated results the lifetime of the excitons in SAlq has also been calculated.
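The qualitative behaviour described, a quadratic rise in (t - tdel) at early times, saturation at large times, and an exponential decay in (t - tdec) after switch-off, can be reproduced with a toy intensity curve; the functional form and the time constants below are illustrative assumptions, not the authors' fitted SAlq parameters:

```python
import numpy as np

tdel, tdec, tau = 1.0, 10.0, 2.0   # hypothetical delay, switch-off time, lifetime
t = np.linspace(0.0, 20.0, 2001)

def el_intensity(t):
    # Rise: 1 - exp(-((t - tdel)/tau)^2) behaves like ((t - tdel)/tau)^2
    # for small (t - tdel) and saturates at 1 for large t.
    rise = np.where(t > tdel, 1.0 - np.exp(-((t - tdel) / tau) ** 2), 0.0)
    # Decay: exponential in (t - tdec) once the voltage is switched off.
    decay = np.where(t > tdec, np.exp(-(t - tdec) / tau), 1.0)
    return rise * decay

I = el_intensity(t)
```

Expanding the exponential shows the early-time quadratic dependence directly, while for t well beyond tdec the curve is a pure exponential whose slope on a log plot gives the assumed lifetime tau.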
Brownian motion of non-wetting droplets held on a flat solid by gravity
NASA Astrophysics Data System (ADS)
Pomeau, Yves
2013-12-01
At equilibrium, a small liquid droplet standing on a (dry) horizontal solid surface that it does not wet rests on this surface on a small disc of contact. As predicted and observed, if such a droplet is in a low-viscosity vapor the main source of drag for motion along the surface is the viscous dissipation in the liquid near the disc of contact. This dissipation is minimized by a Huygens-like motion coupling rolling and translation in such a way that the fluid near the disc of contact is almost motionless with respect to the solid. Because of this reduced drag and the associated large mobility, the coefficient of Brownian diffusion is much larger than its standard Stokes-Einstein value. This holds provided the weight of the droplet is sufficient to keep it on the solid, instead of it being lifted by thermal noise. The coupling between translation along the surface and rotation could be measured through correlated random angular deviations and horizontal displacements in this Brownian motion.
Erol, Özge Ö; Erdoğan, Behice Y; Onar, Atiye N
2017-03-01
Simultaneous determination of nitrate and nitrite in gunshot residue has been conducted by capillary electrophoresis using an acidic run buffer (pH 3.5). In previously developed capillary electrophoretic methods, alkaline pH separation buffers were used, in which nitrite and nitrate possess similar electrophoretic mobility. In this study, the electroosmotic flow has been reversed by using a low-pH running buffer without any additives. As a result of reversing the electroosmotic flow, very fast analysis is achieved: well-defined and separated ion peaks emerge in less than 4 min. Besides, the limit of detection was improved by employing large-volume sample stacking. Limit of detection values were 6.7 and 4.3 μM for nitrate and nitrite, respectively. In the traditional procedure, mechanical agitation is employed for extraction, whereas in this work ultrasound mixing for 30 min was found to provide sufficient extraction efficiency. The proposed method was successfully applied to authentic gunshot residue samples. © 2016 American Academy of Forensic Sciences.
Personality and gender differences in global perspective.
Schmitt, David P; Long, Audrey E; McPhearson, Allante; O'Brien, Kirby; Remmert, Brooke; Shah, Seema H
2017-12-01
Men's and women's personalities appear to differ in several respects. Social role theories of development assume gender differences result primarily from perceived gender roles, gender socialization and sociostructural power differentials. As a consequence, social role theorists expect gender differences in personality to be smaller in cultures with more gender egalitarianism. Several large cross-cultural studies have generated sufficient data for evaluating these global personality predictions. Empirically, evidence suggests gender differences in most aspects of personality (Big Five traits, Dark Triad traits, self-esteem, subjective well-being, depression and values) are conspicuously larger in cultures with more egalitarian gender roles, gender socialization and sociopolitical gender equity. Similar patterns are evident when examining objectively measured attributes such as tested cognitive abilities and physical traits such as height and blood pressure. Social role theory appears inadequate for explaining some of the observed cultural variations in men's and women's personalities. Evolutionary theories regarding ecologically-evoked gender differences are described that may prove more useful in explaining global variation in human personality. © 2016 International Union of Psychological Science.
Bootstrapping the (A1, A2) Argyres-Douglas theory
NASA Astrophysics Data System (ADS)
Cornagliotto, Martina; Lemos, Madalena; Liendo, Pedro
2018-03-01
We apply bootstrap techniques in order to constrain the CFT data of the ( A 1 , A 2) Argyres-Douglas theory, which is arguably the simplest of the Argyres-Douglas models. We study the four-point function of its single Coulomb branch chiral ring generator and put numerical bounds on the low-lying spectrum of the theory. Of particular interest is an infinite family of semi-short multiplets labeled by the spin ℓ. Although the conformal dimensions of these multiplets are protected, their three-point functions are not. Using the numerical bootstrap we impose rigorous upper and lower bounds on their values for spins up to ℓ = 20. Through a recently obtained inversion formula, we also estimate them for sufficiently large ℓ, and the comparison of both approaches shows consistent results. We also give a rigorous numerical range for the OPE coefficient of the next operator in the chiral ring, and estimates for the dimension of the first R-symmetry neutral non-protected multiplet for small spin.
Arsenic Incorporation Into Authigenic Pyrite, Bengal Basin Sediment, Bangladesh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lowers, H.A.; Breit, G.N.; Foster, A.L.
2007-07-10
Sediment from two deep boreholes (~400 m) approximately 90 km apart in southern Bangladesh was analyzed by X-ray absorption spectroscopy (XAS), total chemical analyses, chemical extractions, and electron probe microanalysis to establish the importance of authigenic pyrite as a sink for arsenic in the Bengal Basin. Authigenic framboidal and massive pyrite (median values 1500 and 3200 ppm As, respectively) is the principal arsenic residence in sediment from both boreholes. Although pyrite is dominant, ferric oxyhydroxides and secondary iron phases contain a large fraction of the sediment-bound arsenic between approximately 20 and 100 m, which is the depth range of wells containing the greatest amount of dissolved arsenic. The lack of pyrite in this interval is attributed to rapid sediment deposition and a low sulfur flux from riverine and atmospheric sources. The ability of deeper aquifers (>150 m) to produce ground water with low dissolved arsenic in southern Bangladesh reflects adequate sulfur supplies and sufficient time to redistribute the arsenic into pyrite during diagenesis.
Arsenic incorporation into authigenic pyrite, Bengal Basin sediment, Bangladesh
Lowers, H.A.; Breit, G.N.; Foster, A.L.; Whitney, J.; Yount, J.; Uddin, Md. N.; Muneem, Ad. A.
2007-01-01
Sediment from two deep boreholes (~400 m) approximately 90 km apart in southern Bangladesh was analyzed by X-ray absorption spectroscopy (XAS), total chemical analyses, chemical extractions, and electron probe microanalysis to establish the importance of authigenic pyrite as a sink for arsenic in the Bengal Basin. Authigenic framboidal and massive pyrite (median values 1500 and 3200 ppm As, respectively) is the principal arsenic residence in sediment from both boreholes. Although pyrite is dominant, ferric oxyhydroxides and secondary iron phases contain a large fraction of the sediment-bound arsenic between approximately 20 and 100 m, which is the depth range of wells containing the greatest amount of dissolved arsenic. The lack of pyrite in this interval is attributed to rapid sediment deposition and a low sulfur flux from riverine and atmospheric sources. The ability of deeper aquifers (>150 m) to produce ground water with low dissolved arsenic in southern Bangladesh reflects adequate sulfur supplies and sufficient time to redistribute the arsenic into pyrite during diagenesis.
Symmetry Breaking and Restoration in the Ginzburg-Landau Model of Nematic Liquid Crystals
NASA Astrophysics Data System (ADS)
Clerc, Marcel G.; Kowalczyk, Michał; Smyrnelis, Panayotis
2018-06-01
In this paper we study qualitative properties of global minimizers of the Ginzburg-Landau energy which describes light-matter interaction in the theory of nematic liquid crystals near the Fréedericksz transition. This model depends on two parameters: ɛ > 0, which is small and represents the coherence scale of the system, and a ≥ 0, which represents the intensity of the applied laser light. In particular, we are interested in the phenomenon of symmetry breaking as a and ɛ vary. We show that when a = 0 the global minimizer is radially symmetric and unique, and that its symmetry is instantly broken as a > 0 and then restored for sufficiently large values of a. Symmetry breaking is associated with the presence of a new type of topological defect which we named the shadow vortex. The symmetry breaking scenario is a rigorous confirmation of experimental and numerical results obtained earlier in Barboza et al. (Phys Rev E 93(5):050201, 2016).
Contribution of Surface Thermal Forcing to Mixing in the Ocean
NASA Astrophysics Data System (ADS)
Wang, Fei; Huang, Shi-Di; Xia, Ke-Qing
2018-02-01
A critical ingredient of the meridional overturning circulation (MOC) is vertical mixing, which causes dense waters in the deep sea to rise throughout the stratified interior to the upper ocean. Here, we report a laboratory study aimed at understanding the contributions from surface thermal forcing (STF) to this mixing process. Our study reveals that the ratio of the thermocline thickness to the fluid depth largely determines the mixing rate and the mixing efficiency in an overturning flow driven by STF. By applying this finding to a hypothetical MOC driven purely by STF, we obtain a mixing rate of O(10⁻⁶ m²/s) and a corresponding meridional heat flux of O(10⁻² petawatt, PW), which are far smaller than the values found for real oceans. These results provide quantitative support for the notion that STF alone is not sufficient to drive the MOC, which essentially acts as a heat conveyor belt powered by other energy sources.