Infrared length scale and extrapolations for the no-core shell model
Wendt, K. A.; Forssén, C.; Papenbrock, T.; Sääf, D.
2015-06-03
In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil S.
2006-01-01
A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.
Choi, Yoonseuk; Rosenblatt, Charles
2010-05-01
A herringbone "easy axis" pattern is scribed into a polyimide alignment layer for liquid-crystal orientation using the stylus of an atomic force microscope. Owing to the liquid crystal's bend elasticity K33, the nematic director is unable to follow the sharp turn in the scribed easy axis, but instead relaxes over an extrapolation length L = K33/W2φ, where W2φ is the quadratic azimuthal anchoring strength coefficient. By immersing a tapered optical fiber into the liquid crystal, illuminating the fiber with polarized light, and scanning the fiber close to the substrate, a visualization and direct measurement of L are obtained on approaching the nematic-smectic-A phase transition temperature T_NA from above. L is found to exhibit a sharp pretransitional increase near T_NA, consistent with a diverging bend elastic constant. PMID:20866248
Richardson Extrapolation using DNAD
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Sezer, Hayri; Pakalapati, Suryanarayana R.; WVU-CFD Team
2013-11-01
Dual Number Automatic Differentiation (DNAD) is a technique whereby a computer code can be executed with additional variable declarations that extend real numbers to a two-dimensional space, which is then used to evaluate derivatives to machine accuracy. In the literature this technique is usually applied to study sensitivities of calculations to model parameters, but not to the mesh size. The current study explores the possibility of using the same technique to evaluate the derivative of the numerical solution with respect to mesh size, which in turn can be used in the Taylor series expansion of the discretization error to calculate the error itself by way of extrapolation. Thus the new method enables explicit Richardson extrapolation using only one set of calculations on a single grid. The extrapolation can be improved if an additional set of calculations is performed on a finer or a coarser mesh. The concept is demonstrated using one-dimensional example problems. Possible extension to multiple dimensions is discussed.
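The dual-number idea underlying DNAD can be sketched in a few lines: each value carries a pair (real part, derivative part), and overloaded arithmetic propagates the derivative exactly, to machine accuracy. This is a minimal illustration of the technique, not the paper's Fortran implementation; the class and function names are invented for the example.

```python
class Dual:
    """A number together with its derivative with respect to one input."""
    def __init__(self, real, deriv=0.0):
        self.real, self.deriv = real, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.real + other.real, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.real * other.real,
                    self.deriv * other.real + self.real * other.deriv)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx by seeding the dual part of the input with 1."""
    return f(Dual(x, 1.0)).deriv

# d/dx (x*x + 3*x) at x = 2 is 2x + 3 = 7, recovered exactly
print(derivative(lambda x: x * x + 3 * x, 2.0))  # 7.0
```

In the paper's setting the seeded input would be the mesh size rather than a model parameter, so the dual part carries the derivative of the discrete solution with respect to h.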
Biosimilars: Extrapolation for oncology.
Curigliano, Giuseppe; O'Connor, Darran P; Rosenberg, Julie A; Jacobs, Ira
2016-08-01
A biosimilar is a biologic that is highly similar to a licensed biologic (the reference product) in terms of purity, safety and efficacy. If the reference product is licensed to treat multiple therapeutic indications, extrapolation of indications, i.e., approval of a biosimilar for use in an indication held by the reference product but not directly studied in a comparative clinical trial with the biosimilar, may be possible but has to be scientifically justified. Here, we describe the data required to establish biosimilarity and emphasize that indication extrapolation is based on scientific principles and known mechanism of action. PMID:27354233
Ecotoxicological effects extrapolation models
Suter, G.W. II
1996-09-01
One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.
Evidence, eminence and extrapolation.
Hlavin, Gerald; Koenig, Franz; Male, Christoph; Posch, Martin; Bauer, Peter
2016-06-15
A full independent drug development programme to demonstrate efficacy may not be ethical and/or feasible in small populations such as paediatric populations or orphan indications. Different levels of extrapolation from a larger population to smaller target populations are widely used for supporting decisions in this situation. There are guidance documents in drug regulation, where a weakening of the statistical rigour for trials in the target population is mentioned to be an option for dealing with this problem. To this end, we propose clinical trials designs, which make use of prior knowledge on efficacy for inference. We formulate a framework based on prior beliefs in order to investigate when the significance level for the test of the primary endpoint in confirmatory trials can be relaxed (and thus the sample size can be reduced) in the target population while controlling a certain posterior belief in effectiveness after rejection of the null hypothesis in the corresponding confirmatory statistical test. We show that point-priors may be used in the argumentation because under certain constraints, they have favourable limiting properties among other types of priors. The crucial quantity to be elicited is the prior belief in the possibility of extrapolation from a larger population to the target population. We try to illustrate an existing decision tree for extrapolation to paediatric populations within our framework. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd. PMID:26753552
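The core calculation can be made concrete with a point prior, which the abstract singles out for its favourable limiting properties. With prior belief pi in effectiveness, power 1-beta in the target population, and one-sided level alpha, Bayes' theorem gives the posterior belief in effectiveness after rejection, and inverting it shows how far alpha can be relaxed while keeping that posterior above a floor. This is a hedged sketch of the framework's arithmetic; the numbers are illustrative, not from the paper.

```python
def posterior_after_rejection(prior, alpha, power):
    """P(effective | significant result) for a point prior, via Bayes."""
    return prior * power / (prior * power + (1 - prior) * alpha)

def max_alpha(prior, power, posterior_floor):
    # invert the Bayes formula for the largest alpha meeting the floor
    return prior * power * (1 - posterior_floor) / ((1 - prior) * posterior_floor)

# strong prior belief (0.9) carried over from adult data, 80% power,
# and a required posterior belief of at least 0.95 after rejection:
a = max_alpha(0.9, 0.8, 0.95)
print(round(a, 4))                                      # 0.3789
print(round(posterior_after_rejection(0.9, a, 0.8), 2)) # 0.95
```

A relaxed alpha of roughly 0.38 (versus the conventional one-sided 0.025) illustrates how strongly extrapolated prior evidence can reduce the statistical burden, and hence the sample size, in the target population.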
NASA Astrophysics Data System (ADS)
Van Zandt, James R.
2012-05-01
Steady-state performance of a tracking filter is traditionally evaluated immediately after a track update. However, there is commonly a further delay (e.g., processing and communications latency) before the tracks can actually be used. We analyze the accuracy of extrapolated target tracks for four tracking filters: the Kalman filter with the Singer maneuver model and worst-case correlation time, with piecewise constant white acceleration, and with continuous white acceleration, and the reduced state filter proposed by Mookerjee and Reifler [1, 2]. Performance evaluation of a tracking filter is significantly simplified by appropriate normalization. For the Kalman filter with the Singer maneuver model, the steady-state RMS error immediately after an update depends on only two dimensionless parameters [3]. By assuming a worst-case value of target acceleration correlation time, we reduce this to a single parameter without significantly changing the filter performance (within a few percent for air tracking) [4]. With this simplification, we find for all four filters that the RMS errors for the extrapolated state are functions of only two dimensionless parameters. We provide simple analytic approximations in each case.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
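A drastically simplified instance of the stream-learning setting conveys the shape of the problem: hypothesize a generating rule that explains the observed prefix, then use it to extrapolate the stream. The two-rule bias below (arithmetic or geometric steps) is far weaker than the paper's elementary stream descriptions over constructive datatypes, and is offered only as a toy sketch.

```python
def hypothesize(prefix):
    """Return a successor rule explaining the prefix, or None."""
    d = prefix[1] - prefix[0]
    if all(b - a == d for a, b in zip(prefix, prefix[1:])):
        return lambda v: v + d           # arithmetic rule
    r = prefix[1] / prefix[0]
    if all(abs(b - a * r) < 1e-9 for a, b in zip(prefix, prefix[1:])):
        return lambda v: v * r           # geometric rule
    return None                          # no hypothesis in this tiny bias

def extrapolate(prefix, n):
    """Extend the stream n steps using the hypothesized rule."""
    step, out = hypothesize(prefix), list(prefix)
    for _ in range(n):
        out.append(step(out[-1]))
    return out

print(extrapolate([2, 5, 8], 3))   # [2, 5, 8, 11, 14, 17]
```

The paper's algorithm replaces this fixed two-rule bias with a learned description language rich enough to cover integers, strings, and trees, and weighs competing hypotheses with a Bayesian confidence measure.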
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
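The first of the five methods, the scalar epsilon algorithm (SEA), is compact enough to sketch directly: Wynn's epsilon recursion builds a table from the sequence, and the even-order columns accelerate convergence. For partial sums of a geometric series the even columns reproduce the limit exactly, since SEA then coincides with the Shanks transform. A minimal sketch:

```python
def wynn_epsilon(s):
    """Top of the epsilon table for an odd-length scalar sequence s."""
    prev = [0.0] * (len(s) + 1)   # the artificial epsilon_{-1} column of zeros
    curr = list(s)                # epsilon_0: the sequence itself
    while len(curr) > 1:
        # epsilon_{k+1}(n) = epsilon_{k-1}(n+1) + 1/(epsilon_k(n+1) - epsilon_k(n))
        nxt = [prev[i + 1] + 1.0 / (curr[i + 1] - curr[i])
               for i in range(len(curr) - 1)]
        prev, curr = curr, nxt    # only even-order columns approximate the limit
    return curr[0]

# partial sums of 1 + 1/2 + 1/4: a geometric series with limit 2
print(wynn_epsilon([1.0, 1.5, 1.75]))  # 2.0
```

The vector methods (VEA, TEA, MPE, RRE) generalize this idea to sequences of vectors, where the reciprocal must be replaced by a suitable vector inverse or a least-squares construction.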
Infrared extrapolations for atomic nuclei
Furnstahl, R. J.; Hagen, Gaute; Papenbrock, Thomas F.; Wendt, Kyle A.
2015-01-01
Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared (IR) scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon–nucleon interactions at next-to-next-to-leading order and show that the IR component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies, well above the energy minimum, the ultraviolet corrections can be suppressed while IR extrapolations over tens of MeVs are accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed.
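IR extrapolations of this kind rest on the exponential form E(L) ≈ E_inf + a·exp(-2·k_inf·L). Because energies at equally spaced L then form a geometric sequence, three of them determine E_inf in closed form via an Aitken-style three-point formula. The sketch below exercises that formula on synthetic numbers chosen only for illustration; fitting real calculations would of course use many L values and report uncertainties.

```python
import math

def ir_extrapolate(e1, e2, e3):
    """Energies at L, L+d, L+2d; exact when E(L) = E_inf + a*exp(-2kL)."""
    return e3 - (e3 - e2) ** 2 / ((e3 - e2) - (e2 - e1))

# synthetic model: E_inf = -28.3 (units of MeV, illustrative), a = 5, k = 0.5
e_inf, a, k = -28.3, 5.0, 0.5
energy = lambda L: e_inf + a * math.exp(-2.0 * k * L)

print(round(ir_extrapolate(energy(6.0), energy(8.0), energy(10.0)), 6))  # -28.3
```

The closed form recovers E_inf exactly here because the synthetic data contain no ultraviolet contamination; the abstract's point is precisely that suppressing the UV error is what makes such controlled IR extrapolations possible.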
Extrapolation limitations of multilayer feedforward neural networks
NASA Technical Reports Server (NTRS)
Haley, Pamela J.; Soloway, Donald
1992-01-01
The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.
Ethanol kinetics: extent of error in back extrapolation procedures.
al-Lanqawi, Y; Moreland, T A; McEwen, J; Halliday, F; Durnin, C J; Stevenson, I H
1992-01-01
1. Plasma ethanol concentrations were measured in 24 male volunteers for 9 h after a single oral dose of 710 mg kg-1. 2. The rate of decline of the plasma ethanol concentration (k0; mean +/- s.d.), was 186 +/- 26 mg l-1 h-1. 3. In each individual, three elimination rates were used to back-extrapolate plasma ethanol concentrations over 3 and 5 h periods from observed values at 4 h and 6 h post-dosing assuming zero-order kinetics. The extrapolated values were then compared with the observed concentrations. 4. Using the mean k0 values for the subjects the mean error in back extrapolation was small but highly variable. The variability in the error increased with the length of the extrapolation period. 5. When a k0 value of 150 mg l-1 h-1 (a value often cited as a population mean) was used for back extrapolation this resulted in significant under-estimation of actual values whereas the use of a k0 value of 238 mg l-1 h-1 (the highest value observed in the present study) resulted in significant over-estimation of actual values. 6. These results indicate that because the kinetics of ethanol are associated with substantial inter-subject variability the use of a single slope value to back calculate blood concentrations can give rise to considerable error. PMID:1457265
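Zero-order back extrapolation is a one-line calculation: if the concentration declines at a constant rate k0, the concentration Δt hours earlier is simply the observed value plus k0·Δt. The k0 values below (186, 150, 238 mg l-1 h-1) come from the abstract; the observed concentration of 500 mg/l at 6 h is an invented illustration of the study's point that the chosen slope dominates the result.

```python
def back_extrapolate(c_obs, k0, hours):
    """Zero-order (linear) kinetics: earlier concentration = c_obs + k0*hours."""
    return c_obs + k0 * hours

# observed 500 mg/l at 6 h post-dose, back-extrapolated 5 h to 1 h post-dose:
print(back_extrapolate(500.0, 186.0, 5.0))  # 1430.0 with the study-mean k0
print(back_extrapolate(500.0, 150.0, 5.0))  # 1250.0 with the cited population k0
print(back_extrapolate(500.0, 238.0, 5.0))  # 1690.0 with the study's highest k0
```

The spread of 440 mg/l between the lowest and highest assumed slopes, over a single 5-h extrapolation, illustrates why a single population slope can produce considerable error for an individual.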
AXES OF EXTRAPOLATION IN RISK ASSESSMENTS
Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...
Builtin vs. auxiliary detection of extrapolation risk.
Munson, Miles Arthur; Kegelmeyer, W. Philip
2013-02-01
A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.
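One of the simplest possible auxiliary detectors makes the idea concrete: remember the per-feature range of the training data and flag any query falling outside that box as a potential extrapolation. This baseline is illustrative only and is not one of the specific detectors compared in the paper.

```python
class RangeExtrapolationDetector:
    """Auxiliary model: flags inputs outside the training data's bounding box."""

    def fit(self, rows):
        cols = list(zip(*rows))
        self.lo = [min(c) for c in cols]
        self.hi = [max(c) for c in cols]
        return self

    def is_extrapolation(self, row):
        return any(v < lo or v > hi
                   for v, lo, hi in zip(row, self.lo, self.hi))

det = RangeExtrapolationDetector().fit([[0.0, 1.0], [2.0, 3.0], [1.0, 2.0]])
print(det.is_extrapolation([1.5, 2.5]))  # False: inside the training box
print(det.is_extrapolation([5.0, 2.5]))  # True: first feature out of range
```

A box check misses interior holes in the training distribution, which is one reason richer auxiliary models (e.g., density estimates) can be more reliable risk detectors, in line with the paper's conclusion.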
Implicit Extrapolation Methods for Variable Coefficient Problems
NASA Technical Reports Server (NTRS)
Jung, M.; Ruede, U.
1996-01-01
Implicit extrapolation methods for the solution of partial differential equations are based on applying the extrapolation principle indirectly. Multigrid tau-extrapolation is a special case of this idea. In the context of multilevel finite element methods, an algorithm of this type can be used to raise the approximation order, even when the meshes are nonuniform or locally refined. Here previous results are generalized to the variable coefficient case and thus become applicable for nonlinear problems. The implicit extrapolation multigrid algorithm converges to the solution of a higher order finite element system. This is obtained without explicitly constructing higher order stiffness matrices but by applying extrapolation in a natural form within the algorithm. The algorithm requires only a small change of a basic low order multigrid method.
Cosmogony as an extrapolation of magnetospheric research
NASA Technical Reports Server (NTRS)
Alfven, H.
1984-01-01
A theory of the origin and evolution of the Solar System which considered electromagnetic forces and plasma effects is revised in light of information supplied by space research. In situ measurements in the magnetospheres and solar wind can be extrapolated outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of cloud properties essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation facilitates analysis of the cosmogonic processes by extrapolation of magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it is possible to reconstruct events 4 to 5 billion years ago with an accuracy of a few percent.
Endangered species toxicity extrapolation using ICE models
The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...
Extrapolation procedures in Mott electron polarimetry
NASA Technical Reports Server (NTRS)
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
Essentially nonoscillatory (ENO) reconstructions via extrapolation
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Jorgenson, Philip C. E.
1995-01-01
In this paper, the algorithm for determining the stencil of a one-dimensional Essentially Nonoscillatory (ENO) reconstruction scheme on a uniform grid is reinterpreted as being based on extrapolation. This view leads to another extension of ENO reconstruction schemes to two-dimensional unstructured triangular meshes. The key idea here is to select several cells of the stencil in one step based on extrapolation rather than one cell at a time. Numerical experiments confirm that the new scheme yields sharp nonoscillatory reconstructions and that it is about five times faster than previous schemes.
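The classical one-cell-at-a-time stencil choice that the paper reinterprets can be sketched in one dimension: grow the stencil left or right, at each step choosing the side whose Newton divided difference is smaller in magnitude, so the stencil stays on the smooth side of a discontinuity. A minimal sketch on a uniform grid:

```python
def divided_diff(x, f, j, k):
    """Newton divided difference f[x_j, ..., x_k]."""
    if j == k:
        return f[j]
    return (divided_diff(x, f, j + 1, k) - divided_diff(x, f, j, k - 1)) \
        / (x[k] - x[j])

def eno_stencil(x, f, i, width):
    """Grow a `width`-cell ENO stencil from cell i; returns (left, right)."""
    left = right = i
    while right - left + 1 < width:
        grow_left = (abs(divided_diff(x, f, left - 1, right))
                     if left > 0 else float("inf"))
        grow_right = (abs(divided_diff(x, f, left, right + 1))
                      if right < len(f) - 1 else float("inf"))
        if grow_left <= grow_right:   # extend toward the smoother side
            left -= 1
        else:
            right += 1
    return left, right

# step data: smooth zeros left of the jump between x = 4 and x = 5
x = list(range(10))
f = [0.0] * 5 + [1.0] * 5
print(eno_stencil(x, f, 4, 3))  # (2, 4): the stencil refuses to cross the jump
```

The paper's extrapolation-based view selects several cells per step instead of one, which is what makes its extension to unstructured triangular meshes both natural and markedly faster.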
Extrapolated implicit-explicit time stepping.
Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.
2010-01-01
This paper constructs extrapolated implicit-explicit time stepping methods that allow one to efficiently solve problems with both stiff and nonstiff components. The proposed methods are based on Euler steps and can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. Implicit-explicit schemes based on extrapolation are simple to construct, easy to implement, and straightforward to parallelize. This work establishes the existence of perturbed asymptotic expansions of global errors, explains the convergence orders of these methods, and studies their linear stability properties. Numerical results with stiff ODE, DAE, and PDE test problems confirm the theoretical findings and illustrate the potential of these methods to solve multiphysics multiscale problems.
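The building block of such schemes is extrapolation of Euler steps: combining two first-order Euler solutions with steps h and h/2 cancels the leading error term and yields a second-order result, and repeating the idea builds arbitrarily high order. The sketch below shows only the explicit, non-partitioned case, not the implicit-explicit splitting the paper constructs.

```python
import math

def euler(f, y0, t1, n):
    """Explicit Euler for y' = f(t, y) on [0, t1] with n steps."""
    h, y, t = t1 / n, y0, 0.0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def extrapolated_euler(f, y0, t1, n):
    # Richardson extrapolation of two Euler solutions (steps h and h/2)
    # cancels the O(h) error term, giving a second-order approximation.
    return 2.0 * euler(f, y0, t1, 2 * n) - euler(f, y0, t1, n)

f = lambda t, y: -y            # y' = -y, exact solution exp(-t)
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 1.0, 64) - exact)
err_extra = abs(extrapolated_euler(f, 1.0, 1.0, 64) - exact)
print(err_extra < err_euler)   # True: extrapolation is markedly more accurate
```

In the paper's implicit-explicit setting the same combination is applied to partitioned Euler steps (implicit on the stiff part, explicit on the nonstiff part), and the existence of a perturbed asymptotic error expansion is what justifies the order increase.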
Chiral extrapolation of SU(3) amplitudes
Ecker, Gerhard
2011-05-23
Approximations of chiral SU(3) amplitudes at NNLO are proposed to facilitate the extrapolation of lattice data to the physical meson masses. Inclusion of NNLO terms is essential for investigating convergence properties of chiral SU(3) and for determining low-energy constants in a controllable fashion. The approximations are tested with recent lattice data for the ratio of decay constants F_K/F_π.
Extrapolation discontinuous Galerkin method for ultraparabolic equations
NASA Astrophysics Data System (ADS)
Marcozzi, Michael D.
2009-02-01
Ultraparabolic equations arise from the characterization of the performance index of stochastic optimal control relative to ultradiffusion processes; they evidence multiple temporal variables and may be regarded as parabolic along characteristic directions. We consider theoretical and approximation aspects of a temporally order- and step-size-adaptive extrapolation discontinuous Galerkin method coupled with a spatial Lagrange second-order finite element approximation for a prototype ultraparabolic problem. As an application, we value a so-called Asian option from mathematical finance.
Extrapolating phosphorus production to estimate resource reserves.
Vaccari, David A; Strigul, Nikolay
2011-08-01
Various indicators of resource scarcity and methods for extrapolating resource availability are examined for phosphorus. These include resource lifetime, and trends in resource price, ore grade and discovery rates, and Hubbert curve extrapolation. Several of these indicate increasing scarcity of phosphate resources. Calculated resource lifetime is subject to a number of caveats such as unanticipated future changes in resource discovery, mining and beneficiation technology, population growth or per-capita demand. Thus it should be used only as a rough planning index or as a relative indicator of potential scarcity. This paper examines the uncertainty in one method for estimating available resources from historical production data. The confidence intervals for the parameters and predictions of the Hubbert curves are computed as they relate to the amount of information available. These show that Hubbert-type extrapolations are not robust for predicting the ultimately recoverable reserves or year of peak production of phosphate rock. Previous successes of the Hubbert curve are for cases in which there exist alternative resources, which is not the situation for phosphate. It is suggested that data other than historical production, such as population growth, identified resources and economic factors, should be included in making such forecasts. PMID:21440285
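One of the extrapolation methods examined, the Hubbert approach, can be sketched via Hubbert linearization: for a logistic depletion profile, annual production P divided by cumulative production Q is linear in Q, and the line's x-intercept estimates the ultimately recoverable reserves (URR). All numbers below are synthetic, constructed from a logistic curve; the abstract's warning is precisely that real production data do not support robust estimates of this kind.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# synthetic logistic depletion: URR = 1000, rate k = 0.1, peak at t = 50
URR, k, tm = 1000.0, 0.1, 50.0
cum = lambda t: URR / (1.0 + math.exp(-k * (t - tm)))   # cumulative production

years = range(100)
P = [cum(t + 1) - cum(t) for t in years]                # annual production
Q = [(cum(t) + cum(t + 1)) / 2 for t in years]          # midpoint cumulative
slope, intercept = fit_line(Q, [p / q for p, q in zip(P, Q)])
urr_est = -intercept / slope    # x-intercept of the P/Q vs. Q line
print(round(urr_est))           # close to the true URR of 1000
```

The fit succeeds here only because the data are exactly logistic; with noisy, pre-peak real data the confidence interval on the x-intercept becomes very wide, which is the non-robustness the paper quantifies.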
Surface dose measurement using TLD powder extrapolation
Rapley, P. (E-mail: rapleyp@tbh.net)
2006-10-01
Surface/near-surface dose measurements in therapeutic x-ray beams are important in determining the dose to the dermal and epidermal skin layers during radiation treatment. Accurate determination of the surface dose is a difficult but important task for proper treatment of patients. A new method of measuring surface dose in phantom through extrapolation of readings from various thicknesses of thermoluminescent dosimeter (TLD) powder has been developed and investigated. A device was designed, built, and tested that provides TLD powder thickness variation to a minimum thickness of 0.125 mm. Variations of the technique have been evaluated to optimize precision with consideration of procedural ease. Results of this study indicate that dose measurements (relative to D_max) in regions of steep dose gradient in the beam axis direction are possible with a precision (2 standard deviations [SDs]) as good as ±1.2% using the technique. The dosimeter was developed and evaluated using variations of the experimental method. A clinically practical procedure was determined, resulting in a measured surface dose of 20.4 ± 2% of the D_max dose for a 10 × 10 cm², 80-cm source-to-surface distance (SSD), Theratron 780 Cobalt-60 (⁶⁰Co) beam. Results obtained with TLD powder extrapolation compare favorably to other methods presented in the literature. The TLD powder extrapolation tool has been used clinically at the Northwestern Ontario Regional Cancer Centre (NWORCC) to measure surface dose effects under a number of conditions. Results from these measurements are reported. The method appears to be a simple and economical tool for surface dose measurement, particularly for facilities with TLD powder measurement capabilities.
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw
2016-04-01
Future Earth Orientation Parameters data are needed to compute the real-time transformation between the celestial and terrestrial reference frames. This transformation is realized by predictions of x, y pole coordinates data, UT1-UTC data, and a precession-nutation extrapolation model. This paper is focused on the prediction of pole coordinates data by a combination of the least-squares (LS) extrapolation and autoregressive (AR) prediction models (LS+AR). The AR prediction, which is applied to the LS extrapolation residuals of pole coordinates data, is not able to predict all of their frequency bands and is mostly tuned to predict subseasonal oscillations. The absolute values of differences between pole coordinates data and their LS+AR predictions increase with prediction length and depend mostly on the starting prediction epochs; thus, time series of these differences for 2, 4 and 8 weeks in the future were analyzed. Time-frequency spectra of these differences for different prediction lengths are very similar, showing some power in the frequency band corresponding to the prograde Chandler and annual oscillations, which means that the increase of prediction errors is caused by mismodelling of these oscillations by the LS extrapolation model. Thus, the LS+AR prediction method can be modified by taking into account an additional AR prediction correction computed from time series of these prediction differences for different prediction lengths. This additional AR prediction is mostly tuned to the seasonal frequency band of pole coordinates data.
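A much-simplified sketch of the LS+AR idea: fit a deterministic least-squares model (here just a mean plus one known-frequency annual harmonic), extrapolate it, then forecast its residuals with an AR(1) model. Real pole-coordinate prediction uses more harmonics (including the Chandler term) and a higher-order AR model; the series below is synthetic and all parameters are illustrative.

```python
import math

N, f = 240, 1.0 / 12.0   # 20 years of monthly samples, annual frequency
# synthetic "pole coordinate": offset + annual term + unmodelled oscillation
x = [3.0 + 0.8 * math.cos(2 * math.pi * f * t) + 0.1 * math.sin(0.9 * t)
     for t in range(N)]

# LS part: project onto 1, cos, sin (orthogonal over whole periods)
mean = sum(x) / N
a = 2.0 / N * sum(xi * math.cos(2 * math.pi * f * t) for t, xi in enumerate(x))
b = 2.0 / N * sum(xi * math.sin(2 * math.pi * f * t) for t, xi in enumerate(x))
ls = lambda t: (mean + a * math.cos(2 * math.pi * f * t)
                + b * math.sin(2 * math.pi * f * t))

# AR part: AR(1) on the LS residuals, phi from the lag-1 autocovariance ratio
r = [xi - ls(t) for t, xi in enumerate(x)]
phi = sum(r[t] * r[t - 1] for t in range(1, N)) / sum(ri * ri for ri in r[:-1])

def predict(h):
    """LS extrapolation plus a damped AR(1) forecast of the last residual."""
    return ls(N - 1 + h) + (phi ** h) * r[-1]

print(round(predict(4), 3))   # prediction 4 steps ahead
```

The AR(1) correction decays geometrically with horizon h, which mirrors the abstract's observation that the AR stage mainly helps at subseasonal lead times while errors at the Chandler and annual bands come from the LS stage.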
Dioxin equivalency: Challenge to dose extrapolation
Brown, J.F. Jr.; Silkworth, J.B.
1995-12-31
Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazole (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists contribute 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors, derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances are toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
Extrapolating Solar Dynamo Models Throughout the Heliosphere
NASA Astrophysics Data System (ADS)
Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.
2014-12-01
There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model, which arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model, which relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have modeled the behavior of the Convective and Babcock-Leighton models, respectively, in the convective zone of the sun. These simulations show the behavior of the models within the sun, while much less is known about their effects further away from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the sun, extending into the solar system, by means of potential field source surface extrapolations implemented in Python code that operates on data from CASH and BASH. Real solar data are also used to visualize supergranular flow in the BASH model, in order to learn more about the behavior of the Babcock-Leighton dynamo. From these extrapolations it has been determined that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much further into the heliosphere before reverting to a basic dipole field, providing 3D visualisations of the models distant from the sun.
Extrapolation of toxic indices among test objects.
Tichý, Miloň; Rucki, Marián; Roth, Zdeněk; Hanzlíková, Iveta; Vlková, Alena; Tumová, Jana; Uzlová, Rút
2010-12-01
The oligochaete Tubifex tubifex, the fathead minnow (Pimephales promelas), hepatocytes isolated from rat liver, and ciliated protozoa are completely different test objects, and yet their acute toxicity indices correlate. Correlation equations for specific effects were developed for a large heterogeneous series of compounds (QSAR, quantitative structure-activity relationships). Knowing those correlation equations and their statistical evaluation, one can extrapolate the toxic indices. The reason is that a common physicochemical property governs the biological effect, namely the partition coefficient between two immiscible phases, generally simulated by n-octanol and water. This may mean that the transport of chemicals toward a target is responsible for the magnitude of the effect, rather than reactivity, as one might assume. PMID:21331180
Extrapolation methods for dynamic partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1978-01-01
Several extrapolation procedures are presented for increasing the order of accuracy in time for evolutionary partial differential equations. These formulas are based on finite difference schemes in both the spatial and temporal directions. On practical grounds the methods are restricted to schemes that are fourth order in time and either second, fourth or sixth order in space. For hyperbolic problems the second order in space methods are not useful while the fourth order methods offer no advantage over the Kreiss-Oliger method unless very fine meshes are used. Advantages are first achieved using sixth order methods in space coupled with fourth order accuracy in time. Computational results are presented confirming the analytic discussions.
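The underlying idea, combining solutions computed at step sizes h and h/2 so that the leading temporal error term cancels, can be sketched on a scalar model problem. This is a generic Richardson-style illustration, not the paper's fourth-order schemes:

```python
import math

def euler(f, y0, t_end, n):
    """Explicit Euler with n steps; first-order accurate in time."""
    y, h = y0, t_end / n
    for _ in range(n):
        y = y + h * f(y)
    return y

# Model problem y' = -y, y(0) = 1; exact solution at t = 1 is e^(-1).
f = lambda y: -y
exact = math.exp(-1.0)

coarse = euler(f, 1.0, 1.0, 100)   # step size h
fine = euler(f, 1.0, 1.0, 200)     # step size h/2
extrap = 2.0 * fine - coarse       # cancels the O(h) error term

# |extrap - exact| is far smaller than either single-grid error.
```

The same weighting idea, applied to higher-order base schemes in space and time, yields the fourth-order-in-time combinations analyzed in the paper.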
The Role of Motion Extrapolation in Amphibian Prey Capture
2015-01-01
Sensorimotor delays decouple behaviors from the events that drive them. The brain compensates for these delays with predictive mechanisms, but the efficacy and timescale over which these mechanisms operate remain poorly understood. Here, we assess how prediction is used to compensate for prey movement that occurs during visuomotor processing. We obtained high-speed video records of freely moving, tongue-projecting salamanders catching walking prey, emulating natural foraging conditions. We found that tongue projections were preceded by a rapid head turn lasting ∼130 ms. This motor lag, combined with the ∼100 ms phototransduction delay at photopic light levels, gave a ∼230 ms visuomotor response delay during which prey typically moved approximately one body length. Tongue projections, however, did not significantly lag prey position but were highly accurate instead. Angular errors in tongue projection accuracy were consistent with a linear extrapolation model that predicted prey position at the time of tongue contact using the average prey motion during a ∼175 ms period one visual latency before the head movement. The model explained successful strikes where the tongue hit the fly, and unsuccessful strikes where the fly turned and the tongue hit a phantom location consistent with the fly's earlier trajectory. The model parameters, obtained from the data, agree with the temporal integration and latency of retinal responses proposed to contribute to motion extrapolation. These results show that the salamander predicts future prey position and that prediction significantly improves prey capture success over a broad range of prey speeds and light levels. SIGNIFICANCE STATEMENT Neural processing delays cause actions to lag behind the events that elicit them. To cope with these delays, the brain predicts what will happen in the future. While neural circuits in the retina and beyond have been suggested to participate in such predictions, few behaviors have been
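As a toy illustration of the linear extrapolation model described above, the sketch below predicts target position one visuomotor delay ahead from the average velocity over a recent window. The function and numbers are hypothetical stand-ins using the delay and window figures quoted in the abstract, not the authors' analysis code:

```python
def extrapolate_position(positions, dt, delay):
    """Predict target position one sensorimotor delay into the future by
    linear extrapolation of the average velocity over the sample window.

    positions : list of (x, y) samples, oldest first, spaced dt apart
    delay     : visuomotor delay to compensate (same time units as dt)
    """
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    span = dt * (len(positions) - 1)
    vx, vy = (x1 - x0) / span, (y1 - y0) / span   # average velocity
    return (x1 + vx * delay, y1 + vy * delay)

# Prey moving at 1 unit/s along x, sampled every 25 ms over a 175 ms
# window; compensate the ~230 ms visuomotor delay from the abstract.
samples = [(0.025 * i, 0.0) for i in range(8)]    # 175 ms of motion
pred = extrapolate_position(samples, 0.025, 0.230)
```

For straight-line prey motion this prediction lands exactly on the future position; the abstract's phantom-strike observations correspond to the case where the prey turns inside the delay window.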
Frequency extrapolation by nonconvex compressive sensing
Chartrand, Rick; Sidky, Emil Y.; Pan, Xiaochuan
2010-12-03
Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples is available. We apply the compressive sensing principle to the somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.
Hard hadronic collisions: extrapolation of standard effects
Ali, A.; Aurenche, P.; Baier, R.; Berger, E.; Douiri, A.; Fontannaz, M.; Humpert, B.; Ingelman, G.; Kinnunen, R.; Pietarinen, E.
1984-01-01
We study hard hadronic collisions for the proton-proton (pp) and the proton-antiproton (p anti p) option in the CERN LEP tunnel. Based on our current knowledge of hard collisions at the present CERN p anti p Collider, and with the help of quantum chromodynamics (QCD), we extrapolate to the next generation of hadron colliders with a centre-of-mass energy E_cm = 10 to 20 TeV. We estimate various signatures, trigger rates, event topologies, and associated distributions for a variety of old and new physical processes, involving prompt photons, leptons, jets, W± and Z bosons in the final state. We also calculate the maximum fermion and boson masses accessible at the LEP Hadron Collider. The standard QCD and electroweak processes studied here, being the main body of standard hard collisions, quantify the challenge of extracting new physics with hadron colliders. We hope that our estimates will provide a useful profile of the final states, and that our experimental physics colleagues will find this of use in the design of their detectors. 84 references.
Crawford, D.J.; Richmond, C.R.
1980-01-01
The rationale underlying interspecies extrapolation of metabolic data has been based primarily on pragmatic concerns. Little attention has been given to the extent to which such extrapolations have a firm epistemological basis. The strength of this approach for model-free (purely empirical) extrapolation and for extrapolation involving a variety of theoretical constructs is examined in this paper. An attempt is made to provide some understanding of the degree of confidence that can be placed in the extrapolation of metabolic data from one species to another. Published results for a wide variety of radionuclides are analyzed and the importance of these results to the field of nuclear medicine is explored. Problems inherent in the logic of extrapolation are then delineated in view of these historical data.
Direct Extrapolation of Biota-sediment Accumulation Factors (BSAFs)
Biota-sediment accumulation factors (BSAFs) for fish and shellfish were extrapolated directly from one location and species to other species, to other locations within a site, to other sites, and their combinations. The median errors in the extrapolations across species at a loc...
CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS
Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...
An algorithm for a generalization of the Richardson extrapolation process
NASA Technical Reports Server (NTRS)
Ford, William F.; Sidi, Avram
1987-01-01
The paper presents a recursive method, designated the W^(m)-algorithm, for implementing a generalization of the Richardson extrapolation process. Compared to the direct solution of the linear systems of equations defining the extrapolation procedure, this method requires a small number of arithmetic operations and very little storage. The technique is also applied to solve recursively the coefficient problem associated with the rational approximations obtained by applying a d-transformation to power series. In the course of development, a new recursive algorithm for implementing a very general extrapolation procedure is introduced for solving the same problem. A FORTRAN program for the W^(m)-algorithm is also appended.
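For orientation, the classical Richardson extrapolation process being generalized can itself be implemented recursively in a few lines. This is a textbook sketch on a scalar model problem, not the recursive algorithm of the paper:

```python
import math

def richardson_table(A, h0, levels, ratio=2.0):
    """Recursive Richardson extrapolation for a quantity A(h) with an
    error expansion A(h) = A + c1*h + c2*h**2 + ...; column j of the
    triangular table eliminates the h**j error term."""
    T = [[A(h0 / ratio**i)] for i in range(levels)]
    for j in range(1, levels):
        for i in range(j, levels):
            r = ratio**j
            T[i].append((r * T[i][j - 1] - T[i - 1][j - 1]) / (r - 1.0))
    return T

A = lambda h: (math.exp(h) - 1.0) / h   # tends to 1 as h -> 0, error O(h)
table = richardson_table(A, 0.5, 5)
best = table[-1][-1]   # far more accurate than the raw value A(0.5 / 16)
```

The generalization treated in the paper replaces the fixed powers of h with more general asymptotic sequences, which is what makes a dedicated recursive algorithm worthwhile.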
Fully vectorial laser resonator modeling by vector extrapolation methods
NASA Astrophysics Data System (ADS)
Asoubar, Daniel; Kuhn, Michael; Wyrowski, Frank
2015-02-01
The optimization of multi-parameter resonators requires flexible simulation techniques beyond the scalar approximation. Therefore we generalize the scalar Fox and Li algorithm for the transversal eigenmode calculation to a fully vectorial model. This modified eigenvalue problem is solved by two polynomial-type vector extrapolation methods, namely the minimal polynomial extrapolation and the reduced rank extrapolation. Compared to other eigenvalue solvers these techniques can also be applied to resonators including nonlinear components. As an example we show the calculation of an azimuthally polarized eigenmode emitted by a resonator containing a discontinuous phase element and a nonlinear active medium. The simulation is verified by experiments.
3D Hail Size Distribution Interpolation/Extrapolation Algorithm
NASA Technical Reports Server (NTRS)
Lane, John
2013-01-01
Radar data can usually detect hail; however, it is difficult for present-day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.
Ecotoxicological effects assessment: A comparison of several extrapolation procedures
Okkerman, P.C.; v.d. Plassche, E.J.; Slooff, W.; Van Leeuwen, C.J.; Canton, J.H.
1991-04-01
In the future, extrapolation procedures will become more and more important for the effect assessment of compounds in aquatic systems. To achieve a reliable method, these extrapolation procedures have to be evaluated thoroughly. As a first step, three extrapolation procedures are compared by means of two sets of data, consisting of (semi)chronic and acute toxicity test results for 11 aquatic species and 8 compounds. Because of its statistical basis, the extrapolation procedure of Van Straalen and Denneman is preferred over the procedures of the EPA and Stephan et al. The results of the calculations showed that lower numbers of toxicity data increase the chance of underestimating the risk of a compound. Therefore, it is proposed to extend the OECD guidelines for algae, Daphnia, and fish with chronic (aquatic) toxicity tests for more species of different taxonomic groups.
Extrapolation technique pitfalls in asymmetry measurements at colliders
NASA Astrophysics Data System (ADS)
Colletti, Katrina; Hong, Ziqing; Toback, David; Wilson, Jonathan S.
2016-09-01
Asymmetry measurements are common in collider experiments and can sensitively probe particle properties. Typically, data can only be measured in a finite region covered by the detector, so an extrapolation from the visible asymmetry to the inclusive asymmetry is necessary. Often a constant multiplicative factor is advantageous for the extrapolation and this factor can be readily determined using simulation methods. However, there is a potential, avoidable pitfall involved in the determination of this factor when the asymmetry in the simulated data sample is small. We find that to obtain a reliable estimate of the extrapolation factor, the number of simulated events required rises as the inverse square of the simulated asymmetry; this can mean that an unexpectedly large sample size is required when determining the extrapolation factor.
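The quoted scaling follows from the binomial uncertainty on a measured asymmetry, sigma_A = sqrt((1 - A^2)/N). The helper below is an illustrative sketch of that statistical argument, not the paper's derivation:

```python
def events_needed(asymmetry, rel_precision):
    """Simulated events needed to determine an asymmetry A to a given
    relative statistical precision r, using sigma_A = sqrt((1 - A^2)/N).
    Solving r*A = sigma_A for N gives N = (1 - A^2)/(r*A)^2, i.e. the
    sample size grows as the inverse square of the asymmetry."""
    a, r = asymmetry, rel_precision
    return (1.0 - a * a) / (r * a) ** 2

# Halving the simulated asymmetry roughly quadruples the required sample.
n_small_asym = events_needed(0.01, 0.05)   # A = 1%, 5% relative precision
n_large_asym = events_needed(0.02, 0.05)   # A = 2%, same precision
```

For a percent-level simulated asymmetry this already demands millions of events, which is the pitfall the paper warns about.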
NASA Astrophysics Data System (ADS)
Mirus, B. B.; Halford, K. J.; Sweetkind, D. S.; Fenelon, J.
2014-12-01
The utility of geologic frameworks for extrapolating hydraulic conductivities to length scales that are commensurate with hydraulic data has been assessed at the Nevada National Security Site in highly-faulted volcanic rocks. Observed drawdowns from eight, large-scale, aquifer tests on Pahute Mesa provided the necessary constraints to test assumed relations between hydraulic conductivity and interpretations of the geology. The investigated volume of rock encompassed about 40 cubic miles where drawdowns were detected more than 2 mi from pumping wells and traversed major fault structures. Five sets of hydraulic conductivities at about 500 pilot points were estimated by simultaneously interpreting all aquifer tests with a different geologic framework for each set. Each geologic framework was incorporated as prior information that assumed homogeneous hydraulic conductivities within each geologic unit. Complexity of the geologic frameworks ranged from an undifferentiated mass of rock with a single unit to 14 unique geologic units. Analysis of the model calibrations showed that a maximum of four geologic units could be differentiated where each was hydraulically unique as defined by the mean and standard deviation of log-hydraulic conductivity. Consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation were evaluated qualitatively with maps of transmissivity. Distributions of transmissivity were similar within the investigated extents regardless of geologic framework except for a transmissive streak along a fault in the Fault-Structure framework. Extrapolation was affected by underlying geologic frameworks where the variability of transmissivity increased as the number of units increased.
Implicit extrapolation methods for multilevel finite element computations
Jung, M.; Ruede, U.
1994-12-31
The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that it differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.
Extrapolation of carcinogenicity between species: Qualitative and quantitative factors
Gold, L.S.; Manley, N.B.; Ames, B.N.
1992-12-01
Prediction of human cancer risk from the results of rodent bioassays requires two types of extrapolation: a qualitative extrapolation from short-lived rodent species to long-lived humans, and a quantitative extrapolation from near-toxic doses in the bioassay to low-level human exposures. Experimental evidence on the accuracy of prediction between closely related species tested under similar experimental conditions (rats, mice, and hamsters) indicates that: (1) if a chemical is positive in one species, it will be positive in the second species about 75% of the time; however, since about 50% of test chemicals are positive in each species, by chance alone one would expect a predictive value between species of about 50%. (2) If a chemical induces tumors in a particular target organ in one species, it will induce tumors in the same organ in the second species about 50% of the time. Similar predictive values are obtained in an analysis of prediction from humans to rats or from humans to mice for known human carcinogens. Limitations of bioassay data for use in quantitative extrapolation are discussed, including constraints on both estimates of carcinogenic potency and of the dose-response in experiments with only two doses and a control. Quantitative extrapolation should be based on an understanding of mechanisms of carcinogenesis, particularly mitogenic effects that are present at high and not low doses.
Rule-based extrapolation: a continuing challenge for exemplar models.
Denton, Stephen E; Kruschke, John K; Erickson, Michael A
2008-08-01
Erickson and Kruschke (1998, 2002) demonstrated that in rule-plus-exception categorization, people generalize category knowledge by extrapolating in a rule-like fashion, even when they are presented with a novel stimulus that is most similar to a known exception. Although exemplar models have been found to be deficient in explaining rule-based extrapolation, Rodrigues and Murre (2007) offered a variation of an exemplar model that was better able to account for such performance. Here, we present the results of a new rule-plus-exception experiment that yields rule-like extrapolation similar to that of previous experiments, and yet the data are not accounted for by Rodrigues and Murre's augmented exemplar model. Further, a hybrid rule-and-exemplar model is shown to better describe the data. Thus, we maintain that rule-plus-exception categorization continues to be a challenge for exemplar-only models. PMID:18792504
Chiral Extrapolation of Lattice Data for Heavy Meson Hyperfine Splittings
X.-H. Guo; P.C. Tandy; A.W. Thomas
2006-03-01
We investigate the chiral extrapolation of the lattice data for the light-heavy meson hyperfine splittings D*-D and B*-B to the physical region for the light quark mass. The chiral loop corrections providing non-analytic behavior in m_π are consistent with chiral perturbation theory for heavy mesons. Since chiral loop corrections tend to decrease the already too low splittings obtained from linear extrapolation, we investigate two models to guide the form of the analytic background behavior: the constituent quark potential model, and the covariant model of QCD based on the ladder-rainbow truncation of the Dyson-Schwinger equations. The extrapolated hyperfine splittings remain clearly below the experimental values even allowing for the model dependence in the description of the analytic background.
Extrapolation method for the no-core shell model
NASA Astrophysics Data System (ADS)
Zhan, H.; Nogga, A.; Barrett, B. R.; Vary, J. P.; Navrátil, P.
2004-03-01
Nuclear many-body calculations are computationally demanding. An estimate of their accuracy is often hampered by the limited amount of computational resources even on present-day supercomputers. We provide an extrapolation method based on perturbation theory, so that the binding energy of a large basis-space calculation can be estimated without diagonalizing the Hamiltonian in this space. The extrapolation method is tested for 3H and 6Li nuclei. It will extend our computational abilities significantly and allow for reliable error estimates.
Efficient implementation of minimal polynomial and reduced rank extrapolation methods
NASA Technical Reports Server (NTRS)
Sidi, Avram
1990-01-01
The minimal polynomial extrapolation (MPE) and reduced rank extrapolation (RRE) are two effective techniques that have been used in accelerating the convergence of vector sequences, such as those obtained from iterative solution of linear and nonlinear systems of equations. Their definitions involve some linear least squares problems, and this causes difficulties in their numerical implementation. Time-efficient and numerically stable implementations for MPE and RRE are developed. A computer program written in FORTRAN 77 is also appended and applied to some model problems.
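A compact NumPy formulation of MPE conveys the least-squares structure mentioned above. This is a textbook sketch for a linear fixed-point iteration; the paper's FORTRAN 77 implementation addresses the numerical stability issues this naive version ignores:

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation (MPE) for a vector sequence.
    xs is a list of k+2 iterates x_0 ... x_{k+1}; returns the accelerated
    limit estimate by solving a small least-squares problem built from
    the first differences of the sequence."""
    X = np.column_stack(xs)
    U = np.diff(X, axis=1)                # u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    gamma /= gamma.sum()                  # normalized weights
    return X[:, :-1] @ gamma              # weighted combination of iterates

# Fixed-point iteration x <- M x + b; with enough iterates, MPE recovers
# the exact limit of a linear iteration from just a few vectors.
M = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
exact = np.linalg.solve(np.eye(2) - M, b)

xs = [np.zeros(2)]
for _ in range(3):
    xs.append(M @ xs[-1] + b)
approx = mpe(xs)                          # agrees with `exact`
```

Forming the normal equations naively, as `lstsq` effectively avoids doing, is exactly where the numerical difficulties alluded to in the abstract arise.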
Analytic Approximations for the Extrapolation of Lattice Data
Masjuan, Pere
2010-12-22
We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of next-to-next-to-leading-order low-energy constants. Lattice data for the ratio F_K/F_π is used to test the method.
Properties of infrared extrapolations in a harmonic oscillator basis
NASA Astrophysics Data System (ADS)
Coon, Sidney A.; Kruse, Michael K. G.
2016-02-01
The success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r², considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.
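The ir extrapolations discussed in this literature are commonly based on the form E(L) = E_inf + a0*exp(-2*k_inf*L). Once k_inf is fixed, the fit is linear in the variable x = exp(-2*k_inf*L), as this minimal sketch with synthetic, purely illustrative numbers shows:

```python
import numpy as np

# Standard ir extrapolation form: E(L) = E_inf + a0 * exp(-2 * k_inf * L).
k_inf = 0.5                                    # fm^-1, assumed known
L = np.linspace(6.0, 14.0, 9)                  # ir length scales in fm
E = -28.3 + 20.0 * np.exp(-2.0 * k_inf * L)    # synthetic "energies" (MeV)

# With k_inf fixed, the model is linear in x, so an ordinary linear fit
# yields both the amplitude a0 and the extrapolated energy E_inf.
x = np.exp(-2.0 * k_inf * L)
a0, E_inf = np.polyfit(x, E, 1)                # E = a0 * x + E_inf
```

When k_inf must be fitted as well, the problem becomes a nonlinear three-parameter fit, which is one reason a precise independent determination of the ir length scale is valuable.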
MODELING TOXICOKINETICS FOR CROSS-SPECIES EXTRAPOLATION OF DEVELOPMENTAL EFFECTS
Animal toxicology studies used to evaluate the potential for effects due to exposures during developmental periods are extrapolated to humans based upon the maternal exposure dose. The approach does not address whether the toxicokinetics are similar across species during the rel...
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles,...
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 18 2011-07-01 2011-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles,...
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 19 2013-07-01 2013-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles,...
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 19 2012-07-01 2012-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles,...
40 CFR 86.435-78 - Extrapolated emission values.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 19 2014-07-01 2014-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) CONTROL OF EMISSIONS FROM NEW AND IN-USE HIGHWAY VEHICLES AND ENGINES Emission Regulations for 1978 and Later New Motorcycles,...
An objective analysis technique for extrapolating tidal fields
NASA Technical Reports Server (NTRS)
Sanchez, B. V.
1984-01-01
An interpolation technique which allows accurate extrapolation of tidal height fields in the ocean basins by making use of selected satellite altimetry measurements and/or conventional gauge measurements was developed and tested. A normal mode solution for the Atlantic and Indian Oceans was obtained by means of a finite difference grid. Normal mode amplitude maps are presented.
Parallel solution of partial differential equations by extrapolation methods
Leland, Robert W.; Rollett, J. S.
2015-02-01
We have found, in the ROGE algorithm, an extrapolation process which is robust, effective and practically simple to implement. It removes the difficulty of needing to make a precise estimate of the over-relaxation parameter for Successive Over-Relaxation (SOR) type methods.
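For context, the sensitivity that makes the over-relaxation parameter hard to estimate can be seen in a bare-bones SOR iteration. This is a generic sketch of SOR itself; the ROGE algorithm is not reproduced here:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10_000):
    """Successive Over-Relaxation for A x = b.  The iteration count
    depends strongly on the relaxation parameter omega in (0, 2)."""
    x = np.zeros_like(b, dtype=float)
    n = len(b)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel sweep: new values below the diagonal, old above.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old) < tol:
            return x, it + 1
    return x, max_iter

# 1-D Poisson matrix: a well-chosen omega converges much faster than
# plain Gauss-Seidel (omega = 1).
n = 20
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x_gs, iters_gs = sor(A, b, 1.0)
x_opt, iters_opt = sor(A, b, 1.73)   # near-optimal omega for this matrix
```

The near-optimal omega here comes from the classical formula for this model matrix; in general it must be estimated, which is the difficulty the extrapolation approach above removes.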
MULTIPLE SOLVENT EXPOSURE IN HUMANS: CROSS-SPECIES EXTRAPOLATIONS
(Future Research Plan)
Vernon A. Benignus, Philip J. Bushnell and William K. Boyes
A few solvents can be safely studied in acute experiments in human subjects. Data exist in rats f...
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-01-01
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.
Omelyan, I P
2006-09-01
A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations. PMID:17025782
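As a rough illustration of the nongradient baseline named in the abstract, here is a minimal Störmer-Verlet integrator (velocity form) applied to a harmonic oscillator; the unit mass, unit stiffness, and step size are illustrative choices, not taken from the paper:

```python
import math

def verlet(x0, v0, force, dt, steps):
    """Stormer-Verlet (velocity form): one of the baseline
    nongradient integrators mentioned in the abstract."""
    x, v = x0, v0
    a = force(x)
    traj = [(x, v)]
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x)
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
        traj.append((x, v))
    return traj

# Harmonic oscillator with m = k = 1: exact period is 2*pi.
dt = 0.01
traj = verlet(x0=1.0, v0=0.0, force=lambda x: -x,
              dt=dt, steps=int(2 * math.pi / dt))
x_end, v_end = traj[-1]
```

Because the scheme is symplectic, the energy 0.5*(x**2 + v**2) stays close to its initial value of 0.5 over the whole period, which is the property the extrapolated gradientlike algorithms are designed to preserve at higher order.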
Population movement studied at microscale: experience and extrapolation.
Chapman, M
1987-12-01
The context for this paper is the nature of generalization deriving from the study of particular cases of human behavior. With specific reference to field research on population movement conducted among individuals, households, small groups, and village communities in 3rd world societies, it challenges the convention that both generalization and extrapolation are based inevitably and exclusively on the number of events subject to examination. An evaluation is made of the methodological aspects of 4 different studies of population mobility at microscale, undertaken between 1965 and 1977 in the Solomon Islands and northwest Thailand. On this basis, integrated field designs that incorporate a range of intersecting instruments are favored for their technical flexibility and logical strength. With case studies of 3rd world villages, market centers, and urban neighborhoods, generalization and extrapolation are based on depth of understanding and power of theoretical connections. PMID:12315702
Extrapolation from occupational studies: a substitute for environmental epidemiology.
Enterline, P E
1981-01-01
Extrapolation from occupational data to general environmental exposures gives some interesting results, and these results might be useful in our decision-making process. These results could never be observed by environmental epidemiology and this method probably represents the only way of quantifying the health effects of low-exposure levels. Three linear models for extrapolating to low levels are presented--one from Canadian data, one from American data and one from British data. One or more of these is applied to two recently publicized asbestos exposures; exposures resulting from asbestos heat shields in hair dryers and exposures in public school buildings. Predictions are derived as to the effects of asbestos exposures on cancer mortality. A comparison is made between predictions made on the basis of a linear and nonlinear model. PMID:7333259
Survival extrapolation using the poly-Weibull model
Demiris, Nikolaos; Lunn, David; Sharples, Linda D
2015-01-01
Recent studies of (cost-) effectiveness in cardiothoracic transplantation have required estimation of mean survival over the lifetime of the recipients. In order to calculate mean survival, the complete survivor curve is required but is often not fully observed, so that survival extrapolation is necessary. After transplantation, the hazard function is bathtub-shaped, reflecting latent competing risks which operate additively in overlapping time periods. The poly-Weibull distribution is a flexible parametric model that may be used to extrapolate survival and has a natural competing risks interpretation. In addition, treatment effects and subgroups can be modelled separately for each component of risk. We describe the model and develop inference procedures using freely available software. The methods are applied to two problems from cardiothoracic transplantation. PMID:21937472
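The additive, bathtub-shaped hazard described above translates directly into a survivor function S(t) = exp(-sum_j (t/scale_j)**shape_j), whose integral is the mean survival the studies require. A minimal sketch, with hypothetical component parameters (an early risk with shape < 1 plus a late wear-out risk with shape > 1):

```python
import math

def poly_weibull_survival(t, params):
    """S(t) = exp(-sum_j (t/scale_j)**shape_j): additive competing
    hazards, each Weibull. params = [(shape, scale), ...]."""
    return math.exp(-sum((t / lam) ** k for k, lam in params))

def mean_survival(params, t_max=200.0, n=20000):
    """Mean survival = integral of S(t) from 0 to infinity,
    approximated by the trapezoidal rule; t_max must be large
    enough that S(t_max) is effectively zero."""
    h = t_max / n
    s = 0.5 * (poly_weibull_survival(0.0, params)
               + poly_weibull_survival(t_max, params))
    s += sum(poly_weibull_survival(i * h, params) for i in range(1, n))
    return s * h

# Hypothetical components: early risk (shape 0.5) plus wear-out (shape 3).
params = [(0.5, 8.0), (3.0, 12.0)]
mu = mean_survival(params)
```

With a single component of shape 1 the model reduces to an exponential distribution, which gives an easy closed-form check on the integration.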
Survival extrapolation using the poly-Weibull model.
Demiris, Nikolaos; Lunn, David; Sharples, Linda D
2015-04-01
Recent studies of (cost-) effectiveness in cardiothoracic transplantation have required estimation of mean survival over the lifetime of the recipients. In order to calculate mean survival, the complete survivor curve is required but is often not fully observed, so that survival extrapolation is necessary. After transplantation, the hazard function is bathtub-shaped, reflecting latent competing risks which operate additively in overlapping time periods. The poly-Weibull distribution is a flexible parametric model that may be used to extrapolate survival and has a natural competing risks interpretation. In addition, treatment effects and subgroups can be modelled separately for each component of risk. We describe the model and develop inference procedures using freely available software. The methods are applied to two problems from cardiothoracic transplantation. PMID:21937472
An efficient method to evaluate energy variances for extrapolation methods
NASA Astrophysics Data System (ADS)
Puddu, G.
2012-08-01
The energy variance extrapolation method consists of relating the approximate energies in many-body calculations to the corresponding energy variances and inferring eigenvalues by extrapolating to zero variance. The method needs a fast evaluation of the energy variances. For many-body methods that expand the nuclear wavefunctions in terms of deformed Slater determinants, the best available method for the evaluation of energy variances scales with the sixth power of the number of single-particle states. We propose a new method which depends on the number of single-particle orbits and the number of particles rather than the number of single-particle states. We discuss as an example the case of 4He using the chiral N3LO interaction in a basis consisting of up to 184 single-particle states.
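The extrapolation step itself amounts to fitting approximate energies against their variances and reading off the zero-variance intercept. A hedged sketch assuming a simple linear fit (the synthetic eigenvalue of -28.3 and the data points are illustrative, not from the paper):

```python
def extrapolate_to_zero_variance(variances, energies):
    """Least-squares line E = E0 + c * var; returns the intercept E0,
    i.e. the energy extrapolated to zero energy variance."""
    n = len(variances)
    mx = sum(variances) / n
    my = sum(energies) / n
    sxx = sum((x - mx) ** 2 for x in variances)
    sxy = sum((x - mx) * (y - my) for x, y in zip(variances, energies))
    c = sxy / sxx                 # slope
    return my - c * mx            # zero-variance intercept

# Synthetic data: exact eigenvalue -28.3 (hypothetical), linear in variance.
var = [0.9, 0.6, 0.3, 0.1]
en = [-28.3 + 2.0 * v for v in var]
e0 = extrapolate_to_zero_variance(var, en)
```

In practice the fit may need higher-order terms in the variance; the linear form shown here is the leading behavior the method exploits.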
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
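The general idea, accelerating a sequence of iterants by extrapolating from their previous behavior, can be sketched with power iteration plus Aitken's delta-squared step. This illustrates the principle only; it is not the report's modified Gauss-Seidel scheme:

```python
def power_iteration_aitken(matvec, x, iters=30):
    """Power iteration for the dominant eigenvalue, with Aitken's
    delta-squared extrapolation applied to the sequence of
    Rayleigh-quotient estimates."""
    est = []
    for _ in range(iters):
        y = matvec(x)
        lam = (sum(yi * xi for yi, xi in zip(y, x))
               / sum(xi * xi for xi in x))        # Rayleigh quotient
        est.append(lam)
        norm = sum(yi * yi for yi in y) ** 0.5
        x = [yi / norm for yi in y]
    # Aitken extrapolation from the last three estimates.
    a, b, c = est[-3], est[-2], est[-1]
    denom = c - 2 * b + a
    return c - (c - b) ** 2 / denom if abs(denom) > 1e-15 else c

A = [[4.0, 1.0], [2.0, 3.0]]  # eigenvalues 5 and 2
matvec = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
lam = power_iteration_aitken(matvec, [1.0, 0.0])
```

The guard on `denom` matters: once the iterants have converged, the second difference vanishes and the extrapolation formula must be skipped, which is the kind of limit on extrapolation magnitude the report formalizes.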
Chiral extrapolation of the X(3872) binding energy
NASA Astrophysics Data System (ADS)
Baru, V.; Epelbaum, E.; Filin, A. A.; Gegelia, J.; Nefediev, A. V.
2016-02-01
The role of pion dynamics in the X(3872) charmonium-like state is studied in the framework of a renormalisable effective quantum field theory approach, and pions are found to play a substantial role in the formation of the X. Chiral extrapolation from the physical point to unphysically large pion masses is performed and the results are confronted with the lattice predictions. The proposed approach bridges the gap between the lattice calculations and the physical limit in mπ.
An efficient extrapolation to the (T)/CBS limit
NASA Astrophysics Data System (ADS)
Ranasinghe, Duminda S.; Barnes, Ericka C.
2014-05-01
We extrapolate to the perturbative triples (T)/complete basis set (CBS) limit using double ζ basis sets without polarization functions (Wesleyan-1-Triples-2ζ or "Wes1T-2Z") and triple ζ basis sets with a single level of polarization functions (Wesleyan-1-Triples-3ζ or "Wes1T-3Z"). These basis sets were optimized for 102 species representing the first two rows of the Periodic Table. The species include the entire set of neutral atoms, positive and negative atomic ions, as well as several homonuclear diatomic molecules, hydrides, rare gas dimers, polar molecules, such as oxides and fluorides, and a few transition states. The extrapolated Wes1T-(2,3)Z triples energies agree with (T)/CBS benchmarks to within ±0.65 mEh, while the rms deviations of comparable model chemistries W1, CBS-APNO, and CBS-QB3 for the same test set are ±0.23 mEh, ±2.37 mEh, and ±5.80 mEh, respectively. The Wes1T-(2,3)Z triples calculation time for the largest hydrocarbon in the G2/97 test set, C6H5Me+, is reduced by a factor of 25 when compared to W1. The cost-effectiveness of the Wes1T-(2,3)Z extrapolation validates the usefulness of the Wes1T-2Z and Wes1T-3Z basis sets which are now available for a more efficient extrapolation of the (T) component of any composite model chemistry.
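For orientation, a common two-point form of such basis-set extrapolations assumes E(X) = E_CBS + A·X^-3 in the cardinal number X and solves for the intercept. The Wes1T-(2,3)Z scheme itself may use different parameters, so treat this as a generic sketch with hypothetical correlation energies:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A * X**-3 (a generic Helgaker-style form;
    the Wes1T-(2,3)Z scheme may differ in detail)."""
    return (e_y * y**3 - e_x * x**3) / (y**3 - x**3)

# Hypothetical correlation energies (hartree) at zeta levels 2 and 3.
e_cbs = cbs_two_point(-0.250, 2, -0.270, 3)
```

The extrapolated value lies below the larger-basis energy, as expected for a monotonically converging correlation energy.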
Limitations on wind-tunnel pressure signature extrapolation
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Darden, Christine M.
1992-01-01
Analysis of some recent experimental sonic boom data has revived the hypothesis that there is a closeness limit to the near-field separation distance from which measured wind tunnel pressure signatures can be extrapolated to the ground as though generated by a supersonic-cruise aircraft. Geometric acoustic theory is used to derive an estimate of this distance and the sample data is used to provide a preliminary indication of practical separation distance values.
An efficient extrapolation to the (T)/CBS limit
Ranasinghe, Duminda S.; Barnes, Ericka C.
2014-05-14
We extrapolate to the perturbative triples (T)/complete basis set (CBS) limit using double ζ basis sets without polarization functions (Wesleyan-1-Triples-2ζ or “Wes1T-2Z”) and triple ζ basis sets with a single level of polarization functions (Wesleyan-1-Triples-3ζ or “Wes1T-3Z”). These basis sets were optimized for 102 species representing the first two rows of the Periodic Table. The species include the entire set of neutral atoms, positive and negative atomic ions, as well as several homonuclear diatomic molecules, hydrides, rare gas dimers, polar molecules, such as oxides and fluorides, and a few transition states. The extrapolated Wes1T-(2,3)Z triples energies agree with (T)/CBS benchmarks to within ±0.65 mEh, while the rms deviations of comparable model chemistries W1, CBS-APNO, and CBS-QB3 for the same test set are ±0.23 mEh, ±2.37 mEh, and ±5.80 mEh, respectively. The Wes1T-(2,3)Z triples calculation time for the largest hydrocarbon in the G2/97 test set, C6H5Me+, is reduced by a factor of 25 when compared to W1. The cost-effectiveness of the Wes1T-(2,3)Z extrapolation validates the usefulness of the Wes1T-2Z and Wes1T-3Z basis sets which are now available for a more efficient extrapolation of the (T) component of any composite model chemistry.
A simple extrapolation of thermodynamic perturbation theory to infinite order.
Ghobadi, Ahmadreza F; Elliott, J Richard
2015-09-21
Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A3/A2, where A(i) is the ith order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed Chain - Statistical Associating Fluid Theory (PC-SAFT). PMID:26395687
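If the order-to-order ratio is taken as constant beyond the highest computed order, the infinite-order correction is just a geometric tail. A minimal sketch of that recursion (the paper's actual ratio is density-dependent and Gaussian-shaped, which is not modeled here):

```python
def extrapolate_series(terms):
    """Sum a perturbation series whose order-to-order ratio is
    assumed constant beyond the highest computed order: the tail
    beyond the last term is the geometric sum A_n * r / (1 - r)."""
    r = terms[-1] / terms[-2]          # last available ratio
    if abs(r) >= 1:
        raise ValueError("ratio must be < 1 for the tail to converge")
    tail = terms[-1] * r / (1 - r)
    return sum(terms) + tail

# Exact geometric series as a sanity check: sum_{i>=1} 0.5**i = 1.
terms = [0.5, 0.25, 0.125, 0.0625]
total = extrapolate_series(terms)
```

As the abstract notes, such an extrapolation is analytic, so it sharpens the binodal but cannot reproduce the non-analytic behavior at the critical point itself.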
A simple extrapolation of thermodynamic perturbation theory to infinite order
Ghobadi, Ahmadreza F.; Elliott, J. Richard
2015-09-21
Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A3/A2, where A(i) is the ith order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed Chain - Statistical Associating Fluid Theory (PC-SAFT).
Villemot, François; Capelli, Riccardo; Colombo, Giorgio; van der Vaart, Arjan
2016-06-14
Improvements to the confinement method for the calculation of conformational free energy differences are presented. By taking advantage of phase space overlap between simulations at different frequencies, significant gains in accuracy and speed are reached. The optimal frequency spacing for the simulations is obtained from extrapolations of the confinement energy, and relaxation time analysis is used to determine time steps, simulation lengths, and friction coefficients. At postprocessing, interpolation of confinement energies is used to significantly reduce discretization errors in the calculation of conformational free energies. The efficiency of this protocol is illustrated by applications to alanine n-peptides and lactoferricin. For the alanine-n-peptide, errors were reduced between 2- and 10-fold and sampling times between 8- and 67-fold, while for lactoferricin the long sampling times at low frequencies were reduced 10-100-fold. PMID:27120438
Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil
2004-01-01
A study to determine a limiting distance to span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half has been completed, so the scope of this report is limited to the design of the models, and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure signature shapes were becoming 'flat-topped'. This trend toward a 'flat-top' pressure signature shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.
Verwichte, E.; Foullon, C.; White, R. S.; Van Doorsselaere, T.
2013-04-10
Two transversely oscillating coronal loops are investigated in detail during a flare on 2011 September 6 using data from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. We compare two independent methods to determine the Alfvén speed inside these loops. Through the period of oscillation and loop length, information about the Alfvén speed inside each loop is deduced seismologically. This is compared with the Alfvén speed profiles deduced from magnetic extrapolation and spectral methods using AIA bandpasses. We find that for both loops the two methods are consistent. Also, we find that the average Alfvén speed based on loop travel time is not necessarily a good measure to compare with the seismological result, which explains earlier reported discrepancies. Instead, the effect of density and magnetic stratification on the wave mode has to be taken into account. We discuss the implications of combining seismological, extrapolation, and spectral methods in deducing the physical properties of coronal loops.
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
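The feathering approach the abstract compares against can be sketched in one dimension: blend the known profile into the model estimate with a smooth weight ramp. Note how, exactly as the abstract observes, the blend alters known values inside the transition zone (all numbers here are hypothetical):

```python
import math

def feather_blend(known, estimate, width):
    """Blend a known surface profile into a model estimate over a
    transition zone of `width` samples, using a smooth cosine ramp
    (a simple 1D sketch of the feathering the abstract describes)."""
    n = len(known)
    out = []
    for i, (k, e) in enumerate(zip(known, estimate)):
        if i < n - width:
            w = 1.0                                   # pure known surface
        else:
            t = (i - (n - width)) / (width - 1)       # 0 -> 1 across the zone
            w = 0.5 * (1 + math.cos(math.pi * t))     # weight 1 -> 0 smoothly
        out.append(w * k + (1 - w) * e)
    return out

known = [1.0] * 10       # hypothetical patient surface heights
estimate = [2.0] * 10    # hypothetical statistical-shape-model estimate
blended = feather_blend(known, estimate, width=4)
```

The Thin Plate Spline alternative avoids this corruption by warping only the estimated surface to match the known vertices, which is why it performed best in the leave-one-out analyses.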
Extrapolation of vertical target motion through a brief visual occlusion.
Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco
2010-03-01
It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects. PMID:19882150
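The three motion laws translate into simple arrival-time arithmetic, which is what any threshold model of interception must anticipate. A sketch with a hypothetical viewing distance and launch speed (chosen so that even the decelerating target reaches the destination):

```python
import math

def arrival_time(distance, v0, a):
    """Time for a target starting at speed v0 with constant
    acceleration a to first cover `distance`; a = g for 1g,
    0 for constant speed, -g for reversed gravity."""
    if a == 0:
        return distance / v0
    # Solve distance = v0*t + 0.5*a*t**2; this root is the first crossing.
    disc = v0 * v0 + 2 * a * distance
    return (-v0 + math.sqrt(disc)) / a

g = 9.81
d, v0 = 0.15, 2.0   # hypothetical travel distance (m) and initial speed (m/s)
t_1g = arrival_time(d, v0, g)
t_0g = arrival_time(d, v0, 0.0)
t_neg = arrival_time(d, v0, -g)
```

The 1g target always arrives earliest and the -1g target latest, so a response threshold tuned to gravity systematically anticipates accelerating targets, consistent with the interpretation the authors propose.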
Extrapolated HPGe efficiency estimates based on a single calibration measurement
Winn, W.G.
1994-07-01
Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency {element_of} for a uniform sample in a geometry with volume V is extrapolated from the measured {element_of}{sub 0} of the base sample of volume V{sub 0}. Assuming all samples are centered atop the detector for maximum efficiency, {element_of} decreases monotonically as V increases about V{sub 0}, and vice versa. Extrapolation of high and low efficiency estimates {element_of}{sub h} and {element_of}{sub L} provides an average estimate of {element_of} = 1/2 [{element_of}{sub h} + {element_of}{sub L}] {plus_minus} 1/2 [{element_of}{sub h} {minus} {element_of}{sub L}] (general) where an uncertainty D{element_of} = 1/2 ({element_of}{sub h} {minus} {element_of}{sub L}] brackets limits for a maximum possible error. The {element_of}{sub h} and {element_of}{sub L} both diverge from {element_of}{sub 0} as V deviates from V{sub 0}, causing D{element_of} to increase accordingly. The above concepts guided development of both conservative and refined estimates for {element_of}.
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, D.R.; Mayer, F.L.; Ellersieck, Mark R.; Asfaw, A.
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled "Ecological Risk Analysis" (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to
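The single-species fallback described above is plain multiplication by the published extrapolation factor. A sketch using the factors quoted in the abstract (the LC50 input is hypothetical; the full ERA approach uses species sensitivity distributions instead):

```python
# Extrapolation factors quoted in the abstract: multiply a
# single-species acute toxicity value to estimate a value
# protective of sensitive members of the aquatic community.
FACTORS = {"rainbow_trout": 0.412, "bluegill": 0.331, "scud": 0.041}

def sensitive_value(lc50, species):
    """Estimated acute value for sensitive taxa from a single
    species' LC50 (single-species fallback, not the preferred
    species-sensitivity-distribution method)."""
    return lc50 * FACTORS[species]

# Hypothetical rainbow trout 96-h LC50 of 10.0 ug/L:
est = sensitive_value(10.0, "rainbow_trout")
```

The small scud factor (0.041) reflects how much more protective the extrapolation must be when starting from a relatively insensitive invertebrate value.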
A critical note on extrapolated helium pair potentials
NASA Astrophysics Data System (ADS)
Klopper, Wim
2001-07-01
It is difficult, if not impossible, to extrapolate the helium pair potential to the limit of a complete basis to within the accuracy needed to improve significantly on existing, directly computed potentials. Even though the basis-set convergence of calculations in a correlation-consistent basis with cardinal number X is dominated by the X^-3 term, it is important to account for energy terms that converge more rapidly than X^-3. The electron-correlation contribution to the potential will be overestimated noticeably when these terms are not properly taken into account.
Optical tomography by the temporally extrapolated absorbance method
NASA Astrophysics Data System (ADS)
Oda, Ichiro; Eda, Hideo; Tsunazawa, Yoshio; Takada, Michinosuke; Yamada, Yukio; Nishimura, Goro; Tamura, Mamoru
1996-01-01
The concept of the temporally extrapolated absorbance method (TEAM) for optical tomography of turbid media has been verified by fundamental experiments and image reconstruction. The TEAM uses the time-resolved spectroscopic data of the reference and object to provide projection data that are processed by conventional backprojection. Optical tomography images of a phantom consisting of axisymmetric double cylinders were experimentally obtained with the TEAM and time-gating and continuous-wave (CW) methods. The reconstructed TEAM images are compared with those obtained with the time-gating and CW methods and are found to have better spatial resolution.
Nonlinear Force-Free Field Extrapolation of NOAA AR 0696
NASA Astrophysics Data System (ADS)
Thalmann, J. K.; Wiegelmann, T.
2007-12-01
We investigate the 3D coronal magnetic field structure of NOAA AR 0696 in the period of November 09-11, 2004, before and after an X2.5 flare (occurring around 02:13 UT on November 10, 2004). The coronal magnetic field dominates the structure of the solar corona and consequently plays a key role for the understanding of the initiation of flares. The most accurate presently available method to derive the coronal magnetic field is nonlinear force-free field extrapolation from measurements of the photospheric magnetic field vector. These vector magnetograms were processed from Stokes I, Q, U, and V measurements of the Big Bear Solar Observatory and extrapolated into the corona with the nonlinear force-free optimization code developed by Wiegelmann (2004). We analyze the corresponding time series of coronal equilibria regarding topology changes of the 3D coronal magnetic field during the flare. Furthermore, quantities such as the temporal evolution of the magnetic energy and helicity are computed.
Extrapolating W -associated jet-production ratios at the LHC
NASA Astrophysics Data System (ADS)
Bern, Z.; Dixon, L. J.; Febres Cordero, F.; Höche, S.; Ita, H.; Kosower, D. A.; Maître, D.
2015-07-01
Electroweak vector-boson production, accompanied by multiple jets, is an important background to searches for physics beyond the standard model. A precise and quantitative understanding of this process is helpful in constraining deviations from known physics. We study four key ratios in W+n-jet production at the LHC. We compute the ratio of cross sections for W+n- to W+(n-1)-jet production as a function of the minimum jet transverse momentum. We also study the ratio differentially, as a function of the W-boson transverse momentum; as a function of the scalar sum of the jet transverse energy, H_T^jets; and as a function of certain jet transverse momenta. We show how to use such ratios to extrapolate differential cross sections to W+6-jet production at next-to-leading order, and we cross-check the method against a direct calculation at leading order. We predict the differential distribution in H_T^jets for W+6 jets at next-to-leading order using such an extrapolation. We use the BlackHat software library together with SHERPA to perform the computations.
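The extrapolation idea, ratios of successive jet multiplicities being roughly constant ("staircase" scaling), can be sketched with toy numbers; the cross sections below are hypothetical placeholders, not the paper's NLO results:

```python
def extrapolate_cross_section(sigma_known, n_known, n_target):
    """Extrapolate sigma(W + n jets) to higher multiplicity by
    assuming the ratio R = sigma_n / sigma_(n-1) of the last two
    known multiplicities stays constant (the staircase-scaling
    idea behind the ratio study)."""
    r = sigma_known[-1] / sigma_known[-2]
    sigma = sigma_known[-1]
    for _ in range(n_target - n_known):
        sigma *= r
    return sigma

# Hypothetical cross sections (pb) for W + 3, 4, 5 jets:
sigma_6 = extrapolate_cross_section([100.0, 25.0, 6.25], n_known=5, n_target=6)
```

The paper applies the same logic differentially, bin by bin in observables such as H_T^jets, rather than only to total cross sections as in this toy version.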
Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.
Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel
2016-09-01
We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the way toward a theoretical protocol to systematically compute the solvation energies of complex organic ions. PMID:27420562
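A basic power-law extrapolation of the kind described reduces to a linear fit in n^(-1/3), with the intercept giving the bulk solvation energy. A sketch with synthetic energies (the functional form is the common droplet-size power law; the numbers are illustrative, not the paper's):

```python
def fit_bulk_solvation(sizes, energies):
    """Least-squares fit of E(n) = E_bulk + a * n**(-1/3); the
    intercept E_bulk is the extrapolated bulk solvation energy
    (a basic power-law form, as discussed in the abstract)."""
    xs = [n ** (-1.0 / 3.0) for n in sizes]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(energies) / m
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (e - my) for x, e in zip(xs, energies))
    a = sxy / sxx
    return my - a * mx  # E_bulk, the infinite-size intercept

# Synthetic check: E_bulk = -75.0 and a = 40.0 (hypothetical kcal/mol).
sizes = [100, 300, 1000]
energies = [-75.0 + 40.0 * n ** (-1.0 / 3.0) for n in sizes]
e_bulk = fit_bulk_solvation(sizes, energies)
```

Three droplet sizes between 100 and 1000 suffice here, mirroring the abstract's conclusion that a small data set in that range already pins down the bulk limit.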
Estimation of macro velocity models by wave field extrapolation
NASA Astrophysics Data System (ADS)
Cox, Hendricus Lambertus Hubertus
A method to estimate accurate macro velocity models for prediction of traveltimes of seismic waves in the earth's subsurface is developed. The sensitivity of prestack migration is used to estimate the model, and since model errors are expressed in the quality of the migration result, the migration process itself can be used to determine these errors. Using an initial model, shot records are downward extrapolated to grid points (depth points) in the subsurface. The extrapolated data can be reordered into so-called common depth point (CDP) gathers, image gathers, and focus panels. The deviation from horizontal alignment is used to quantify the errors in the model and to apply update corrections accordingly. The analysis can be done before or after stacking over all shot records (CDP stacking). The previously mentioned focus panels are generated by CDP stacking. The alignment analysis then reduces to a simple focusing analysis. The examples discussed show that horizontal alignment gives accurate macro velocity models for prestack depth migration. Focus panels can be difficult to interpret in complicated situations, where it is impossible to converge to the correct solution with focus panels only. The process should be guided by macrogeologic models of the area. In complicated situations, a layer-stripping strategy is preferred.
California's Proposition 65: extrapolating animal toxicity to humans.
Kilgore, W W
1990-01-01
In 1986, the voters of California passed a law regarding the concept of extrapolating animal toxicity data to humans. The California Safe Drinking Water and Toxic Enforcement Act of 1986, known as Proposition 65, does five things: 1. It creates a list of chemicals (including a number of agricultural chemicals) known to cause cancer or reproductive toxicity; 2. It limits discharges of listed chemicals to drinking water sources; 3. It requires prior warning before exposure to listed chemicals by anyone in the course of doing business; 4. It creates a list of chemicals requiring testing for carcinogenicity or reproductive toxicity; and 5. It requires the Governor to consult with qualified experts (a 12-member "Scientific Advisory Panel" was appointed) as necessary to carry out his duties. This paper discusses the details and implications of this proposition. Areas of responsibility have been assigned. The definition of significant risk is being addressed. PMID:2248253
Survival Extrapolation in the Presence of Cause Specific Hazards
Benaglia, Tatiana; Jackson, Christopher H.; Sharples, Linda D.
2016-01-01
Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Since causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators (ICD) for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from ICD. PMID:25413028
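A minimal numeric sketch of the polyhazard idea in this abstract: all-cause mortality expressed as a sum of latent cause-specific hazards, with a proportional effect (hazard ratio) applied only to the cause targeted by the treatment. The Weibull form and all parameter values are illustrative assumptions, not the paper's Bayesian model:

```python
import math

def survival(t, hazards, ratio_on_cause0=1.0):
    """S(t) for a polyhazard model h(t) = sum_k h_k(t), scaling cause 0.

    hazards: list of (shape, scale) Weibull pairs; the cumulative hazard of a
    Weibull is H(t) = (t / scale)**shape, and S(t) = exp(-sum_k H_k(t)).
    """
    H = 0.0
    for k, (shape, scale) in enumerate(hazards):
        Hk = (t / scale) ** shape
        H += ratio_on_cause0 * Hk if k == 0 else Hk
    return math.exp(-H)

# Cause 0 = cardiac death, cause 1 = other mortality (hypothetical parameters):
hazards = [(1.5, 12.0), (1.0, 25.0)]
print(survival(10.0, hazards))                       # untreated
print(survival(10.0, hazards, ratio_on_cause0=0.5))  # ICD-like halving of cause-0 hazard
```

Scaling only the cause-specific term is what distinguishes this from the all-cause proportional-hazards extrapolation the paper argues against.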
ERIC Educational Resources Information Center
Boudreaux, Gregory M.; Wells, M. Scott
2007-01-01
Everyone with a thorough knowledge of single variable calculus knows that integration can be used to find the length of a curve on a given interval, called its arc length. Fortunately, if one endeavors to pose and solve more interesting problems than simply computing lengths of various curves, there are techniques available that do not require an…
Theoretical Basis for Finite Difference Extrapolation of Sonic Boom Signatures
NASA Technical Reports Server (NTRS)
Plotkin, Kenneth J.
1996-01-01
Calculation of sonic boom signatures for aircraft has traditionally followed the methods of Whitham and Walkden. The wave disturbance generated by the vehicle is obtained by area rule linearized supersonic flow methods, which yield a locally axisymmetric asymptotic solution. This solution is acoustic in nature, i.e., first order in disturbance quantities, and corresponds to ray acoustics. Cumulative nonlinear distortion of the signature is incorporated by using this solution to adjust propagation speed to first order, thus yielding a solution second order in disturbance quantities. The effects of atmospheric gradients are treated by Blokhintzov's method of geometrical acoustics. Both nonlinear signature evolution and ray tracing are applied as if the pressure field very close to the vehicle were actually that given by the source term (the 'F-function') of the asymptotic linearized flow solution. The viewpoint is thus that the flow solution exists at a small radius near the vehicle, and may be treated as an input to an extrapolation procedure consisting of ray tracing and nonlinear aging. The F-function is often regarded as a representation of a near-field pressure signature, and it is common for computational implementations to treat it interchangeably with the pressure signature. There is a 'matching radius' between the source function and the subsequent propagation extrapolation. This viewpoint has been supported by wind tunnel tests of simple models, and very typically yields correct results for actual flight vehicles. The assumption that the F-function and near-field signature are interchangeable is generally not correct. The flowfield of a vehicle which is not axisymmetric contains crossflow components which are very significant at small radii and less so at larger distances. From an acoustical viewpoint, the crossflow is equivalent to source diffraction portions of the wave field. Use of the F-function as a near field signature effectively assumes that the
Measuring Thermodynamic Length
Crooks, Gavin E
2007-09-07
Thermodynamic length is a metric distance between equilibrium thermodynamic states. Among other interesting properties, this metric asymptotically bounds the dissipation induced by a finite-time transformation of a thermodynamic system. It is also connected to the Jensen-Shannon divergence, Fisher information, and Rao's entropy differential metric. Therefore, thermodynamic length is of central interest in understanding matter out of equilibrium. In this Letter, we will consider how to define thermodynamic length for a small system described by equilibrium statistical mechanics and how to measure thermodynamic length within a computer simulation. Surprisingly, Bennett's classic acceptance ratio method for measuring free energy differences also measures thermodynamic length.
Ramp response estimation and spectrum extrapolation for ultrasonic scattering
NASA Astrophysics Data System (ADS)
Clark, G. A.
1984-08-01
A combined application of digital signal processing, estimation theory, and scattering theory is used to attack the important problem of target identification. The basic problem is that of examining an object with an ultrasonic pulse and using the reflected signal to determine various properties of the object. Typically, an impulse response h(t) can be calculated from knowledge of the input signal x(t) and the reflected output signal y(t). If the impulse response can be found, it can sometimes contain important information about the object. We have studied some new algorithms for impulse response estimation. It is also well known that the ramp response contains information about the cross-sectional area of the scatterer. The ramp response can be calculated directly from the impulse response, but the estimate of cross-sectional area is degraded by the fact that the ultrasonic transducer severely bandlimits the data. Algorithms have been produced for extrapolating the ramp response spectrum to improve the cross-sectional area estimates from the ramp response technique. Experimental results demonstrating the estimation of properties of a known flaw (created by a saw cut) in a block of aluminum are presented.
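The relation between the two responses mentioned above is simple: the ramp is the twice-integrated impulse, so the ramp response follows from the impulse response by double time integration. A discrete sketch (synthetic impulse, not the ultrasonic flaw data):

```python
import numpy as np

def ramp_response(impulse_response, dt):
    """Double cumulative sum approximating the double time integral of h(t)."""
    return np.cumsum(np.cumsum(impulse_response)) * dt * dt

dt = 0.01
h = np.zeros(100)
h[10] = 1.0 / dt            # discrete unit impulse at t = 0.1 s
r = ramp_response(h, dt)
print(r[-1])                # grows linearly after the impulse, as a ramp response should
```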
Detail enhancement of blurred infrared images based on frequency extrapolation
NASA Astrophysics Data System (ADS)
Xu, Fuyuan; Zeng, Deguo; Zhang, Jun; Zheng, Ziyang; Wei, Fei; Wang, Tiedan
2016-05-01
A novel algorithm for enhancing the details of blurred infrared images based on frequency extrapolation is presented in this paper. Unlike other work, this algorithm focuses on predicting higher-frequency information from the Laplacian-pyramid decomposition of the blurred image. The algorithm uses the first level of the high-frequency component of the pyramid to reverse-generate a higher, non-existing frequency component, which is added back to the histogram-equalized input blurred image. A simple nonlinear operator is used to analyze the extracted first-level high-frequency component of the pyramid. Two critical parameters enter the calculation: the clipping parameter C and the scaling parameter S. A detailed analysis of how these two parameters act during the procedure is illustrated with figures in this paper. The blurred image becomes clearer, and the detail is enhanced by the added higher-frequency information. The algorithm has the advantages of computational simplicity and good performance, and it can be deployed in real-time industrial applications. Extensive experiments and illustrations of the algorithm's performance are given to demonstrate its effectiveness.
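A rough sketch of the frequency-extrapolation idea: take the finest high-frequency (Laplacian) component of a blurred signal, pass it through a simple clip-and-scale nonlinearity (the parameters C and S named in the abstract), and add the synthesized detail back. A 1-D signal stands in for the image, and the low-pass filter is a simplifying assumption, not the authors' exact pyramid code:

```python
import numpy as np

def enhance(signal, C=0.5, S=2.0):
    """Add synthesized high-frequency detail back onto a blurred 1-D signal."""
    blur = np.convolve(signal, [0.25, 0.5, 0.25], mode="same")  # low-pass
    detail = signal - blur                 # finest "pyramid" level
    synthesized = S * np.clip(detail, -C, C)  # nonlinear operator with C and S
    return signal + synthesized

x = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
print(enhance(x))  # edges of the step are steepened by the synthesized detail
```

The clipping bounds how much any one coefficient contributes, while the scaling controls the strength of the synthesized band.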
An empirical relationship for extrapolating sparse experimental lap joint data.
Segalman, Daniel Joseph; Starr, Michael James
2010-10-01
Correctly incorporating the influence of mechanical joints in built-up mechanical systems is a critical element of model development for structural dynamics predictions. Quality experimental data are often difficult to obtain and are rarely sufficient to fully determine the parameters of relevant mathematical models. On the other hand, fine-mesh finite element (FMFE) modeling facilitates innumerable numerical experiments at modest cost. Detailed FMFE analysis of built-up structures with frictional interfaces reproduces trends among problem parameters found experimentally, but there are qualitative differences. Those differences are currently ascribed to the very approximate nature of the friction model available in most finite element codes. Though numerical simulations are insufficient to produce qualitatively correct behavior of joints, some relations, developed here through observation of a multitude of numerical experiments, suggest interesting relationships among joint properties measured under different loading conditions. These relationships can be generalized into forms consistent with data from physical experiments. One such relationship, developed here, expresses the rate of energy dissipation per cycle within the joint under various combinations of extensional and clamping load in terms of dissipation under other load conditions. The use of this relationship, though not exact, is demonstrated for the purpose of extrapolating a representative set of experimental data to span the range of variability observed in real data.
Impact ejecta dynamics in an atmosphere - Experimental results and extrapolations
NASA Technical Reports Server (NTRS)
Schultz, P. H.; Gault, D. E.
1982-01-01
It is noted that the impacts of 0.635-cm aluminum projectiles at 6 km/sec into fine pumice dust, at 1 atm, generate a ball of ionized gas behind an expanding curtain of upward moving ejecta. The gas ball forms a toroid which dissolves as it is driven along the interior of the ejecta curtain, by contrast to near-surface explosions in which a fireball envelops early-time crater growth. High frame rate Schlieren photographs show that the atmosphere at the base of the ejecta curtain is initially turbulent, but later forms a vortex. These experiments suggest that although small size ejecta may be decelerated by air drag, they are not simply lofted and suspended but become incorporated in an ejecta cloud that is controlled by air flow which is produced by the response of the atmosphere to the impact. The extrapolation of these results to large body impacts on the earth suggests such contrasts with laboratory experiments as a large quantity of impact-generated vapor, the supersonic advance of the ejecta curtain, the lessened effect of air drag due to the tenuous upper atmosphere, and the role of secondary cratering.
Biological bases for cancer dose-response extrapolation procedures
Wilson, J.D.
1991-01-01
The Moolgavkar-Knudson theory of carcinogenesis of 1981 incorporates the viable portions of earlier multistage theories and provides the basis for both the linearized multistage and biologically based dose-response extrapolation methodologies. This theory begins with the premise that cancer occurs because irreversible genetic changes (mutations) are required for transformation of normal cells to cancer cells; incidence data are consistent with only two critical changes being required, but a small contribution from three- or higher-mutation pathways cannot be ruled out. Events or agents that increase the rate of cell division also increase the probability that one of these critical mutations will occur, by reducing the time available for repair of DNA lesions before mitosis. The DNA lesions can occur from background causes or from treatment with mutagenic agents. Thus, the equations describing incidence as a function of exposure to carcinogenic agents include two separate terms, one accounting for mutagenic and one for mitogenic stimuli. At high exposures these interact, producing synergism and high incidence rates, but at low exposures they are effectively independent. The multistage models that are now used include only terms corresponding to the mutagenic stimuli and thus fail to adequately describe incidence at high dose rates. Biologically based models attempt to include mitogenic effects as well; they are usually limited by data availability.
Interspecies Gene Name Extrapolation--A New Approach.
Petric, Roxana Cojocneanu; Braicu, Cornelia; Bassi, Cristian; Pop, Laura; Taranu, Ionelia; Dragos, Nicolae; Dumitrascu, Dan; Negrini, Massimo; Berindan-Neagoe, Ioana
2015-01-01
The use of animal models has facilitated numerous scientific developments, especially when employing "omics" technologies to study the effects of various environmental factors on humans. Our study presents a new bioinformatics pipeline suitable when the generated microarray data from animal models does not contain the necessary human gene name annotation. We conducted single color gene expression microarray on duodenum and spleen tissue obtained from pigs which have been exposed to zearalenone and Escherichia coli contamination, either alone or combined. By performing a combination of file format modifications and data alignments using various online tools as well as a command line environment, we performed the pig to human gene name extrapolation with an average yield of 58.34%, compared to 3.64% when applying more simple methods. In conclusion, while online data analysis portals on their own are of great importance in data management and assessment, our new pipeline provided a more effective approach for a situation which can be frequently encountered by researchers in the "omics" era. PMID:26407293
Time-domain incident-field extrapolation technique based on the singularity-expansion method
Klaasen, J.J.
1991-05-01
In this report, a method is presented to extrapolate measurements from Nuclear Electromagnetic Pulse (NEMP) assessments directly in the time domain. This method is based on a time-domain extrapolation function which is obtained from the Singularity Expansion Method representation of the measured incident field of the NEMP simulator. Once the time-domain extrapolation function is determined, the responses recorded during an assessment can be extrapolated simply by convolving them with the time-domain extrapolation function. It is found that to obtain useful extrapolated responses, the incident field measurements need to be made minimum phase; otherwise unbounded results can be obtained. Results obtained with this technique are presented, using data from actual assessments.
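The extrapolation step described above reduces to a single convolution: recorded response convolved with the time-domain extrapolation function. A minimal discrete sketch with synthetic exponential waveforms standing in for the NEMP data:

```python
import numpy as np

def extrapolate_response(recorded, extrap_fn, dt):
    """Discrete convolution approximating (r * f)(t); dt is the sample step."""
    return np.convolve(recorded, extrap_fn) * dt

dt = 0.1
t = np.arange(0, 5, dt)
recorded = np.exp(-t)        # response recorded during an assessment (synthetic)
extrap_fn = np.exp(-2 * t)   # time-domain extrapolation function (synthetic)
out = extrapolate_response(recorded, extrap_fn, dt)
print(len(out), out.max())   # full convolution: len(r) + len(f) - 1 samples
```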
Complexities of glucuronidation affecting in vitro in vivo extrapolation.
Lin, Jiunn H; Wong, Bradley K
2002-12-01
Glucuronidation is responsible for the clearance of a diverse range of drug and chemicals whose topology confers properties that complicate in vitro-in vivo clearance correlations as compared to those possible for oxidative metabolism. The active site of the UGTs faces the inside of the luminal space of the endoplasmic reticulum, thus presenting diffusional barriers for substrates, the cosubstrate, UDPGA, and resultant glucuronide products. Transport processes for the cosubstrate UDPGA and glucuronidated products likely contribute to the well-known latency phenomena in which exogenous detergents or alamethicin are required for maximal UGT activity in microsomes. This complicates the extrapolation of results of in vitro clearance studies to the in vivo situation. Even with activation, the microsomal-based clearance values still underestimate the actual in vivo UGT-mediated clearance; therefore latency is not the only explanation for the poor correlation. Recent data indicate that hepatocytes are a promising in vitro system that can be used for the early evaluation of human clearance behavior of drug candidates. Both induction and inhibition of UGT-mediated clearance are a source of clinical drug-drug interactions. Emerging evidence indicates that the same mechanisms identified in the regulation of CYP enzymes also are involved in regulation of the UGTs, i.e., CAR, AH and probably PXR mediate regulation of UGT1A1, 1A6 and UGT2B7, respectively. In contrast to CYP-mediated interactions, with a few exceptions, the magnitude of UGT-mediated interactions are less than 2-fold because of the relatively high UGT Km values and substrate overlap among the multiple isozymes. PMID:12369890
Trinkaus, Erik; Holliday, Trenton W.; Auerbach, Benjamin M.
2014-01-01
The Late Pleistocene archaic humans from western Eurasia (the Neandertals) have been described for a century as exhibiting absolutely and relatively long clavicles. This aspect of their body proportions has been used to distinguish them from modern humans, invoked to account for other aspects of their anatomy and genetics, used in assessments of their phylogenetic polarities, and used as evidence for Late Pleistocene population relationships. However, it has been unclear whether the usual scaling of Neandertal clavicular lengths to their associated humeral lengths reflects long clavicles, short humeri, or both. Neandertal clavicle lengths, along with those of early modern humans and latitudinally diverse recent humans, were compared with both humeral lengths and estimated body masses (based on femoral head diameters). The Neandertals do have long clavicles relative to their humeri, even though they fall within the ranges of variation of early and recent humans. However, when scaled to body masses, their humeral lengths are relatively short, and their clavicular lengths are indistinguishable from those of Late Pleistocene and recent modern humans. The few sufficiently complete Early Pleistocene Homo clavicles seem to have relative lengths also well within recent human variation. Therefore, appropriately scaled clavicular length seems to have varied little through the genus Homo, and it should not be used to account for other aspects of Neandertal biology or their phylogenetic status. PMID:24616525
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
A method for analyzing shock coalescence which includes three dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with and has been combined with a nonlinear sonic boom extrapolation program which is based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.
The evolution of σ γP with coherence length
NASA Astrophysics Data System (ADS)
Caldwell, Allen
2016-07-01
Assuming the form σ_γP ∝ l^λ_eff at fixed Q² for the behavior of the virtual-photon proton scattering cross section, where l is the coherence length of the photon fluctuations, it is seen that the extrapolated values of σ_γP for different Q² cross at l ≈ 10^8 fm. It is argued that this behavior is not physical, and that the behavior of the cross sections must change before this coherence length l is reached. This could set the scale for the onset of saturation of the parton densities in the photon, and thereby saturation of parton densities in general.
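The crossing argument above follows from elementary algebra on two power laws: if σ₁(l) = A₁·l^λ₁ and σ₂(l) = A₂·l^λ₂ with λ₂ > λ₁, they meet at l* = (A₁/A₂)^(1/(λ₂−λ₁)). A small sketch with made-up amplitudes and exponents (not the fitted HERA values):

```python
def crossing_length(A1, lam1, A2, lam2):
    """Coherence length where A1*l**lam1 == A2*l**lam2 (requires lam2 != lam1)."""
    return (A1 / A2) ** (1.0 / (lam2 - lam1))

# Hypothetical power-law fits at two Q^2 values:
l_star = crossing_length(A1=30.0, lam1=0.10, A2=0.5, lam2=0.30)
print(l_star)  # coherence length (fm) where the two extrapolations meet
```

With these toy numbers the crossing lands near 10^8 fm, the scale at which the abstract argues the power-law behavior must break down.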
Cross-species extrapolation of chemical effects: Challenges and new insights
One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...
NASA Astrophysics Data System (ADS)
Fang, Jun; Gao, Xingyu; Song, Haifeng; Wang, Han
2016-06-01
Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn-Sham density functional theory. Contrary to the intuition that a higher order of extrapolation possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or stricter SCF convergence criteria. Through example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and provide more choices of the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes; thus the implementation of either of them does not lead to an essential difference in the extrapolation accuracy.
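A toy illustration of the trade-off being discussed: predict the next point of a smooth but slightly noisy trajectory by fitting a polynomial of a chosen order to the last few points. With noise present, higher order amplifies the noise, so the best order is finite, which is the paper's central observation. The trajectory and noise level here are synthetic stand-ins for wavefunction coefficients along an MD run:

```python
import numpy as np

def extrapolate(history, order):
    """Polynomial extrapolation of the next point from the last order+1 points."""
    pts = np.asarray(history[-(order + 1):], dtype=float)
    k = len(pts)
    coeffs = np.polyfit(np.arange(k), pts, k - 1)  # exact-fit polynomial
    return np.polyval(coeffs, k)                   # evaluate one step ahead

rng = np.random.default_rng(0)
t = np.arange(12) * 0.1
traj = np.sin(t) + rng.normal(0, 1e-3, size=t.size)  # noisy smooth trajectory
truth = np.sin(12 * 0.1)
for order in (1, 2, 4):
    print(order, abs(extrapolate(traj, order) - truth))
```

Low order leaves truncation error; high order multiplies the noise by large extrapolation weights, mirroring the accuracy-versus-order curve the paper derives.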
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
Coefficients of Effective Length.
ERIC Educational Resources Information Center
Edwards, Roger H.
1981-01-01
Under certain conditions, a validity Coefficient of Effective Length (CEL) can produce highly misleading results. A modified coefficient is suggested for use when empirical studies indicate that underlying assumptions have been violated. (Author/BW)
ERIC Educational Resources Information Center
Martins, Roberto de A.
1978-01-01
Describes a thought experiment using a general analysis approach with Lorentz transformations to show that the apparent self-contradictions of special relativity concerning the length-paradox are really non-existant. (GA)
[Sonographic leg length measurement].
Holst, A; Thomas, W
1989-03-01
After a brief presentation of the clinical and radiological methods for measuring leg length and leg-length difference, the authors outline a new diagnostic method for measuring leg length and leg-length difference by means of real-time sonography. Postmortem tests and clinical examples show that ultrasound is ideal for determining exactly the lengths of the femur and tibia. The joint gaps at the hip, knee, and upper ankle joint can be demonstrated by means of a 5 MHz linear scanner. A 1 mm thick metal bar on the skin under the scanner is placed at right angles to the longitudinal axis of the body so that the bar can be seen in the centre. A measuring device gives the distances of the joint gaps in cm, so that the differences correspond to the real lengths of the femur and tibia. This standardised measurement is done with a specially developed bearing and measuring device. The results of sonographic measurements on 20 corpses, checked after consecutive dissections, showed complete sonographic measuring accuracy of the total leg length in 75% of cases. The separately considered results for the femur (85%) and tibia (90%) were even better. The maximum sonographic measuring error was 1.0 cm for the femur (in one case) and 0.5 cm for the tibia. All sonographic measurements were performed with the Sonoline SL-1 of the Siemens Company (Erlangen, W-Germany). Thus, sonographic measurement of leg length offers a reliable, non-invasive method that can be repeated as often as necessary and is simply executed.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2652268
Sprouse, Gene D.
2011-07-15
Technological changes have moved publishing to electronic-first publication where the print version has been relegated to simply another display mode. Distribution in HTML and EPUB formats, for example, changes the reading environment and reduces the need for strict pagination. Therefore, in an effort to streamline the calculation of length, the APS journals will no longer use the printed page as the determining factor for length. Instead the journals will now use word counts (or word equivalents for tables, figures, and equations) to establish length; for details please see http://publish.aps.org/authors/length-guide. The title, byline, abstract, acknowledgment, and references will not be included in these counts allowing authors the freedom to appropriately credit coworkers, funding sources, and the previous literature, bringing all relevant references to the attention of readers. This new method for determining length will be easier for authors to calculate in advance, and lead to fewer length-associated revisions in proof, yet still retain the quality of concise communication that is a virtue of short papers.
Area, length and thickness conservation: Dogma or reality?
NASA Astrophysics Data System (ADS)
Moretti, Isabelle; Callot, Jean Paul
2012-08-01
The basic assumption of quantitative structural geology is the preservation of material during deformation. However the hypothesis of volume conservation alone does not help to predict past or future geometries and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be on average valid. However, neither the length nor the thicknesses are preserved. We propose that in real cases, the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.
Melting of “non-magic” argon clusters and extrapolation to the bulk limit
Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke
2014-01-28
The melting of argon clusters Ar_N is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K, in good agreement with the previous value of 88.9 K obtained using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, "Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations," Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict oneself to magic-number cluster sizes in order to obtain good estimates of the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes.
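Cluster melting temperatures are conventionally extrapolated to the bulk by fitting a finite-size law of the form T_m(N) = T_bulk + c·N^(−1/3), the surface-to-volume correction. A sketch of that fit on synthetic cluster data (the sizes match the paper's range, but the temperatures below follow an exact toy law, not the Monte Carlo results):

```python
import numpy as np

def bulk_melting_temperature(sizes, temps):
    """Fit T_m(N) = T_bulk + c * N**(-1/3); return the N -> infinity intercept."""
    x = np.asarray(sizes, float) ** (-1.0 / 3.0)
    c, t_bulk = np.polyfit(x, np.asarray(temps, float), 1)
    return t_bulk

sizes = [55, 147, 309]
temps = [85.9 - 80.0 * n ** (-1.0 / 3.0) for n in sizes]  # exact toy law
print(round(bulk_melting_temperature(sizes, temps), 3))   # -> 85.9
```

The paper's point is that non-magic sizes can enter this fit too; the remaining difficulty is which sizes to include.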
Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?
Feng, Y.; Lin, S.; Huang, S.; Shrestha, S.; Conibeer, G.
2015-03-28
Although Tauc plot extrapolation has been widely adopted for extracting the band-gap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals are formulated on a purely theoretical basis. The result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material-characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate acquisition of the intrinsic band gap, while the unjustified practice of extrapolating any elevated band gap caused by quantum confinement is shown to be incorrect.
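In practice, a Tauc analysis reduces to a linear fit of (αE)^(1/r) against photon energy E and an axis intercept. The sketch below (NumPy) illustrates this with synthetic direct-gap absorption data and an assumed power factor r; the constants and the fitting-window choice are illustrative, not values from the paper.

```python
import numpy as np

def tauc_band_gap(energy_eV, alpha, r):
    """Estimate the optical band gap by linear extrapolation of a Tauc plot.

    Plots (alpha * E)^(1/r) against photon energy E and extrapolates the
    linear region to zero; the energy-axis intercept is taken as Eg.
    r is the power factor: 1/2 for a direct-allowed 3D bulk transition,
    approaching 1 in the 0D limit (the size-dependent value discussed above).
    """
    y = (alpha * energy_eV) ** (1.0 / r)
    # Fit only the upper linear region (here: top half of the signal range).
    mask = y > 0.5 * y.max()
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)
    return -intercept / slope  # x-axis intercept = band-gap estimate

# Synthetic direct-gap absorption data: alpha*E = C*(E - Eg)^r above the gap.
Eg_true, r = 2.0, 0.5
E = np.linspace(1.5, 3.0, 300)
alpha = 1e5 * np.clip(E - Eg_true, 0.0, None) ** r / E

print(round(tauc_band_gap(E, alpha, r), 3))  # recovers ~2.0 eV
```

With noisy measured data, the choice of linear window matters far more than in this idealized case.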
Controversy on toxicological dose-response relationships and low-dose extrapolation of respective risks is often the consequence of misleading data presentation, lack of differentiation between types of response variables, and diverging mechanistic interpretation. In this chapter...
CONSTRAINING THREE-DIMENSIONAL MAGNETIC FIELD EXTRAPOLATIONS USING THE TWIN PERSPECTIVES OF STEREO
Conlon, Paul A.; Gallagher, Peter T.
2010-05-20
The three-dimensional magnetic topology of a solar active region (NOAA 10956) was reconstructed using a linear force-free field extrapolation constrained by the twin perspectives of STEREO. A set of coronal field configurations was initially generated from extrapolations of the photospheric magnetic field observed by the Michelson Doppler Imager on SOHO. Using an EUV intensity-based cost function, the extrapolated field lines most consistent with 171 Å passband images from the Extreme UltraViolet Imager on STEREO were identified. This allowed quantitative constraints to be placed on the twist (α) of the extrapolated field lines, where ∇ × B = αB. Using the constrained values of α, the evolution in time of twist, connectivity, and magnetic energy was then studied. A flux emergence event was found to result in significant changes in the magnetic topology and total magnetic energy of the region.
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...
Document Length Normalization.
ERIC Educational Resources Information Center
Singhal, Amit; And Others
1996-01-01
Describes a study that investigated document retrieval relevance based on document length in an experimental text collection. Topics include term weighting and document ranking, retrieval strategies such as the vector-space cosine match, and a modified technique called the pivoted cosine normalization. (LRW)
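The pivoted cosine normalization mentioned above can be sketched in a few lines: instead of dividing a document's score by its vector length, divide by a value pivoted around the collection average. This toy is an illustration of the idea only, not the SMART system's implementation; the slope default and the raw-tf scoring are assumptions.

```python
import math
from collections import Counter

def pivoted_cosine_score(query_terms, doc_terms, pivot, slope=0.75):
    """Toy vector-space score with pivoted cosine length normalization.

    Standard cosine normalization divides by the document vector length;
    pivoted normalization divides instead by
        (1 - slope) * pivot + slope * length,
    which penalizes long documents less severely and short ones more.
    `pivot` is typically the average vector length over the collection;
    slope=0.75 is only an illustrative default.
    """
    d = Counter(doc_terms)                      # raw term frequencies
    length = math.sqrt(sum(tf * tf for tf in d.values()))
    norm = (1.0 - slope) * pivot + slope * length
    return sum(d[t] for t in query_terms) / norm

docs = [["cat", "sat"], ["cat", "cat", "sat", "mat", "hat", "rat"]]
pivot = sum(math.sqrt(sum(tf * tf for tf in Counter(d).values()))
            for d in docs) / len(docs)
scores = [pivoted_cosine_score(["cat"], d, pivot) for d in docs]
print([round(s, 3) for s in scores])  # the longer document is penalized less
```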
NASA Astrophysics Data System (ADS)
Tay, Kim Gaik; Kek, Sie Long; Abdul-Kahar, Rosmila
2015-05-01
In this paper, we address the limitations of our previous two Richardson extrapolation spreadsheet calculators for numerical differentiation. The new feature of this Richardson extrapolation spreadsheet calculator is that it is fully automated up to any level, using a stopping criterion implemented in VBA. The new version is more flexible because it is controlled by the program, and it reduces computational time and memory usage.
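The automated procedure the abstract describes can be sketched generically: build the Richardson triangular table from central differences and stop once successive diagonal entries agree to a tolerance. This is a standard textbook sketch under those assumptions, not the spreadsheet's actual VBA code.

```python
import math

def richardson_derivative(f, x, h=0.1, tol=1e-10, max_level=10):
    """Numerical differentiation by Richardson extrapolation.

    Builds the classical triangular table from central differences with
    successively halved step sizes, and stops automatically once two
    successive diagonal entries agree to `tol` -- the kind of level-by-level
    stopping criterion the calculator automates.
    """
    central = lambda step: (f(x + step) - f(x - step)) / (2.0 * step)
    table = [[central(h)]]
    for i in range(1, max_level):
        h /= 2.0
        row = [central(h)]
        for j in range(1, i + 1):
            # Each column cancels the next O(h^(2j)) error term.
            row.append(row[j - 1] + (row[j - 1] - table[i - 1][j - 1]) / (4.0 ** j - 1.0))
        table.append(row)
        if abs(row[i] - table[i - 1][i - 1]) < tol:
            return row[i]
    return table[-1][-1]

print(richardson_derivative(math.sin, 1.0))  # close to cos(1) = 0.540302...
```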
How useful are corpus-based methods for extrapolating psycholinguistic variables?
Mandera, Paweł; Keuleers, Emmanuel; Brysbaert, Marc
2015-01-01
Subjective ratings for age of acquisition, concreteness, affective valence, and many other variables are an important element of psycholinguistic research. However, even for well-studied languages, ratings usually cover just a small part of the vocabulary. A possible solution involves using corpora to build a semantic similarity space and applying machine learning techniques to extrapolate existing ratings to previously unrated words. We conduct a systematic comparison of two extrapolation techniques, k-nearest neighbours and random forest, in combination with semantic spaces built using latent semantic analysis, a topic model, a hyperspace analogue to language (HAL)-like model, and a skip-gram model. A variant of the k-nearest neighbours method used with skip-gram word vectors gives the most accurate predictions, but the random forest method has the advantage that it can easily incorporate additional predictors. We evaluate the usefulness of the methods by exploring how much of the human performance in a lexical decision task can be explained by extrapolated ratings for age of acquisition and how precisely we can assign words to discrete categories based on extrapolated ratings. We find that at least some of the extrapolation methods may introduce artefacts into the data and produce results that could lead to different conclusions than would be reached based on the human ratings. From a practical point of view, the usefulness of ratings extrapolated with the described methods may therefore be limited. PMID:25695623
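The k-nearest-neighbours variant compared above amounts to averaging the ratings of a word's closest neighbours in the semantic space. A bare-bones NumPy stand-in is sketched below; in practice the vectors would come from a skip-gram or LSA model, and the toy orthogonal vectors here are purely illustrative.

```python
import numpy as np

def knn_extrapolate(rated_vecs, ratings, unrated_vecs, k=3):
    """Extrapolate subjective ratings (e.g. age of acquisition) to unrated
    words: each unrated word receives the mean rating of its k nearest rated
    neighbours by cosine similarity in the semantic space.
    """
    A = rated_vecs / np.linalg.norm(rated_vecs, axis=1, keepdims=True)
    B = unrated_vecs / np.linalg.norm(unrated_vecs, axis=1, keepdims=True)
    sims = B @ A.T                          # cosine similarity matrix
    nn = np.argsort(-sims, axis=1)[:, :k]   # k most similar rated words
    return np.asarray(ratings, dtype=float)[nn].mean(axis=1)

# Toy example: three rated "words" on orthogonal axes; the unrated word is
# closest to the second axis, so with k=1 it inherits that word's rating.
rated = np.eye(3)
pred = knn_extrapolate(rated, [1.0, 2.0, 3.0], np.array([[0.1, 0.9, 0.2]]), k=1)
print(pred)  # [2.]
```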
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion change. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations for image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial differential equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme, which yields lower quantization error than the previous finite-difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms state-of-the-art video extrapolation methods in terms of image quality and computation cost. PMID:27305677
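The advection step at the core of such physics-based extrapolation transports image quantities along a velocity field. The sketch below shows only the first-order upwind baseline that CIP improves upon, in 1-D with a constant velocity; the grid, velocity, and profile are made-up illustrations.

```python
import numpy as np

def advect_upwind(q, u, dx, dt, steps):
    """Advance the 1-D advection equation dq/dt + u*dq/dx = 0 with a
    first-order upwind scheme (periodic boundary, u > 0 assumed).
    This is the diffusive finite-difference baseline; a high-order CIP
    scheme transports both q and its gradient to cut this error.
    """
    c = u * dt / dx                      # Courant number, needs |c| <= 1
    for _ in range(steps):
        q = q - c * (q - np.roll(q, 1))  # upwind difference for u > 0
    return q

x = np.linspace(0, 1, 100, endpoint=False)
q0 = np.exp(-100 * (x - 0.3) ** 2)       # Gaussian blob centred at 0.3
q1 = advect_upwind(q0, u=1.0, dx=0.01, dt=0.005, steps=40)
# u*dt*steps = 0.2, so the blob centre should move to about x = 0.5,
# with the peak flattened by numerical diffusion.
print(round(x[np.argmax(q1)], 2))
```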
The accuracy of far-field noise obtained by the mathematical extrapolation of near-field noise data
NASA Technical Reports Server (NTRS)
Ahtye, W. F.; Karel, S.
1975-01-01
Results are described of an analytical study of the accuracy and limitations of a technique that permits the mathematical extrapolation of near-field noise data to far-field conditions. The effects of the following variables on the predictive accuracy of the far-field pressure were examined: (1) number of near-field microphones; (2) length of the source distribution; (3) complexity of the near-field and far-field distributions; (4) source-to-microphone distance; and (5) uncertainties in microphone data and imprecision in the location of the near-field microphones. It is shown that the most important parameters describing predictive accuracy are the number of microphones, the ratio of source length to acoustic wavelength (L/λ), and the error in the location of the near-field microphones. If microphone measurement and location errors are not included, then far-field pressures can be accurately predicted up to L/λ values of 15 using approximately 50 microphones. For maximum microphone location errors of ±1 cm, an accuracy of only ±2.5 dB can be attained with approximately 40 microphones at the highest L/λ of 10.
The K+ K+ scattering length from Lattice QCD
Silas Beane; Thomas Luu; Konstantinos Orginos; Assumpta Parreno; Martin Savage; Aaron Torok; Andre Walker-Loud
2007-09-11
The K+K+ scattering length is calculated in fully-dynamical lattice QCD with domain-wall valence quarks on the MILC asqtad-improved gauge configurations with fourth-rooted staggered sea quarks. Three-flavor mixed-action chiral perturbation theory at next-to-leading order, which includes the leading effects of the finite lattice spacing, is used to extrapolate the results of the lattice calculation to the physical value of m_K+/f_K+. We find m_K+ a_K+K+ = −0.352 ± 0.016, where the statistical and systematic errors have been combined in quadrature.
The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James
2007-01-01
The high-altitude aircraft method has been used at NASA GRC since the early 1960s to calibrate solar cell short-circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to zero air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensation for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way, so another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high-altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, or sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectrum in the stratosphere does not change too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25-based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
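The Langley-plot step itself is a straight-line fit: ln(ISC) is linear in air mass, and the intercept at zero air mass gives the AM0 value. A minimal sketch with synthetic flight data follows; the Earth-Sun distance and ozone corrections the method applies are omitted, and the numbers are invented.

```python
import numpy as np

def langley_extrapolate(air_mass, isc):
    """Langley-plot extrapolation of solar-cell short-circuit current to
    zero air mass: fit ln(Isc) against air mass and exponentiate the
    intercept. Distance and ozone corrections are not included here.
    """
    slope, intercept = np.polyfit(air_mass, np.log(isc), 1)
    return np.exp(intercept)   # Isc extrapolated to AM0

# Synthetic flight data: Isc attenuated exponentially with air mass,
# with a hypothetical AM0 value of 0.50 A.
m = np.linspace(0.3, 1.0, 8)
isc = 0.50 * np.exp(-0.12 * m)
print(round(langley_extrapolate(m, isc), 3))  # 0.5
```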
Inoue, S.; Magara, T.; Choe, G. S.; Kim, K. S.; Pandey, V. S.; Shiota, D.; Kusano, K.
2014-01-01
We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm originally proposed by Dedner et al. to effectively clean the numerical errors associated with ∇ · B. Second, a multigrid-type method is implemented in our NLFFF code to perform direct analysis of high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high-resolution force-free field introduced by Low and Lou with better accuracy in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara. We found that NLFFF extrapolation may be less effective in reproducing the upper half of the domain, where some magnetic loops are in a state of continuous upward expansion. However, an inverse-S-shaped structure consisting of the sheared and twisted loops formed in the lower region can be captured well by our NLFFF extrapolation method. We further discuss how well these sheared and twisted fields are reconstructed by quantitatively estimating the magnetic topology and twist.
NASA Technical Reports Server (NTRS)
Lueck, Dale E. (Inventor)
1994-01-01
Payload customers for the Space Shuttle have recently expressed concerns about the possibility of their payloads, awaiting launch at an inactive pad, being contaminated by plume effluents from a shuttle at the adjacent active pad. As part of a study to address these concerns, a ring of inexpensive dosimeters was deployed around the active pad at the inter-pad distance. However, following a launch, dosimeters cannot be read for several hours after the exposure. Consequently, factors such as different substrates, solvent systems, and possible volatilization of HCl from the badges were studied. These observations led to the length-of-stain (LOS) dosimeters of this invention. Commercial passive LOS dosimeters are sensitive only to the extent of detecting 2 ppm to 20 ppm over an 8-hour exposure. To map and quantify the HCl generated by Shuttle launches within a radius of 1.5 miles from the active pad, a sensitivity of 2 ppm HCl in the atmospheric gases over a 5-minute exposure is required. A passive length-of-stain dosimeter has been developed with a sensitivity that renders it capable of detecting a gas in a concentration as low as 2 ppm on an exposure of five minutes.
Larsen, Ross E.
2016-04-12
In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and the first electronic excited state are accurately described as a function of the number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices, one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.
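The flavour of such effective-Hamiltonian extrapolation can be shown with the simplest possible model: a Hückel-like chain with one frontier orbital per repeat unit. This is a generic tight-binding illustration with invented parameters, not the FFOE parameterization itself.

```python
import numpy as np

def frontier_energy(n_units, eps=-5.0, t=-0.4):
    """Highest eigenvalue of a 1-D tight-binding chain with one frontier
    orbital per repeat unit (on-site energy eps, nearest-neighbour
    coupling t; values illustrative, not fitted to any real polymer).
    Analytically the levels are eps + 2*t*cos(k*pi/(N+1)), so the top
    level approaches the band edge eps + 2*|t| as n_units -> infinity:
    a handful of short oligomers fixes eps and t, and the polymer-limit
    energy follows by extrapolation.
    """
    h = (np.diag(np.full(n_units, eps))
         + np.diag(np.full(n_units - 1, t), 1)
         + np.diag(np.full(n_units - 1, t), -1))
    return np.linalg.eigvalsh(h)[-1]   # eigvalsh sorts ascending

for n in (1, 2, 4, 8, 64):
    print(n, round(frontier_energy(n), 4))  # converges toward -4.2
```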
Forced Field Extrapolation of the Magnetic Structure of the Hα fibrils in the Solar Chromosphere
NASA Astrophysics Data System (ADS)
Xiaoshuai, Zhu; Huaning, Wang; Zhanle, Du; Han, He
2016-07-01
We present a careful assessment of forced field extrapolation using the Solar Dynamics Observatory/Helioseismic and Magnetic Imager magnetogram. We use several metrics to check the convergence property. The extrapolated field lines below 3600 km appear to be aligned with most of the Hα fibrils observed by the New Vacuum Solar Telescope. In the region where the magnetic energy is far larger than the potential energy, the field lines computed by forced field extrapolation are still consistent with the patterns of the Hα fibrils, while the nonlinear force-free field results show a large misalignment. The horizontal average of the Lorentz force ratio shows that the forced region, where the force-free assumption fails, can reach heights of 1400–1800 km. The non-force-free state of the chromosphere is also confirmed by recent radiation magnetohydrodynamics simulations.
NASA Astrophysics Data System (ADS)
Sandu, Adrian; Schlick, Tamar
1999-05-01
Numerical resonance artifacts have recently become recognized as a limiting factor to increasing the timestep in multiple-timestep (MTS) biomolecular dynamics simulations. At certain timesteps correlated with internal motions (e.g., 5 fs, around half the period of the fastest bond stretch, Tmin), visible inaccuracies or instabilities can occur. Impulse-MTS schemes are vulnerable to these resonance errors since large energy pulses are introduced to the governing dynamics equations when the slow forces are evaluated. We recently showed that such resonance artifacts can be masked significantly by applying extrapolative splitting to stochastic dynamics. Theoretical and numerical analyses of force-splitting integrators based on the Verlet discretization are reported here for linear models to explain these observations and to suggest how to construct effective integrators for biomolecular dynamics that balance stability with accuracy. Analyses for Newtonian dynamics demonstrate the severe resonance patterns of the Impulse splitting, with this severity worsening with the outer timestep Δt; Constant Extrapolation is generally unstable, but the disturbances do not grow with Δt. Thus, the stochastic extrapolative combination can counteract generic instabilities and largely alleviate resonances with a sufficiently strong Langevin heat-bath coupling (γ), estimates for which are derived here based on the fastest and slowest motion periods. These resonance results generally hold for nonlinear test systems: a water tetramer and a solvated protein. Proposed related approaches such as Extrapolation/Correction and Midpoint Extrapolation work better than Constant Extrapolation only for timesteps less than Tmin/2. An effective extrapolative stochastic approach for biomolecules that balances long-timestep stability with good accuracy for the fast subsystem is then applied to a biomolecule using a three-class partitioning: the medium forces are treated by Midpoint Extrapolation via
Small angle x-ray scattering of chromatin. Radius and mass per unit length depend on linker length.
Williams, S P; Langmore, J P
1991-01-01
Analyses of low-angle x-ray scattering from chromatin, isolated by identical procedures but from different species, indicate that fiber diameter and the number of nucleosomes per unit length increase with the amount of nucleosome linker DNA. Experiments were conducted at physiological ionic strength to obtain parameters reflecting the structure most likely present in living cells. Guinier analyses were performed on scattering from solutions of soluble chromatin from Necturus maculosus erythrocytes (linker length 48 bp), chicken erythrocytes (linker length 64 bp), and Thyone briareus sperm (linker length 87 bp). The results were extrapolated to infinite dilution to eliminate interparticle contributions to the scattering. Cross-sectional radii of gyration were found to be 10.9 +/- 0.5, 12.1 +/- 0.4, and 15.9 +/- 0.5 nm for Necturus, chicken, and Thyone chromatin, respectively, consistent with fiber diameters of 30.8, 34.2, and 45.0 nm. Masses per unit length were found to be 6.9 +/- 0.5, 8.3 +/- 0.6, and 11.8 +/- 1.4 nucleosomes per 10 nm for Necturus, chicken, and Thyone chromatin, respectively. The geometrical consequences of the experimental masses per unit length and radii of gyration are consistent with a conserved interaction among nucleosomes. Cross-linking agents were found to have little effect on fiber external geometry but a significant effect on internal structure. The absolute values of fiber diameter and mass per unit length, and their dependence upon linker length, agree with the predictions of the double-helical crossed-linker model. A compilation of all published x-ray scattering data from the last decade indicates that the relationship between chromatin structure and linker length is consistent with data obtained by other investigators. PMID:2049522
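For a rod-like scatterer such as a chromatin fiber, the cross-sectional Guinier analysis is a linear fit: ln(q·I) against q², with slope −Rc²/2. The sketch below recovers Rc from idealized synthetic data; the infinite-dilution extrapolation step the paper applies is omitted, and the numbers are invented.

```python
import numpy as np

def cross_section_rg(q, intensity):
    """Cross-sectional radius of gyration from the low-angle limit of
    rod-like scattering: ln(q*I) is linear in q^2 with slope -Rc^2/2.
    Valid only in the Guinier regime q*Rc < ~1.
    """
    slope, _ = np.polyfit(q ** 2, np.log(q * intensity), 1)
    return np.sqrt(-2.0 * slope)

# Idealized rod-like scattering curve for an assumed Rc of 12 nm,
# sampled at q (nm^-1) small enough that q*Rc stays below 1.
q = np.linspace(0.01, 0.05, 40)
I = np.exp(-(12.0 ** 2) * q ** 2 / 2) / q
print(round(cross_section_rg(q, I), 2))  # 12.0
```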
NASA Technical Reports Server (NTRS)
Mendelson, A.; Manson, S. S.
1960-01-01
A method using finite-difference recurrence relations is presented for the direct extrapolation of families of curves. The method is illustrated by application to creep-rupture data for several materials, and it is shown that good results can be obtained without the need for any of the usual parameter concepts.
Extrapolation of inhaled particulate toxicity data from experimental animals to humans. Final report
Morgan, D.L.; Steele, V.E.; Hatch, G.E.
1988-02-01
Significant progress has been made over the past three years to develop methodology and assess various tissues and tissue-sensitivity endpoints with the ultimate goal of validating a proposed extrapolation model. This model allows the quantitative extrapolation of inhaled particulate toxicology data from experimental animals to man. Methodology was developed to accurately measure nucleotide levels in small tissue samples, to determine cellular toxicant levels, to isolate tissue or cells at various levels of the respiratory tract, and to culture animal and human cells for identical treatment conditions. The tissues assessed were nasal turbinate epithelium, olfactory epithelium, and alveolar macrophages. Nasal and pulmonary lavage fluids were also studied. These comprehensive, comparative animal-to-human extrapolation studies were unique in that, for the first time, tissue response was measured as a function of actual cellular dose, and common endpoints were used for the same target-cell types in different species. The described cell-sensitivity model could then be used to provide quantitative animal-to-human extrapolation values for inhaled particulates, which will be extremely valuable for human risk assessment.
Jager, Tjalling; Klok, Chris
2010-11-12
The interest of environmental management is in the long-term health of populations and ecosystems. However, toxicity is usually assessed in short-term experiments with individuals. Modelling based on dynamic energy budget (DEB) theory aids the extraction of mechanistic information from the data, which in turn supports educated extrapolation to the population level. To illustrate the use of DEB models in this extrapolation, we analyse a dataset for life cycle toxicity of copper in the earthworm Dendrobaena octaedra. We compare four approaches for the analysis of the toxicity data: no model, a simple DEB model without reserves and maturation (the Kooijman-Metz formulation), a more complex one with static reserves and simplified maturation (as used in the DEBtox software) and a full-scale DEB model (DEB3) with explicit calculation of reserves and maturation. For the population prediction, we compare two simple demographic approaches (discrete time matrix model and continuous time Euler-Lotka equation). In our case, the difference between DEB approaches and population models turned out to be small. However, differences between DEB models increased when extrapolating to more field-relevant conditions. The DEB3 model allows for a completely consistent assessment of toxic effects and therefore greater confidence in extrapolating, but poses greater demands on the available data. PMID:20921051
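One of the two demographic approaches the abstract compares, the continuous-time Euler-Lotka equation, can be sketched directly: solve Σ lₓ·mₓ·exp(−r·x) = 1 for the intrinsic growth rate r by root-finding. The survivorship and fecundity numbers below are made up for illustration, not the Dendrobaena octaedra data.

```python
import math

def euler_lotka_r(survival, fecundity, lo=-2.0, hi=2.0, tol=1e-10):
    """Intrinsic population growth rate r from the continuous-time
    Euler-Lotka equation  sum_x l_x * m_x * exp(-r*x) = 1,
    solved by bisection. survival[x-1] is l_x (survivorship to age
    class x) and fecundity[x-1] is m_x (offspring at age x).
    """
    f = lambda r: sum(l * m * math.exp(-r * x)
                      for x, (l, m) in enumerate(zip(survival, fecundity), 1)) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:       # f decreases with r, so the root is above mid
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

l = [1.0, 0.8, 0.5, 0.2]     # survivorship to age class x (illustrative)
m = [0.0, 1.5, 2.0, 1.0]     # offspring per individual at age x
print(round(euler_lotka_r(l, m), 4))
```

Under a toxicant, one would recompute l and m from the DEB model at each exposure level and track how r declines.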
A NEW CODE FOR NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF THE GLOBAL CORONA
Jiang Chaowei; Feng Xueshang; Xiang Changqing
2012-08-10
Reliable measurements of the solar magnetic field are still restricted to the photosphere, and our present knowledge of the three-dimensional coronal magnetic field is largely based on extrapolations from photospheric magnetograms using physical models, e.g., the nonlinear force-free field (NLFFF) model that is usually adopted. Most of the currently available NLFFF codes have been developed for computational volumes such as a Cartesian box or a spherical wedge, while a global full-sphere extrapolation is still under development. A high-performance global extrapolation code is urgently needed, in particular considering that the Solar Dynamics Observatory can provide full-disk magnetograms with resolution up to 4096 × 4096. In this work, we present a new parallelized code for global NLFFF extrapolation with a photospheric magnetogram as input. The method is based on the magnetohydrodynamics relaxation approach, the CESE-MHD numerical scheme, and a Yin-Yang spherical grid that is used to overcome the polar problems of the standard spherical grid. The code is validated against two full-sphere force-free solutions from Low and Lou's semi-analytic force-free field model. The code shows high accuracy and fast convergence, and can be ready for future practical application if combined with an adaptive mesh refinement technique.
NASA Technical Reports Server (NTRS)
Kahn, M. M. S.; Cahill, J. F.
1983-01-01
It is shown that use of this analytical parameter highlights the distinction between cases dominated by trailing-edge separation and those for which separation at the shock foot is dominant. Use of the parameter and this distinction greatly improves the correlation of separation data and the extrapolation of wind-tunnel data to flight conditions.
Improvement of the Quality of Reconstructed Holographic Images by Extrapolation of Digital Holograms
NASA Astrophysics Data System (ADS)
Dyomin, V. V.; Olshukov, A. S.
2016-02-01
This work investigates noise in reconstructed holographic images that takes the form of a system of fringes parallel to the hologram frame boundaries. A mathematical and physical interpretation is proposed, together with an algorithm for reducing this effect by extrapolation of digital holograms using bicubic splines. The efficiency of the algorithm is estimated and examples of its application are presented.
Route-to-route extrapolation of the toxic potency of MTBE.
Dourson, M L; Felter, S P
1997-12-01
MTBE is a volatile organic compound used as an oxygenating agent in gasoline. Inhalation of fumes while refueling automobiles is the principal route of exposure for humans, and toxicity by this route has been well studied. Oral exposures to MTBE exist as well, primarily due to groundwater contamination from leaking stationary sources, such as underground storage tanks. Assessing the potential public health impacts of oral exposures to MTBE is problematic because drinking-water studies do not exist for MTBE, and the few oil-gavage studies from which a risk assessment could be derived are limited. This paper evaluates the suitability of the MTBE database for conducting an inhalation-route-to-oral-route extrapolation of toxicity. This includes evaluating the similarity of the critical effect between the two routes, quantifiable differences in absorption, distribution, metabolism, and excretion, and the sufficiency of toxicity data by the inhalation route. We conclude that such an extrapolation is appropriate, and we have validated it by finding comparable toxicity between a subchronic oral gavage bioassay and oral doses extrapolated from a subchronic inhalation bioassay. Our results are extended to the 2-year inhalation toxicity study by Chun et al. (1992), in which rats were exposed to 0, 400, 3000, or 8000 ppm MTBE for 6 hr/day, 5 days/week. We estimate the equivalent oral doses to be 0, 130, 940, or 2700 mg/kg/day. These equivalent doses may be useful in conducting noncancer and cancer risk assessments. PMID:9463928
THE MISUSE OF HYDROLOGIC UNIT MAPS FOR EXTRAPOLATION, REPORTING, AND ECOSYSTEM MANAGEMENT
The use of watersheds to conduct research on land-water relationships has expanded recently to include both extrapolation and reporting of water resource information and ecosystem management. More often than not, hydrologic units, and hydrologic unit codes (HUCs) in particular, a...
Buckler, Denny R., Foster L. Mayer, Mark R. Ellersieck and Amha Asfaw. 2003. Evaluation of Minimum Data Requirements for Acute Toxicity Value Extrapolation with Aquatic Organisms. EPA/600/R-03/104. U.S. Environmental Protection Agency, National Health and Environmental Effects Re...
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results across all extrapolation techniques. In the variable-alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple- and quadruple-zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
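A common member of the family of extrapolation procedures the paper compares is the two-point formula that assumes the energy converges as E(X) = E_CBS + A/X³ in the basis-set cardinal number X. The sketch below applies it to invented triple- and quadruple-zeta energies; the X⁻³ form is the usual default, not necessarily the variable-alpha scheme discussed above.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A / X**3. Eliminating A between the two points gives
    E_CBS = (x^3 * E_x - y^3 * E_y) / (x^3 - y^3).
    """
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Hypothetical total energies (hartree) from cc-pVTZ- and cc-pVQZ-quality
# basis sets; the numbers are made up for illustration.
e_tz, e_qz = -397.6512, -397.6841
e_cbs = cbs_two_point(e_qz, e_tz, 4, 3)
print(round(e_cbs, 4))  # lies below the quadruple-zeta energy
```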
Extrapolation of multiplicity distributions in p+p(p̄) collisions to LHC energies
NASA Astrophysics Data System (ADS)
Dash, Ajay Kumar; Mohanty, Bedangadas
2010-02-01
The multiplicity (Nch) and pseudorapidity distributions (dNch/dη) of primary charged particles in p+p collisions at Large Hadron Collider (LHC) energies of √s = 10 and 14 TeV are obtained from extrapolation of existing measurements at lower √s. These distributions are then compared to calculations from the PYTHIA and PHOJET models. The existing measurements are unable to distinguish between a logarithmic and a power-law dependence of the average charged-particle multiplicity (⟨Nch⟩) on √s, and their extrapolations to energies accessible at the LHC give very different values. Assuming that the reasonably good description of inclusive charged-particle multiplicity distributions by a negative binomial distribution (NBD) at lower √s continues to hold at LHC energies, we observe that the logarithmic √s dependence of ⟨Nch⟩ is favored by the models at midrapidity. The dNch/dη versus η distributions for the existing measurements are found to be reasonably well described by a function with three parameters that accounts for the basic features of the distribution: the height at midrapidity, the central-rapidity plateau, and the higher-rapidity fall-off. Extrapolation of these parameters as a function of √s is used to predict the pseudorapidity distributions of charged particles at LHC energies. The dNch/dη calculations from the PYTHIA and PHOJET models are found to be lower than those obtained from the extrapolated dNch/dη versus η distributions over a broad η range.
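The ambiguity between the two functional forms is easy to reproduce: fit ⟨Nch⟩ both as a + b·ln√s and as a power law a·(√s)^b, then evaluate both at 14 TeV. The ⟨Nch⟩ values below are illustrative stand-ins, not the measured lower-energy data, but they show how far apart the two extrapolations land.

```python
import numpy as np

def fit_and_extrapolate(sqrt_s, mean_nch, target):
    """Fit <Nch> vs sqrt(s) with both forms compared above -- logarithmic,
    a + b*ln(sqrt_s), and power law, a*sqrt_s**b (linear in log-log) --
    and return the two extrapolated values at `target` (same units, GeV).
    """
    x = np.log(np.asarray(sqrt_s, dtype=float))
    b_log, a_log = np.polyfit(x, mean_nch, 1)             # logarithmic fit
    b_pow, ln_a_pow = np.polyfit(x, np.log(mean_nch), 1)  # power-law fit
    t = np.log(target)
    return a_log + b_log * t, np.exp(ln_a_pow + b_pow * t)

# Hypothetical <Nch> values at lower sqrt(s) (GeV):
log_pred, pow_pred = fit_and_extrapolate(
    [53, 200, 546, 900, 1800], [12.2, 21.4, 29.1, 34.6, 40.4], 14000)
print(round(log_pred, 1), round(pow_pred, 1))  # power law predicts far more
```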
Technology Transfer Automated Retrieval System (TEKTRAN)
In this study, six extrapolation methods have been compared for their ability to estimate daily crop evapotranspiration (ETd) from instantaneous latent heat flux estimates derived from digital airborne multispectral remote sensing imagery. Data used in this study were collected during an experiment...
Simulation technique for extrapolation curves in 4πβ-γ coincidence counting method using EGS5 code.
Unno, Y; Sanami, T; Sasaki, S; Hagiwara, M; Yunoki, A
2016-03-01
A simulation technique was developed for the extrapolation procedure used in the 4πβ-γ coincidence counting method. Simultaneous emissions of β and γ rays were calculated using the EGS5 code to obtain coincidence counting between the β and γ channels. The simulated extrapolation curves were compared with experimental data obtained from (134)Cs measurements using a plastic scintillator in the β channel. The variation of the extrapolation curves with the γ-gate configuration was investigated with the simulation technique. PMID:26688354
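The extrapolation curve in this method plots the apparent activity Nβ·Nγ/Nc against the β-channel inefficiency (1 − εβ)/εβ, with εβ = Nc/Nγ, and extrapolates to zero inefficiency (εβ → 1) to recover the true disintegration rate. A minimal sketch with ideal synthetic count rates follows; real curves are not exactly flat, which is why their shape (and its dependence on the γ gate) has to be studied.

```python
import numpy as np

def coincidence_activity(n_beta, n_gamma, n_coinc):
    """Efficiency-extrapolation estimate of source activity in the 4πβ-γ
    coincidence method: fit Nβ*Nγ/Nc against (1 - εβ)/εβ, where
    εβ = Nc/Nγ, and take the intercept at zero inefficiency (εβ = 1).
    A straight-line extrapolation is assumed here.
    """
    n_beta, n_gamma, n_coinc = map(np.asarray, (n_beta, n_gamma, n_coinc))
    eff = n_coinc / n_gamma                 # beta-channel efficiency
    x = (1.0 - eff) / eff                   # inefficiency parameter
    y = n_beta * n_gamma / n_coinc          # apparent activity
    slope, intercept = np.polyfit(x, y, 1)
    return intercept                        # activity at 100% efficiency

# Ideal synthetic data for a 1000 Bq source at four beta thresholds;
# in this idealized case the extrapolation curve is exactly flat.
eff = np.array([0.95, 0.90, 0.85, 0.80])
n_gamma, n0 = 400.0, 1000.0
print(round(coincidence_activity(n0 * eff, np.full(4, n_gamma), n_gamma * eff), 1))
```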
Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian
2014-01-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species’ evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external
Accurate Complete Basis Set Extrapolation of Direct Random Phase Correlation Energies.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn
2015-08-11
The direct random phase approximation (dRPA) is a promising way to obtain improvements upon the standard semilocal density functional results in many aspects of computational chemistry. In this paper, we address the slow convergence of the calculated dRPA correlation energy with the increase of the quality and size of the popular Gaussian-type Dunning's correlation consistent aug-cc-pVXZ split valence atomic basis set family. The cardinal number X controls the size of the basis set, and we use X = 3-6 in this study. It is known that even the very expensive X = 6 basis sets lead to large errors for the dRPA correlation energy, and thus complete basis set extrapolation is necessary. We study the basis set convergence of the dRPA correlation energies on a set of 65 hydrocarbon isomers from CH4 to C6H6. We calculate the iterative density fitted dRPA correlation energies using an efficient algorithm based on the CC-like form of the equations using the self-consistent HF orbitals. We test the popular inverse cubic, the optimized exponential, and inverse power formulas for complete basis set extrapolation. We have found that the optimized inverse power based extrapolation delivers the best energies. Further analysis showed that the optimal exponent depends on the molecular structure, and the most efficient two-point energy extrapolations that use X = 3 and 4 can be improved considerably by considering the atomic composition and hybridization states of the atoms in the molecules. Our results also show that the optimized exponents that yield accurate X = 3 and 4 extrapolated dRPA energies for atoms or small molecules might be inaccurate for larger molecules. PMID:26574475
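The two-point inverse-power extrapolation discussed above assumes E(X) = E_CBS + A/X^β; eliminating A between two cardinal numbers gives a closed-form estimate of the basis-set limit. A minimal sketch, with illustrative (not published) energies generated exactly from the model:

```python
def cbs_two_point(e_x, e_y, x, y, beta=3.0):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A / X**beta; beta=3 is the common inverse-cubic
    choice for correlation energies, and the paper optimizes beta."""
    xb, yb = float(x) ** beta, float(y) ** beta
    return (xb * e_x - yb * e_y) / (xb - yb)

# Synthetic check: energies built exactly from the model are extrapolated
# back to the assumed E_CBS (hartree values are invented for illustration).
e_cbs, a = -76.400, 0.350
e3 = e_cbs + a / 3 ** 3   # X = 3 (aug-cc-pVTZ-quality)
e4 = e_cbs + a / 4 ** 3   # X = 4 (aug-cc-pVQZ-quality)
est = cbs_two_point(e4, e3, 4, 3)
```

With real dRPA energies the residual error of the X = 3,4 pair is what the optimized exponent β (dependent on atomic composition and hybridization, per the abstract) is meant to reduce.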
André, M; Malmström, M E; Neretnieks, I
2009-11-01
Permanent storage of spent nuclear fuel in crystalline bedrock is investigated in several countries. For this storage scenario, the host rock is the third and final barrier for radionuclide migration. Sorption reactions in the crystalline rock matrix have strong retardative effects on the transport of radionuclides. To assess the barrier properties of the host rock it is important to have sorption data representative of the undisturbed host rock conditions. Sorption data are in the majority of reported cases determined using crushed rock. Crushing has been shown to increase a rock sample's sorption capacity by creating additional surfaces, so such data must be extrapolated to intact-rock conditions, and there are several problems with such an extrapolation. In studies where this problem is addressed, simple models relating the specific surface area to the particle size are used to extrapolate experimental data to a value representative of the host rock conditions. In this article, we report and compare surface area data of five size fractions of crushed granite and of 100 mm long drillcores as determined by the Brunauer-Emmett-Teller (BET) method using N(2) gas. Special sample holders that could hold large specimens were developed for the BET measurements. Surface area data on rock samples as large as the drillcores have not previously been published. An analysis of these data shows that the extrapolated value for intact rock obtained from measurements on crushed material was larger than the determined specific surface area of the drillcores, in some cases by more than 1000%. Our results show that the use of data from crushed material and current models to extrapolate specific surface areas to host rock conditions can lead to overestimation of sorption capacity. The shortcomings of the extrapolation model are discussed and possible explanations for the deviation from experimental data are proposed. PMID:19781807
Yoshinaga, N.; Arima, A.
2010-04-15
We propose some new, efficient, and practical extrapolation methods to obtain a few low-lying eigenenergies of a large-dimensional Hamiltonian matrix in the nuclear shell model. We obtain those energies at the desired accuracy by extrapolation after diagonalizing small-dimensional submatrices of the sorted Hamiltonian matrix.
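The idea of diagonalizing growing submatrices of a sorted Hamiltonian and extrapolating the lowest eigenvalue can be sketched on a toy matrix. The random symmetric matrix and the Aitken delta-squared step below are stand-ins for illustration, not the authors' actual extrapolation formulas; the eigenvalue-interlacing property guarantees the submatrix estimates approach the full result from above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a shell-model Hamiltonian: random symmetric matrix,
# with basis states reordered by increasing diagonal energy as in the paper.
n = 200
h = rng.normal(size=(n, n))
h = (h + h.T) / 2.0
order = np.argsort(np.diag(h))
h = h[np.ix_(order, order)]

# Lowest eigenvalue of growing leading submatrices of the sorted matrix
dims = [80, 120, 160]
e1, e2, e3 = (np.linalg.eigvalsh(h[:d, :d])[0] for d in dims)

# Aitken delta-squared step, assuming roughly geometric convergence
# (an assumed model; the paper's extrapolation scheme differs in detail).
e_extrap = e3 - (e3 - e2) ** 2 / ((e3 - e2) - (e2 - e1))

e_full = np.linalg.eigvalsh(h)[0]  # exact lowest eigenvalue for comparison
```

By Cauchy interlacing the submatrix ground-state energies decrease monotonically toward the full-matrix value, which is what makes extrapolation from small-dimensional diagonalizations meaningful.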
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...
Agarwal, Amit B; McBride, Ali
2016-08-01
The World Health Organization defines a biosimilar as "a biotherapeutic product which is similar in terms of quality, safety and efficacy to an already licensed reference biotherapeutic product." Biosimilars are biologic medical products that are very distinct from small-molecule generics, as their active substance is a biological agent derived from a living organism. Approval processes are highly regulated, with guidance issued by the European Medicines Agency and US Food and Drug Administration. Approval requires a comparability exercise consisting of extensive analytical and preclinical in vitro and in vivo studies, and confirmatory clinical studies. Extrapolation of biosimilars from their original indication to another is a feasible but highly stringent process reliant on rigorous scientific justification. This review focuses on the processes involved in gaining biosimilar approval and extrapolation and details the comparability exercise undertaken in the European Union between originator erythropoietin-stimulating agent, Eprex(®), and biosimilar, Retacrit™. PMID:27317353
Neutron spectroscopy results of JET high-performance plasmas and extrapolations to DT performance.
Hellesen, C; Andersson Sundén, E; Conroy, S; Ericsson, G; Eriksson, J; Gatu Johnson, M; Weiszflog, M
2010-10-01
In a fusion reactor with high energy gain, the fusion power will be mainly thermonuclear (THN). Measurements of the THN neutron rate are a good performance indicator of a fusion plasma, requiring neutron emission spectroscopy (NES) measurements to distinguish thermal and nonthermal contributions. We report here on recent NES results from JET high-performance plasmas with high fractions (about 65%) of THN emission. The analysis is made with a framework for analyzing NES data, taking into account THN reactions and beam-target reactions. The results are used to extrapolate to the equivalent DT rates. Finally, we discuss the applicability of using NES in the deuterium phase of ITER, both for the extrapolations to ITER’s future DT performance as well as for the measurements of confined energetic ions. PMID:21058461
New method of extrapolation of the resistance of a model planing boat to full size
NASA Technical Reports Server (NTRS)
Sottorf, W
1942-01-01
The previously employed method of extrapolating the total resistance to full size with λ³ (λ = model scale), thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.
Latychevskaia, Tatiana; Fink, Hans-Werner
2015-01-12
Previously reported crystalline structures obtained by an iterative phase retrieval reconstruction of their diffraction patterns seem to be free from displaying any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal, including its atomic defects, can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure that does not rely on prior information about the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to the anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to reveal formerly invisible Bragg peaks. Such an extrapolated wave-field pattern leads to enhanced spatial resolution in the reconstruction.
Chiral extrapolations in 2+1 flavor domain wall fermion simulations
NASA Astrophysics Data System (ADS)
Lin, Meifeng
2006-12-01
Simulations with 2+1 flavors of domain wall fermions provide us with the opportunity to compare the lattice data directly to the predictions of continuum chiral perturbation theory, up to corrections from the residual chiral symmetry breaking, m_res, and O(a) lattice artefacts, which are relatively small for domain wall fermions. We present preliminary results for the pseudoscalar meson masses and decay constants from partially quenched simulations and examine the next-to-leading order chiral extrapolations at small quark masses. The simulations were carried out on two lattice volumes: 16³ × 32 and 24³ × 64, with the lattice spacing fixed at about 0.1 fm. The subtleties of the chiral fits are discussed. We also explore the roles of m_res and O(a) terms in the NLO chiral expansions and their effects on the chiral extrapolations for the pseudoscalar masses and decay constants.
Applications and limitations of interspecies scaling and in vitro extrapolation in pharmacokinetics.
Lin, J H
1998-12-01
The search for new drugs is an extremely time-consuming and costly endeavor. Much of the time and cost are expended on generating data that support the efficacy and safety profiles of the drug. Because of ethical constraints, relevant pharmacological and toxicological assessments must be made in laboratory animals and in in vitro systems before human testing can begin. In support of the efficacy and safety evaluation during drug development, two fundamental challenges facing industrial drug metabolism scientists are (1) how to "scale-up" the pharmacokinetic data from animals to humans and (2) how to extrapolate the in vitro data to the in vivo situation. This review examines the applications and limitations of interspecies scaling and in vitro extrapolation in pharmacokinetics. PMID:9860929
A Spatial Extrapolation Approach to Assess the Impact of Climate Change on Water Resource Systems
NASA Astrophysics Data System (ADS)
Pina, J.; Tilmant, A.; Anctil, F.
2015-12-01
The typical approach to assess climate change impacts on water resources systems is based on a vertical integration/coupling of models: GCMs are run to project future precipitation and temperature, which are then downscaled and used as inputs to hydrologic models whose outputs are processed by water systems models. From a decision-making point of view, this top-down vertical approach presents some challenges. For example, since the range of uncertainty that can be explored with a single GCM is limited, researchers rely on ensembles to enlarge the spread, making the modeling approach even more demanding in terms of computation time and resources. When a particular water system must be analyzed, the question is whether this computationally intensive vertical approach is necessary in the first place, or whether projections available in neighboring systems can be extrapolated to feed the water system model. The latter would be equivalent to a horizontal approach. The proposed study addresses this question by comparing the performance of a water resource system under future climate conditions using the vertical and horizontal approaches. The methodology is illustrated with the hydropower system of the Gatineau River Basin in Quebec, Canada. Vertically obtained hydrologic projections available in neighboring river basins are extrapolated and used as inputs to a stochastic multireservoir optimization model. Two different extrapolation techniques are tested. The first one simply relies on the ratio between drainage areas. The second exploits the covariance structure found in historical flow data throughout the region. The analysis of the simulation results reveals that the annual and weekly energy productions of the system derived from the horizontal approach are statistically equivalent to those obtained with the vertical one, regardless of the extrapolation technique used.
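The first extrapolation technique, drainage-area scaling, is simple enough to state in a few lines. The basin areas and flows below are hypothetical, and the optional exponent is a common refinement rather than something the abstract specifies:

```python
def area_ratio_transfer(q_gauged, area_gauged, area_target, exponent=1.0):
    """Transfer a flow series from a gauged basin to a neighboring target
    basin by drainage-area scaling. exponent=1.0 is the plain area ratio;
    a calibrated exponent is a common regional refinement (assumption)."""
    ratio = (area_target / area_gauged) ** exponent
    return [q * ratio for q in q_gauged]

# Hypothetical weekly inflows (m^3/s) for a 2000 km^2 gauged basin,
# extrapolated to a 3000 km^2 ungauged neighbor.
q_neighbor = area_ratio_transfer([120.0, 90.0, 200.0], 2000.0, 3000.0)
```

The second technique in the abstract, exploiting the regional covariance structure of historical flows, would replace this single ratio with a statistical transfer model.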
NASA Astrophysics Data System (ADS)
Mazas, Franck; Hamm, Luc; Kergadallan, Xavier
2013-04-01
In France, the storm Xynthia of February 27-28th, 2010 reminded engineers and stakeholders of the necessity for an accurate estimation of extreme sea levels for risk assessment in coastal areas. Traditionally, two main approaches exist for the statistical extrapolation of extreme sea levels: the direct approach performs a direct extrapolation on the sea level data, while the indirect approach carries out a separate analysis of the deterministic component (astronomical tide) and the stochastic component (meteorological residual, or surge). When the tidal component is large compared with the surge, the latter approach is known to perform better. In this approach, the statistical extrapolation is performed on the surge component, then the distribution of extreme sea levels is obtained by convolution of the tide and surge distributions. This model is often referred to as the Joint Probability Method. Different models from univariate extreme value theory have been applied in the past for extrapolating extreme surges, in particular the Annual Maxima Method (AMM) and the r-largest method. In this presentation, we apply the Peaks-Over-Threshold (POT) approach for declustering extreme surge events, coupled with the Poisson-GPD model for fitting extreme surge peaks. This methodology allows a sound estimation of both the lower and upper tails of the stochastic distribution, including the estimation of the uncertainties associated with the fit by computing confidence intervals. After convolution with the tide signal, the model yields the distribution for the whole range of possible sea level values. Particular attention is paid to the necessary distinction between sea level values observed at a regular time step, such as hourly, and sea level events, such as those occurring during a storm. Extremal indices for both surges and levels are thus introduced. This methodology will be illustrated with a case study at Brest, France.
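The Poisson-GPD core of the POT approach can be sketched with synthetic surge peaks: a Poisson rate for the number of threshold exceedances per year and a generalized Pareto fit to their magnitudes, combined into a return level. The threshold, record length, and GPD parameters below are assumptions, not Brest values.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)

# Synthetic surge excesses (m) over a 0.5 m threshold for a 30-year record;
# purely illustrative, not the Brest data.
u, n_years = 0.5, 30
excesses = genpareto.rvs(c=0.1, scale=0.15, size=90, random_state=rng)

# Poisson-GPD model: Poisson rate of peaks + GPD for excess magnitudes
lam = len(excesses) / n_years                  # declustered peaks per year
c, loc, scale = genpareto.fit(excesses, floc=0.0)

def return_level(T):
    """Surge level exceeded on average once every T years."""
    return u + (scale / c) * ((lam * T) ** c - 1.0)

rl10, rl100 = return_level(10.0), return_level(100.0)
```

In the full method of the abstract this surge distribution is then convolved with the deterministic tide distribution, and confidence intervals on (c, scale) propagate into the sea-level return levels.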
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-11-11
Conventional microscopic records represent intensity distributions whereby local sample information is mapped onto local information at the detector. In coherent microscopy, the superposition principle of waves holds; field amplitudes are added, not intensities. This non-local representation is spread out in space and interference information combined with wave continuity allows extrapolation beyond the actual detected data. Established resolution criteria are thus circumvented and hidden object details can retrospectively be recovered from just a fraction of an interference pattern.
Landis, W.G.
1995-12-31
One of the central problems in environmental toxicology has been the extrapolation from laboratory tests to the field and from biomonitoring results to ecological impacts. The crossing of the boundary from molecular mechanisms to population impacts has always been difficult. Perhaps the problem in extrapolation is not so much the effect of physical scale as the transition boundary between two different types of systems, organismal and non-organismal. The basic properties of these systems are quite distinct. Organismal systems possess a central core of information, subject to natural selection, that can impose homeostasis (body temperature) or diversity (immune system) upon the constituents of that system. Unless there are changes in the genetic structure of the germ line, impacts to the somatic cells and structure of the organism are erased upon the establishment of a new generation. The integrity of the germplasm means that organismal systems are largely ahistorical. In contrast, non-organismal systems contain no central and inheritable repository of information, analogous to the genome, that serves as the blueprint for an ecological system. Non-organismal systems are historical in the terminology of complex systems. The irreversibility and historical nature of ecological systems has also been observed experimentally. Historical events and the derived heterogeneity in the field must be taken into account when extrapolations are conducted. The genetic structure of the populations, the current spatial distribution of species, and the physical structure of the system must all be taken into account if accurate forecasts are to be made from experimental results.
Interspecies extrapolation for mammals and birds: What's a risk assessor to do?
Norton, S.R.; Rhomberg, L.
1995-12-31
Ecological risk assessors are often faced with the need to extrapolate laboratory data on a particular species to another species of interest. For aquatic organisms, an extensive database is available that has been used to develop empirical intertaxa extrapolation factors. For birds and mammals, however, such a database does not exist. This presentation explores the strengths and limitations of using knowledge of allometric scaling to address issues of interspecies differences in toxic response. In particular, the authors focus on approaches for oral exposure. Most approaches to interspecies extrapolation begin with implicit or explicit assumptions about how to express equivalent doses for different body sizes. It is in these calculations that knowledge of allometric scaling has its greatest applicability. Options include adjusting dose for equivalent organ weights (i.e., approximately BW^1), equivalent surface area (approximately BW^(2/3)), or equivalent metabolic rate (approximately BW^(3/4)). The authors will discuss the basis for each of these options, precedents for their use, and implications for analyzing ecotoxicological data.
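The three scaling options reduce to a single formula with different exponents. A minimal sketch, with a hypothetical dose and body weights chosen purely for illustration:

```python
def equivalent_dose(dose_a, bw_a, bw_b, exponent=0.75):
    """Scale a dose from species A to species B assuming equivalent doses
    scale with BW**exponent: 1 for body weight (organ weights), 2/3 for
    surface area, 3/4 for metabolic rate."""
    return dose_a * (bw_b / bw_a) ** exponent

# Hypothetical example: scale a 5 mg dose from a 0.25 kg test animal
# to a 1 kg species of interest under each scaling assumption.
d_bw = equivalent_dose(5.0, 0.25, 1.0, exponent=1.0)        # body weight
d_sa = equivalent_dose(5.0, 0.25, 1.0, exponent=2.0 / 3.0)  # surface area
d_mr = equivalent_dose(5.0, 0.25, 1.0, exponent=0.75)       # metabolic rate
```

For a fourfold body-weight difference the three assumptions already spread the equivalent dose by roughly 60%, which is why the choice of exponent matters to the risk assessor.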
NASA Astrophysics Data System (ADS)
Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.
We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.
NASA Astrophysics Data System (ADS)
Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.
2013-10-01
We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), which consists of fixing the last observed data point on the LS extrapolation curve; the curve customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data of a longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have enhanced short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of the AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to apparent improvement.
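The core of the technique, an LS fit of polynomial plus sinusoids whose extrapolation is then pinned to the last observed point, can be sketched on a synthetic series. Pinning is implemented here as a constant offset so the curve passes through the final datum; this is one simple reading of "fixing the last observed data point", stated as an assumption, and the series itself is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily EOP-like series: linear trend + annual sinusoid + noise
t = np.arange(1000.0)
y = (0.002 * t + 0.4 * np.sin(2 * np.pi * t / 365.25)
     + rng.normal(0.0, 0.01, t.size))

# LS model: degree-1 polynomial + annual sinusoid
P = 365.25
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t / P), np.cos(2 * np.pi * t / P)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def ls_curve(tt):
    return (coef[0] + coef[1] * tt
            + coef[2] * np.sin(2 * np.pi * tt / P)
            + coef[3] * np.cos(2 * np.pi * tt / P))

# Fix the last observed point: shift the extrapolation curve so it passes
# exactly through the final datum, removing the jump at the data boundary.
offset = y[-1] - ls_curve(t[-1])

def predict(tt):
    return ls_curve(tt) + offset
```

In the paper's full scheme, an ARIMA model on the residuals y − ls_curve(t) then supplies the short-term correction on top of this long-term curve.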
A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system
NASA Astrophysics Data System (ADS)
Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson
2016-06-01
Hydropower reservoirs are artificial water systems and comprise a small proportion of the Earth’s continental territory. However, they play an important role in the aquatic biogeochemistry and may affect the environment negatively. Since the 90s, as a result of research on organic matter decay in manmade flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in the early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a site sample where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
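A power-law relation, as suggested by self-organized criticality, is fit linearly in log-log space and can then be applied to reservoirs where no measurements exist. The emission and area values below are invented placeholders, not the Brazilian data set:

```python
import numpy as np

# Hypothetical (flooded area, emission) pairs for a few measured reservoirs;
# the study's actual values are not reproduced here.
area_km2 = np.array([50.0, 200.0, 800.0, 2400.0])
emission = np.array([12.0, 40.0, 130.0, 330.0])  # e.g. Gg CO2-eq/yr, assumed

# Fit a power law E = c * A**k (linear in log-log space)
k, logc = np.polyfit(np.log(area_km2), np.log(emission), 1)

def extrapolate_emission(area):
    """Emission estimate for an unmeasured reservoir of the given area."""
    return np.exp(logc) * area ** k

e_unmeasured = extrapolate_emission(5000.0)  # hypothetical large reservoir
```

The exponent k is the quantity of physical interest: k < 1 would mean per-area emissions decline with reservoir size, which directly affects how measurements at sampled sites scale up to the full 32 485 km² of flooded area.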
Multi-state extrapolation of UV/Vis absorption spectra with QM/QM hybrid methods
NASA Astrophysics Data System (ADS)
Ren, Sijin; Caricato, Marco
2016-05-01
In this work, we present a simple approach to simulate absorption spectra from hybrid QM/QM calculations. The goal is to obtain reliable spectra for compounds that are too large to be treated efficiently at a high level of theory. The present approach is based on the extrapolation of the entire absorption spectrum obtained by individual subcalculations. Our program locates the main spectral features in each subcalculation, e.g., band peaks and shoulders, and fits them to Gaussian functions. Each Gaussian is then extrapolated with a formula similar to that of ONIOM (Our own N-layered Integrated molecular Orbital molecular Mechanics). However, information about individual excitations is not necessary so that difficult state-matching across subcalculations is avoided. This multi-state extrapolation thus requires relatively low implementation effort while affording maximum flexibility in the choice of methods to be combined in the hybrid approach. The test calculations show the efficacy and robustness of this methodology in reproducing the spectrum computed for the entire molecule at a high level of theory.
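Applied band-by-band to fitted Gaussian parameters, the ONIOM-style combination rule avoids matching individual excited states across subcalculations. A minimal sketch with invented band parameters (the real inputs would be the Gaussians fitted to each subcalculation's spectrum):

```python
def oniom_band(high_model, low_real, low_model):
    """ONIOM-style extrapolation applied to a tuple of band parameters,
    e.g. (center_eV, height):
    param(high, real) ~ param(high, model) + param(low, real) - param(low, model)."""
    return tuple(hm + lr - lm
                 for hm, lr, lm in zip(high_model, low_real, low_model))

# Invented (center_eV, height) of one matched band in each subcalculation:
# high level on the model region, low level on the real and model systems.
band_high_model = (4.80, 0.90)
band_low_real = (4.55, 1.10)
band_low_model = (4.70, 0.85)

center, height = oniom_band(band_high_model, band_low_real, band_low_model)
```

The extrapolated spectrum is then rebuilt by summing the extrapolated Gaussians, so only band features, not individual excitations, need to correspond across the subcalculations.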
Testing magnetofrictional extrapolation with the Titov-Démoulin model of solar active regions
NASA Astrophysics Data System (ADS)
Valori, G.; Kliem, B.; Török, T.; Titov, V. S.
2010-09-01
We examine the nonlinear magnetofrictional extrapolation scheme using the solar active region model by Titov and Démoulin as test field. This model consists of an arched, line-tied current channel held in force-free equilibrium by the potential field of a bipolar flux distribution in the bottom boundary. A modified version with a parabolic current density profile is employed here. We find that the equilibrium is reconstructed with very high accuracy in a representative range of parameter space, using only the vector field in the bottom boundary as input. Structural features formed in the interface between the flux rope and the surrounding arcade - “hyperbolic flux tube” and “bald patch separatrix surface” - are reliably reproduced, as are the flux rope twist and the energy and helicity of the configuration. This demonstrates that force-free fields containing these basic structural elements of solar active regions can be obtained by extrapolation. The influence of the chosen initial condition on the accuracy of reconstruction is also addressed, confirming that the initial field that best matches the external potential field of the model quite naturally leads to the best reconstruction. Extrapolating the magnetogram of a Titov-Démoulin equilibrium in the unstable range of parameter space yields a sequence of two opposing evolutionary phases, which clearly indicate the unstable nature of the configuration: a partial buildup of the flux rope with rising free energy is followed by destruction of the rope, losing most of the free energy.
NASA Astrophysics Data System (ADS)
Zhang, C.; Chuckpaiwong, I.; Liang, S. Y.; Seth, B. B.
2002-07-01
Life testing under nominal operating conditions of mechanical parts with a high mean time between failures (MTBF) often consumes a significant amount of time and resources, rendering such procedures expensive and impractical. As a result, the technology of accelerated life testing (ALT) has been developed for testing at high stress levels (e.g. temperature, voltage, pressure, corrosive media, load, vibration amplitude, etc.) so that the results can be extrapolated, through a physically reasonable statistical model, to obtain estimates of life at lower, normal stress levels or even limit stress levels. However, the issue of prediction accuracy associated with extrapolating data outside the range of testing, or even to a singularity level (no stress), has not yet been fully addressed. In this research, an acceleration factor is introduced into an inverse power law model to estimate the life distribution in terms of time and stresses. In addition, a generalized Eyring model is set up for singularity extrapolation to handle limit-stress-level conditions. The procedure to calibrate the associated shape factors based on the maximum likelihood principle is also formulated. The methodology, implemented with a one-main-step, multiple-step-stress test scheme, is experimentally illustrated with a tapered roller bearing under the stress of environmental corrosion as a case study. The experimental results show that the developed accelerated life test model can effectively evaluate the life probability of a bearing from accelerated testing data when extrapolating to stress levels within or outside the range of testing.
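The inverse power law life-stress relationship mentioned above can be sketched as follows. This is a generic illustration with synthetic data and arbitrary stress units, not the paper's calibrated model or its generalized Eyring variant:

```python
import numpy as np

def fit_inverse_power_law(stress, life):
    """Fit L = A * S**(-n) by linear least squares in log-log space;
    returns (A, n)."""
    slope, intercept = np.polyfit(np.log(stress), np.log(life), 1)
    return np.exp(intercept), -slope

# hypothetical accelerated-test data (arbitrary stress units)
stress = np.array([2.0, 3.0, 4.0, 5.0])
life = 1000.0 * stress**-2.5          # synthetic lives, true n = 2.5

A, n = fit_inverse_power_law(stress, life)
normal_life = A * 1.0**(-n)           # extrapolate to normal stress S = 1
print(round(n, 3), round(normal_life, 1))
```

On noiseless synthetic data the fit recovers the generating exponent exactly; with real test data the log-log fit is the standard first step before maximum-likelihood calibration of the shape factors.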
NASA Astrophysics Data System (ADS)
Steinhausen, Heinz C.; Martín, Rodrigo; den Brok, Dennis; Hullin, Matthias B.; Klein, Reinhard
2015-03-01
Numerous applications in computer graphics and beyond benefit from accurate models for the visual appearance of real-world materials. Data-driven models like photographically acquired bidirectional texture functions (BTFs) suffer from limited sample sizes enforced by the common assumption of far-field illumination. Several materials like leather, structured wallpapers or wood contain structural elements on scales not captured by typical BTF measurements. We propose a method, extending recent research by Steinhausen et al., to extrapolate BTFs for large-scale material samples from a measured and compressed BTF of a small fraction of the material sample, guided by a set of constraints. We propose combining color constraints with surface descriptors similar to normal maps as part of the constraints guiding the extrapolation process. This helps narrow down the search space for suitable ABRDFs per texel to a large extent. To acquire surface descriptors for nearly flat materials, we build upon the idea of photometrically estimating normals. Inspired by recent work by Pan and Skala, we obtain images of the sample in four different rotations with an off-the-shelf flatbed scanner and derive surface curvature information from these. Furthermore, we simplify the extrapolation process by using a pixel-based texture synthesis scheme, reaching computational efficiency similar to texture optimization.
Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.
2011-03-01
In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the course of the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and likewise with the improved AVI method, are greater than 90% below a height of 10 grid points above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, these two methods can give reliable results up to heights of about 15% of the extent of the lower boundary.
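The correlation coefficient between a semi-analytical and an extrapolated field can be computed with a standard global vector-correlation metric; the sketch below assumes one common definition (the cited papers may use a slightly different figure of merit):

```python
import numpy as np

def vector_correlation(B_ref, B_ext):
    """C = sum(B_ref . B_ext) / sqrt(sum|B_ref|^2 * sum|B_ext|^2),
    a common global metric for comparing two vector fields sampled on
    the same grid; arrays have shape (..., 3)."""
    dot = np.sum(B_ref * B_ext)
    norm = np.sqrt(np.sum(B_ref**2) * np.sum(B_ext**2))
    return dot / norm

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 8, 8, 3))   # stand-in for a semi-analytical field
assert np.isclose(vector_correlation(B, B), 1.0)          # identical fields
assert np.isclose(vector_correlation(B, -2.0 * B), -1.0)  # anti-parallel
```

The metric is 1 for identical fields and insensitive to a global positive rescaling, which is why it is usually reported alongside energy-based metrics.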
IMPEDANCE OF FINITE LENGTH RESISTOR
KRINSKY, S.; PODOBEDOV, B.; GLUCKSTERN, R.L.
2005-05-15
We determine the impedance of a cylindrical metal tube (resistor) of radius a, length g, and conductivity σ, attached at each end to perfect conductors of semi-infinite length. Our main interest is in the asymptotic behavior of the impedance at high frequency, k >> 1/a. In the equilibrium regime, ka² << g, the impedance per unit length is accurately described by the well-known result for an infinite-length tube with conductivity σ. In the transient regime, ka² >> g, we derive analytic expressions for the impedance and wakefield.
López-Mondéjar, Rubén; Antón, Anabel; Raidl, Stefan; Ros, Margarita; Pascual, José Antonio
2010-04-01
The species of the genus Trichoderma are used successfully as biocontrol agents against a wide range of phytopathogenic fungi. Among them, Trichoderma harzianum is especially effective. However, to develop more effective fungal biocontrol strategies in organic substrates and soil, tools for monitoring the control agents are required. Real-time PCR is potentially an effective tool for the quantification of fungi in environmental samples. The aim of this study was the development and application of a real-time PCR-based method for the quantification of T. harzianum, and the extrapolation of these data to fungal biomass values. A set of primers and a TaqMan probe for the ITS region of the fungal genome were designed and tested, and amplification was correlated with biomass measurements obtained by optical microscopy and image analysis of the hyphal length of the colony mycelium. A correlation of 0.76 between ITS copies and biomass was obtained. The extrapolation of the quantity of ITS copies, calculated from real-time PCR data, into quantities of fungal biomass potentially provides a more accurate estimate of the quantity of soil fungi. PMID:19897358
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small-FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the averages from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion region were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Motion-based prediction explains the role of tracking in motion extrapolation.
Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U
2013-11-01
During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels above a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated.
Line Lengths and Starch Scores.
ERIC Educational Resources Information Center
Moriarty, Sandra E.
1986-01-01
Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)
Latychevskaia, Tatiana; Fink, Hans-Werner; Chushkin, Yuriy; Zontone, Federico
2015-11-02
Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodical object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. A diffraction pattern is extrapolated beyond the detector area and as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
Spreading lengths of Hermite polynomials
NASA Astrophysics Data System (ADS)
Sánchez-Moreno, P.; Dehesa, J. S.; Manzano, D.; Yáñez, R. J.
2010-03-01
The Renyi, Shannon and Fisher spreading lengths of the classical or hypergeometric orthogonal polynomials, which are quantifiers of their distribution all over the orthogonality interval, are defined and investigated. These information-theoretic measures of the associated Rakhmanov probability density, which are direct measures of the polynomial spreading in the sense of having the same units as the variable, share interesting properties: invariance under translations and reflections, linear scaling and vanishing in the limit that the variable tends towards a given definite value. The expressions of the Renyi and Fisher lengths for the Hermite polynomials are computed in terms of the polynomial degree. The combinatorial multivariable Bell polynomials, which are shown to characterize the finite power of an arbitrary polynomial, play a relevant role for the computation of these information-theoretic lengths. Indeed these polynomials allow us to design an error-free computing approach for the entropic moments (weighted Lq-norms) of Hermite polynomials and subsequently for the Renyi and Tsallis entropies, as well as for the Renyi spreading lengths. Sharp bounds for the Shannon length of these polynomials are also given by means of an information-theoretic-based optimization procedure. Moreover, the existence of a linear correlation between the Shannon length (as well as the second-order Renyi length) and the standard deviation is computationally proved. Finally, the application to the most popular quantum-mechanical prototype system, the harmonic oscillator, is discussed and some relevant asymptotical open issues related to the entropic moments, mentioned previously, are posed.
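As an illustration of the quantities involved, the Shannon spreading length of the Rakhmanov density of a Hermite polynomial can be evaluated numerically; for n = 0 the density is a Gaussian and the length reduces analytically to sqrt(πe). A minimal sketch using simple quadrature on a uniform grid (not the error-free Bell-polynomial approach of the paper):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt, e

def shannon_length(n, x_max=15.0, points=200001):
    """Shannon spreading length exp(S[rho_n]) of the Rakhmanov density
    rho_n(x) = H_n(x)^2 exp(-x^2) / (2^n n! sqrt(pi)) of Hermite H_n."""
    x = np.linspace(-x_max, x_max, points)
    dx = x[1] - x[0]
    c = np.zeros(n + 1)
    c[n] = 1.0                                   # select H_n
    rho = hermval(x, c)**2 * np.exp(-x**2) / (2**n * factorial(n) * sqrt(pi))
    mask = rho > 0                               # avoid log(0) at zeros of H_n
    entropy = -np.sum(rho[mask] * np.log(rho[mask])) * dx
    return np.exp(entropy)

# n = 0 is Gaussian: the Shannon length equals sqrt(pi*e) ~ 2.9223
print(round(shannon_length(0), 4))
```

The spreading length grows with the degree n, consistent with the densities spreading over a wider region of the orthogonality interval.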
When does length cause the word length effect?
Jalbert, Annie; Neath, Ian; Bireta, Tamra J; Surprenant, Aimée M
2011-03-01
The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining evidence for time-based decay. However, previous studies investigating this effect have confounded length with orthographic neighborhood size. In the present study, Experiments 1A and 1B revealed typical effects of length when short and long words were equated on all relevant dimensions previously identified in the literature except for neighborhood size. In Experiment 2, consonant-vowel-consonant (CVC) words with a large orthographic neighborhood were better recalled than were CVC words with a small orthographic neighborhood. In Experiments 3 and 4, using two different sets of stimuli, we showed that when short (1-syllable) and long (3-syllable) items were equated for neighborhood size, the word length effect disappeared. Experiment 5 replicated this with spoken recall. We suggest that the word length effect may be better explained by the differences in linguistic and lexical properties of short and long words rather than by length per se. These results add to the growing literature showing problems for theories of memory that include decay offset by rehearsal as a central feature. PMID:21171805
Length dependence of thermal conductivity by approach-to-equilibrium molecular dynamics
NASA Astrophysics Data System (ADS)
Zaoui, Hayat; Palla, Pier Luca; Cleri, Fabrizio; Lampin, Evelyne
2016-08-01
The length dependence of thermal conductivity over more than two orders of magnitude has been systematically studied for a range of materials, interatomic potentials, and temperatures using the atomistic approach-to-equilibrium molecular dynamics (AEMD) method. By comparing the values of conductivity obtained for a given supercell length and maximum phonon mean free path (MFP), we find that such values are strongly correlated, demonstrating that the AEMD calculation with a supercell of finite length actually probes the thermal conductivity corresponding to a maximum phonon MFP. As a consequence, the less pronounced length dependence usually observed for poorer thermal conductors, such as amorphous silica, is physically justified by their shorter average phonon MFP. Finally, we compare different analytical extrapolations of the conductivity to infinite length and demonstrate that the frequently used Matthiessen rule is not applicable in AEMD. An alternative extrapolation more suitable for transient-time, finite-supercell simulations is derived. This approximation scheme can also be used to classify the quality of different interatomic potential models with respect to their capability of predicting the experimental thermal conductivity.
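The Matthiessen-rule extrapolation that the abstract argues against is, in its conventional form, a linear fit of 1/κ against 1/L. A sketch with synthetic data (hypothetical κ∞ and mean-free-path values; the paper's alternative extrapolation formula is not reproduced here):

```python
import numpy as np

def matthiessen_extrapolate(lengths, kappa):
    """Conventional extrapolation: fit 1/kappa = 1/kappa_inf + c/L and
    return kappa_inf. The AEMD paper argues this rule is not applicable
    there and derives an alternative; this is the baseline it replaces."""
    slope, intercept = np.polyfit(1.0 / lengths, 1.0 / kappa, 1)
    return 1.0 / intercept

# synthetic kappa(L) obeying a simple ballistic-to-diffusive crossover
lengths = np.array([50.0, 100.0, 200.0, 400.0, 800.0])  # nm, hypothetical
kappa_inf, mfp = 150.0, 100.0                           # W/m/K and nm
kappa = kappa_inf / (1.0 + mfp / lengths)
print(round(matthiessen_extrapolate(lengths, kappa), 1))  # recovers ~150.0
```

Because the synthetic model is exactly linear in 1/L, the fit recovers κ∞; real AEMD data deviate from this linearity, which is the paper's point.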
Beresford, Nicholas A; Wood, Michael D; Vives i Batlle, Jordi; Yankovich, Tamara L; Bradshaw, Clare; Willey, Neil
2016-01-01
We will never have data to populate all of the potential radioecological modelling parameters required for wildlife assessments. Therefore, we need robust extrapolation approaches which allow us to make best use of our available knowledge. This paper reviews and, in some cases, develops, tests and validates some of the suggested extrapolation approaches. The concentration ratio (CR_product-diet or CR_wo-diet) is shown to be a generic (trans-species) parameter which should enable the more abundant data for farm animals to be applied to wild species. An allometric model for predicting the biological half-life of radionuclides in vertebrates is further tested and generally shown to perform acceptably. However, to fully exploit allometry we need to understand why some elements do not scale to expected values. For aquatic ecosystems, the relationship between log10(a) (a parameter from the allometric relationship for the organism-water concentration ratio) and log10(Kd) presents a potential opportunity to estimate concentration ratios using Kd values. An alternative approach to the CR_wo-media model proposed for estimating the transfer of radionuclides to freshwater fish is used to satisfactorily predict activity concentrations in fish of different species from three lakes. We recommend that this approach (REML modelling) be further investigated and developed for other radionuclides and across a wider range of organisms and ecosystems. Ecological stoichiometry shows potential as an extrapolation method in radioecology, either from one element to another or from one species to another. Although some of the approaches considered require further development and testing, we demonstrate the potential to significantly improve predictions of radionuclide transfer to wildlife by making better use of available data. PMID:25850783
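The allometric biological half-life model referred to above has the generic form T1/2 = a·M^b. A sketch of fitting and extrapolating it, with purely illustrative synthetic data (the coefficients and species masses are not from the paper):

```python
import numpy as np

def allometric_halflife(mass_kg, a, b):
    """Allometric model T_1/2 = a * M**b (days, M in kg); a and b are
    element-specific -- the values fitted below are illustrative only."""
    return a * mass_kg**b

# fit a and b in log-log space from hypothetical reference-species data
mass = np.array([0.02, 0.3, 5.0, 70.0])   # mouse ... human scale, kg
thalf = 8.0 * mass**0.25                  # synthetic data with b = 0.25
b, log_a = np.polyfit(np.log(mass), np.log(thalf), 1)
a = np.exp(log_a)
# extrapolate to an unmeasured (hypothetical) 1500-kg wildlife species
print(round(allometric_halflife(1500.0, a, b), 1))  # ~ 49.8 days
```

The log-log fit is the standard way such scalings are calibrated; the paper's caveat is that some elements deviate from the expected exponent, which a blind fit like this would miss.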
Neural Extrapolation of Motion for a Ball Rolling Down an Inclined Plane
La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka
2014-01-01
It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion. PMID:24940874
Quantification of sarcomere length distribution in whole muscle frozen sections.
O'Connor, Shawn M; Cheng, Elton J; Young, Kevin W; Ward, Samuel R; Lieber, Richard L
2016-05-15
Laser diffraction (LD) is a valuable tool for measuring sarcomere length (Ls), a major determinant of muscle function. However, this method relies on few measurements per sample that are often extrapolated to whole muscle properties. Currently it is not possible to measure Ls throughout an entire muscle and determine how Ls varies at this scale. To address this issue, we developed an actuated LD scanner for sampling large numbers of sarcomeres in thick whole muscle longitudinal sections. Sections of high optical quality and fixation were produced from tibialis anterior and extensor digitorum longus muscles of Sprague-Dawley rats (N=6). Scans produced two-dimensional Ls maps, capturing >85% of the muscle area per section. Individual Ls measures generated by automatic LD and bright-field microscopy showed excellent agreement over a large Ls range (ICC>0.93). Two-dimensional maps also revealed prominent regional Ls variations across muscles. PMID:26994184
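Laser diffraction recovers Ls from the grating equation nλ = Ls·sin θ, with the sarcomeres acting as the grating. A minimal sketch (HeNe illumination is an assumption; the actuated scanner's actual calibration is more involved):

```python
import math

def sarcomere_length_um(wavelength_nm, theta_deg, order=1):
    """Grating equation n*lambda = Ls*sin(theta): sarcomere length (um)
    from the angle of the n-th order diffraction peak."""
    return order * wavelength_nm / math.sin(math.radians(theta_deg)) / 1000.0

# HeNe laser (632.8 nm, assumed); a first-order peak near 14.6 degrees
# corresponds to ~2.5 um, a typical resting sarcomere length
print(round(sarcomere_length_um(632.8, 14.6), 2))
```

Shorter sarcomeres diffract to larger angles, so scanning the peak angle across a section directly maps the Ls distribution.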
Scotcher, Daniel; Jones, Christopher; Posada, Maria; Galetin, Aleksandra; Rostami-Hodjegan, Amin
2016-09-01
It is envisaged that application of mechanistic models will improve prediction of changes in renal disposition due to drug-drug interactions, genetic polymorphism in enzymes and transporters and/or renal impairment. However, developing and validating mechanistic kidney models is challenging due to the number of processes that may occur (filtration, secretion, reabsorption and metabolism) in this complex organ. Prediction of human renal drug disposition from preclinical species may be hampered by species differences in the expression and activity of drug metabolising enzymes and transporters. A proposed solution is bottom-up prediction of pharmacokinetic parameters based on in vitro-in vivo extrapolation (IVIVE), mediated by recent advances in in vitro experimental techniques and application of relevant scaling factors. This review is a follow-up to the Part I of the report from the 2015 AAPS Annual Meeting and Exhibition (Orlando, FL; 25th-29th October 2015) which focuses on IVIVE and mechanistic prediction of renal drug disposition. It describes the various mechanistic kidney models that may be used to investigate renal drug disposition. Particular attention is given to efforts that have attempted to incorporate elements of IVIVE. In addition, the use of mechanistic models in prediction of renal drug-drug interactions and potential for application in determining suitable adjustment of dose in kidney disease are discussed. The need for suitable clinical pharmacokinetics data for the purposes of delineating mechanistic aspects of kidney models in various scenarios is highlighted. PMID:27506526
On the distance to Cygnus X-1. [extrapolation from nearby stars
NASA Technical Reports Server (NTRS)
Margon, B.; Bowyer, S.; Stone, R. P. S.
1973-01-01
Interstellar extinction of 50 stars in the immediate vicinity of Cyg X-1 is compared with the color excess of HDE 226868. The fact that HDE 226868 has extinction drastically exceeding that of all stars in the field at less than 1 kpc is believed to be in conflict with the model of Trimble et al. (1973), in which the primary is a luminous undermassive star, and with other similar models. Uniform extrapolation of the reddening yields a distance estimate of (2.5 ± 0.4) kpc for Cygnus X-1, in agreement with the spectroscopic modulus for an O9 star.
RF-sheath heat flux estimates on Tore Supra and JET ICRF antennae. Extrapolation to ITER
Colas, L.; Portafaix, C.; Goniche, M.; Jacquet, Ph.
2009-11-26
RF-sheath induced heat loads are identified from infrared thermography measurements on the Tore Supra ITER-like prototype and JET A2 antennae, and are quantified by fitting thermal calculations. Using a simple scaling law assessed experimentally, the estimated heat fluxes are then extrapolated to the ITER ICRF launcher delivering 20 MW of RF power for several plasma scenarios. Parallel heat fluxes up to 6.7 MW/m² are expected very locally on the ITER antenna front face. The role of edge density in operation is stressed as a trade-off between easy RF coupling and reasonable heat loads. Sources of uncertainty in the results are identified.
Temperature-extrapolation method for Implicit Monte Carlo - Radiation hydrodynamics calculations
McClarren, R. G.; Urbatsch, T. J.
2013-07-01
We present a method for implementing temperature extrapolation in Implicit Monte Carlo solutions to radiation hydrodynamics problems. The method is based on a BDF-2 type integration to estimate the change in material temperature over a time step. We present results for radiation-only problems in an infinite medium and for a 2-D Cartesian hohlraum problem. Additionally, radiation hydrodynamics simulations are presented for an RZ hohlraum problem and a related 3-D problem. Our results indicate that improvements in noise and general behavior are possible. We present considerations for future investigations and implementations.
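A two-level temperature extrapolation in the spirit of a BDF-2 history estimate can be sketched as below. This is an illustrative linear extrapolation from the last two time levels, not the authors' exact scheme:

```python
def extrapolate_temperature(T_prev, T_curr, dt_prev, dt_next):
    """Estimate the material temperature at the end of the next time step
    by carrying forward the rate of change of the last two time levels
    (a BDF-2-flavoured history estimate; illustrative only)."""
    return T_curr + dt_next * (T_curr - T_prev) / dt_prev

# error is O(dt^2) on a smooth (here quadratic) temperature history
T = lambda t: 1.0 + 0.5 * t + 0.1 * t * t
dt = 0.1
est = extrapolate_temperature(T(0.0), T(dt), dt, dt)
exact = T(2 * dt)
print(abs(est - exact))  # small, ~2e-3 for this test function
```

In an IMC context such an estimate would feed the opacities and Fleck factor for the upcoming step rather than lagging them at the old temperature.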
Challenges for In vitro to in Vivo Extrapolation of Nanomaterial Dosimetry for Human Risk Assessment
Smith, Jordan N.
2013-11-01
The proliferation in types and uses of nanomaterials in consumer products has led to rapid application of conventional in vitro approaches for hazard identification. Unfortunately, assumptions pertaining to experimental design and interpretation for studies with chemicals are not generally appropriate for nanomaterials. The fate of nanomaterials in cell culture media, cellular dose to nanomaterials, cellular dose to nanomaterial byproducts, and intracellular fate of nanomaterials at the target site of toxicity all must be considered in order to accurately extrapolate in vitro results to reliable predictions of human risk.
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been, and remains, a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy given the limitations of typical research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD, and therefore a 3D Z-R measurement, using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop-size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop-size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit, since multiple sensors in the required spatial arrangement are seldom available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has
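The moment integrals connecting a DSD to Z and R can be sketched as follows, using the widely used Atlas et al. (1973) empirical terminal-velocity fit and a hypothetical Marshall-Palmer DSD; the 3D-DSD algorithm itself adds the conservation-based horizontal and vertical extrapolations described above:

```python
import numpy as np

def z_and_r_from_dsd(D_mm, N_D):
    """Radar reflectivity Z (mm^6 m^-3) and rain rate R (mm/h) as moments
    of a binned DSD. N_D is concentration per bin (m^-3 mm^-1), D_mm the
    bin centres (mm); terminal velocity from the Atlas et al. (1973) fit
    v(D) = 9.65 - 10.3*exp(-0.6*D) (v in m/s)."""
    dD = np.gradient(D_mm)                                # bin widths, mm
    v = 9.65 - 10.3 * np.exp(-0.6 * D_mm)
    Z = np.sum(N_D * D_mm**6 * dD)                        # sixth moment
    R = 6.0e-4 * np.pi * np.sum(N_D * D_mm**3 * v * dD)   # flux -> mm/h
    return Z, R

# hypothetical Marshall-Palmer DSD: N(D) = N0 * exp(-Lam * D)
D = np.linspace(0.1, 6.0, 120)
N0, Lam = 8000.0, 2.3                                     # m^-3 mm^-1, mm^-1
Z, R = z_and_r_from_dsd(D, N0 * np.exp(-Lam * D))
```

Pairing Z and R computed this way over many DSD samples is how a disdrometer-based Z-R relation is usually fitted.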
NASA Astrophysics Data System (ADS)
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres, and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. Movement vectors are calculated from successive weather radar images by tracking radar echoes through correlation. For every new radar image a set of ensemble precipitation fields is collected by using different parameter sets, such as pattern-match size, different time steps, filter methods, and an implementation of the history of tracking vectors and plausibility checks. This method accounts for the uncertainty in rain-field displacement and for different scales in time and space. By manually validating a set of case studies, the best verification method and skill score is defined and implemented in an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To obtain information about the quality and reliability of the extrapolation process, additional information on data quality (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation
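Echo tracking by correlation, the first step described above, can be sketched with FFT-based cross-correlation of two successive radar frames; this minimal stand-in assumes periodic boundaries and omits the ensemble, filtering, and plausibility checks:

```python
import numpy as np

def displacement_by_correlation(frame0, frame1):
    """Integer-pixel displacement between two precipitation fields via
    FFT cross-correlation: the peak of ifft(F1 * conj(F0)) marks the
    shift that best aligns frame0 with frame1."""
    xcorr = np.fft.ifft2(np.fft.fft2(frame1) *
                         np.conj(np.fft.fft2(frame0))).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    n, m = frame0.shape
    # map wrap-around peak indices to signed shifts
    return (int(dy) if dy <= n // 2 else int(dy) - n,
            int(dx) if dx <= m // 2 else int(dx) - m)

# synthetic echo field advected by (3, -2) pixels between scans
rng = np.random.default_rng(1)
f0 = rng.random((64, 64))
f1 = np.roll(f0, shift=(3, -2), axis=(0, 1))
print(displacement_by_correlation(f0, f1))  # -> (3, -2)
```

Applying this per tile rather than per image yields a spatially varying motion-vector field, which is then used to advect the latest analysis forward in time.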
CHARACTERISTICS OF THE H-MODE PEDESTAL AND EXTRAPOLATION TO ITER
OSBORNE,TH; CORDEY,JG; GROEBNER,RJ; HATAE,T; HUBBARD,A; HORTON,LD; KAMADA,Y; KRITZ,A; LAO,LL; LEONARD,AW; LOARTE,A; MAHDAVI,MA; MOSSESSIAN,D; ONJUN,T; OSSENPENKO,M; ROGNLIEN,TD; SAIBNE,G; SNYDER,PB; SUGIHARA,M; SHURYGIN,R; THOMSEN,K; WADE,MR; WILSON,HR; XU,XQ; YATSU,K
2002-11-01
The peeling-ballooning mode model for edge stability, along with a model for the H-mode transport barrier width, is used as an approach to estimating the H-mode pedestal conditions in ITER. Scalings of the barrier width based on ion-orbit loss, neutral penetration, and turbulence suppression are examined, and empirical scalings of the barrier width are presented. An empirical scaling for the pedestal β is derived based on ideas from stability and the empirical width scaling. The impact of the stability model and other factors on ELM size is discussed.
MEGA16 - Computer program for analysis and extrapolation of stress-rupture data
NASA Technical Reports Server (NTRS)
Ensign, C. R.
1981-01-01
The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines and runs on an IBM 370 time-sharing system.
A study of alternative schemes for extrapolation of secular variation at observatories
Alldredge, L.R.
1976-01-01
The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. © 1976.
Model of a realistic InP surface quantum dot extrapolated from atomic force microscopy results.
Barettin, Daniele; De Angelis, Roberta; Prosposito, Paolo; Auf der Maur, Matthias; Casalboni, Mauro; Pecchia, Alessandro
2014-05-16
We report on numerical simulations of a zincblende InP surface quantum dot (QD) on an In₀.₄₈Ga₀.₅₂P buffer. Our model is strictly based on experimental structures, since we extrapolated a three-dimensional dot directly from atomic force microscopy results. Continuum electromechanical, k·p bandstructure, and optical calculations are presented for this realistic structure, together with benchmark calculations for a lens-shaped QD with the same radius and height as the extrapolated dot. Interesting similarities and differences emerge when comparing the results obtained with the two structures, leading to the conclusion that the use of a more realistic structure can provide significant improvements in the modeling of QDs. In fact, the remarkable splitting of the electron p-like levels of the extrapolated dot seems to prove that a realistic experimental structure can reproduce the right symmetry and a correct splitting, usually given by atomistic calculations, even within the multiband k·p approach. Moreover, the energy levels and the symmetry of the holes are strongly dependent on the shape of the dot. In particular, as far as we know, their wave-function symmetries do not seem to resemble any results previously obtained with simulations of ideal zincblende structures, such as lenses or truncated pyramids. The magnitude of the oscillator strengths is also strongly dependent on the shape of the dot, with a lower intensity for the extrapolated dot, especially for the transition between the electron and hole ground states, as a result of a relevant reduction of the wave-function overlap. We also compare an experimental photoluminescence spectrum measured on a homogeneous sample containing about 60 dots with a numerical ensemble average derived from single-dot calculations. The broader energy range of the numerical spectrum motivated us to perform further verifications, which have clarified some aspects of the experimental
Model of a realistic InP surface quantum dot extrapolated from atomic force microscopy results
NASA Astrophysics Data System (ADS)
Barettin, Daniele; De Angelis, Roberta; Prosposito, Paolo; Auf der Maur, Matthias; Casalboni, Mauro; Pecchia, Alessandro
2014-05-01
We report on numerical simulations of a zincblende InP surface quantum dot (QD) on an In₀.₄₈Ga₀.₅₂P buffer. Our model is strictly based on experimental structures, since we extrapolated a three-dimensional dot directly from atomic force microscopy results. Continuum electromechanical, k·p bandstructure, and optical calculations are presented for this realistic structure, together with benchmark calculations for a lens-shaped QD with the same radius and height as the extrapolated dot. Interesting similarities and differences emerge when comparing the results obtained with the two structures, leading to the conclusion that the use of a more realistic structure can provide significant improvements in the modeling of QDs. In fact, the remarkable splitting of the electron p-like levels of the extrapolated dot seems to prove that a realistic experimental structure can reproduce the right symmetry and a correct splitting, usually given by atomistic calculations, even within the multiband k·p approach. Moreover, the energy levels and the symmetry of the holes are strongly dependent on the shape of the dot. In particular, as far as we know, their wave-function symmetries do not seem to resemble any results previously obtained with simulations of ideal zincblende structures, such as lenses or truncated pyramids. The magnitude of the oscillator strengths is also strongly dependent on the shape of the dot, with a lower intensity for the extrapolated dot, especially for the transition between the electron and hole ground states, as a result of a relevant reduction of the wave-function overlap. We also compare an experimental photoluminescence spectrum measured on a homogeneous sample containing about 60 dots with a numerical ensemble average derived from single-dot calculations. The broader energy range of the numerical spectrum motivated us to perform further
Wouters, Sebastian; Limacher, Peter A; Van Neck, Dimitri; Ayers, Paul W
2012-04-01
We have implemented the sweep algorithm for the variational optimization of SU(2) ⊗ U(1) (spin and particle number) invariant matrix product states (MPS) for general spin- and particle-number-invariant fermionic Hamiltonians. This class includes non-relativistic quantum chemical systems within the Born-Oppenheimer approximation. High-accuracy ab initio finite-field results for the longitudinal static polarizabilities and second hyperpolarizabilities of one-dimensional hydrogen chains are presented. This allows one to assess the performance of other quantum chemical methods. For small basis sets, MPS calculations in the saturation regime of the optical response properties can be performed. These results are extrapolated to the thermodynamic limit. PMID:22482543
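The final step above, extrapolating finite-chain results to the thermodynamic limit, is commonly done by fitting a per-site property against inverse chain length and reading off the intercept. The sketch below assumes a polynomial-in-1/N form for illustration; the paper's actual fit form may differ, and the function name is our own.

```python
import numpy as np

def extrapolate_tdl(chain_lengths, per_site_values, order=2):
    """Fit a per-site property as a polynomial in 1/N and return the
    intercept, i.e. the thermodynamic-limit (N -> infinity) estimate.
    The quadratic-in-1/N form is an illustrative assumption."""
    x = 1.0 / np.asarray(chain_lengths, dtype=float)
    coeffs = np.polyfit(x, per_site_values, order)
    return coeffs[-1]  # polynomial value at 1/N = 0
```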
Convergence and stability properties of minimal polynomial and reduced rank extrapolation algorithms
NASA Technical Reports Server (NTRS)
Sidi, A.
1983-01-01
The minimal polynomial and reduced rank extrapolation algorithms are two convergence-acceleration methods for sequences of vectors. In a recent survey these methods were tested and compared with the scalar, vector, and topological epsilon algorithms, and were observed to be more efficient than the latter. It was also observed that the two methods have similar convergence properties. The convergence and stability properties of these methods are analyzed, and the performance of the acceleration methods when applied to a class of vector sequences that includes those obtained from systems of linear equations by matrix iterative methods is discussed.
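A minimal numpy sketch of minimal polynomial extrapolation (MPE), one of the two methods analyzed above: the coefficients of the (approximate) minimal polynomial are obtained from a least-squares problem on first differences, and the limit is estimated as the corresponding weighted combination of iterates. For a sequence generated by a linear fixed-point iteration, MPE recovers the fixed point exactly once enough iterates are available. The function name is our own.

```python
import numpy as np

def mpe(iterates):
    """Minimal polynomial extrapolation for a vector sequence
    x_0, ..., x_{k+1} (the columns of `iterates`).  Solves a
    least-squares problem on the first differences u_j = x_{j+1} - x_j
    for the minimal-polynomial coefficients c_0, ..., c_{k-1} (with
    c_k = 1), then returns s = sum_j c_j x_j / sum_j c_j."""
    X = np.asarray(iterates, dtype=float)
    U = np.diff(X, axis=1)                                 # u_0, ..., u_k
    c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
    gamma = np.append(c, 1.0)
    return X[:, :gamma.size] @ gamma / gamma.sum()
```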
Sommerfeld, Thomas; Ehara, Masahiro
2015-01-21
The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound-state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires, at least in principle, that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential remains unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the ²Πᵤ resonance of CO₂⁻, and in both cases the extrapolation results are compared to independently computed resonance parameters: from complex scaling for the model, and from complex-absorbing-potential calculations for CO₂⁻. It is important to emphasize that for both the model and CO₂⁻, all three sets of results have, respectively, been obtained with the same electronic structure method and basis set, so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and valence basis sets on the computed resonance parameters.
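The extrapolation step of analytic continuation of the coupling constant can be sketched generically: bound-state momenta are fitted as a polynomial in the square root of the shifted stabilizing parameter, then evaluated at vanishing stabilization, where the argument becomes imaginary and the momentum complex. This is a schematic illustration under assumed units (E = -kappa^2), not the paper's implementation; names, polynomial order, and sign conventions are our own choices.

```python
import numpy as np

def accc_resonance(lambdas, energies, branch_point, order=2):
    """Analytic continuation of the coupling constant (generic sketch).
    Bound-state energies E(lambda) < 0, computed for stabilizing
    strengths lambda > branch_point, are converted to momenta
    kappa = sqrt(-E); kappa is fitted as a polynomial in
    x = sqrt(lambda - branch_point) and continued to lambda = 0
    (where x is purely imaginary), yielding a complex momentum and
    hence a complex resonance energy E = -kappa**2."""
    x = np.sqrt(np.asarray(lambdas) - branch_point)
    kappa = np.sqrt(-np.asarray(energies))
    coeffs = np.polyfit(x, kappa, order)
    k0 = np.polyval(coeffs, 1j * np.sqrt(branch_point))  # x at lambda = 0
    return -k0 ** 2  # real part: position; imaginary part: width-related
```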
Mueller, David S.
2013-01-01
profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user with tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to the available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
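The power-law step described above, deriving an exponent from normalized profile data and using it to compute the unmeasured discharge, can be sketched as follows. This is an illustrative log-log least-squares fit with an analytic integral for the unmeasured zones, not the extrap program itself; function names are our own.

```python
import numpy as np

def fit_power_law(z_over_depth, speed):
    """Fit v = a * (z/D)**b to normalized profile data by least squares
    in log-log space (the classic 1/6 power law corresponds to b = 1/6)."""
    b, log_a = np.polyfit(np.log(z_over_depth), np.log(speed), 1)
    return np.exp(log_a), b

def unmeasured_discharge(a, b, z_lo, z_hi):
    """Analytic integral of the fitted power law between two normalized
    depths, e.g. the unmeasured top or bottom zone of an ADCP profile
    (per unit width; multiply by width for a discharge)."""
    return a * (z_hi ** (b + 1) - z_lo ** (b + 1)) / (b + 1)
```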
Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...
Salcido, Robert; Rubin, Emily
2016-06-01
Auditors in Medicare overpayment or False Claims Act (FCA) cases often use statistical extrapolation to estimate a health-care provider's total liability from a small sample of audited claims. Courts treat statistical extrapolation differently depending on the context. They generally afford the government substantial discretion in using statistical extrapolation in overpayment cases. By contrast, courts typically more closely scrutinize the use of extrapolation in FCA cases involving multiple damages and civil penalties to ensure that the sample truly reflects the entire universe of claims and that the extrapolation rests on a sound methodological foundation. In recent cases, however, multiple courts have allowed the use of extrapolation in FCA cases. When auditors attempt to use statistical extrapolation, providers should closely inspect the sample and challenge the extrapolation when any reasonable argument exists that the sample does not constitute a reliable or accurate representation of all the provider's claims. PMID:26896700
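The mechanics of the sample-to-universe extrapolation at issue above can be illustrated with a toy calculation. This is a schematic normal-approximation sketch, not the prescribed methodology of any actual audit program; real audits follow formally specified statistical sampling plans, and the function name and z-value choice are our own.

```python
import math

def extrapolated_overpayment(sample_overpayments, universe_size, z=1.645):
    """Illustrative sample extrapolation: project the mean overpayment
    in an audited sample to the full universe of claims, and report a
    one-sided lower confidence bound (normal approximation with the
    sample standard deviation; z = 1.645 corresponds to a nominal 90%
    one-sided level)."""
    n = len(sample_overpayments)
    mean = sum(sample_overpayments) / n
    var = sum((x - mean) ** 2 for x in sample_overpayments) / (n - 1)
    point = universe_size * mean
    se_total = universe_size * math.sqrt(var / n)
    return point, point - z * se_total
```

The gap between the point estimate and the lower bound is exactly the kind of sampling uncertainty a provider can probe when challenging an extrapolation.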
Spackman, Peter R.; Karton, Amir
2015-05-15
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete-basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis-set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set-limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set-limit CCSD atomization energies of larger molecules, including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
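The two-point E(L) = E_CBS + B/L^α formula above has a closed-form solution for the basis-set limit given two energies. A minimal sketch (alpha = 3 is a common default for correlation energies; the paper's system-dependent scheme instead derives alpha per system from MP2 data):

```python
def cbs_two_point(e_small, l_small, e_large, l_large, alpha=3.0):
    """Two-point basis-set extrapolation assuming E(L) = E_CBS + B/L**alpha.
    Solving the pair of equations for the two cardinal numbers L gives
    E_CBS = (E2*L2^a - E1*L1^a) / (L2^a - L1^a)."""
    ps, pl = l_small ** alpha, l_large ** alpha
    return (e_large * pl - e_small * ps) / (pl - ps)
```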
An analytic formula for the extrapolated range of electrons in condensed materials
NASA Astrophysics Data System (ADS)
Tabata, Tatsuo; Andreo, Pedro; Shinoda, Kunihiko
1996-12-01
A single analytic formula for the extrapolated range r_ex of electrons in condensed materials of atomic numbers from 4 to 92 is given. It has the form of the product of the continuous-slowing-down-approximation (CSDA) range r_0 and a factor f_d related to multiple-scattering detours. The factor f_d is expressed as a function of the incident electron energy T_0 and the atomic number Z of the medium. Values of adjustable parameters in f_d have been optimized for data on the ratio r_ex/r_0, in which the Monte Carlo evaluated values of Tabata et al. [Nucl. Instr. Meth. B 95 (1995) 289] (from 0.1 to 100 MeV) and experimental data collected from the literature (from 1 keV to 0.1 MeV) for r_ex have been used together with NIST-database values of r_0. For r_0 in the extrapolated-range formula, accurate database values or an approximate analytic expression developed as a function of T_0, Z, atomic weight A and mean excitation energy I of the medium can be used. The maximum deviation of the resultant formula from the Monte Carlo data is about 2% for either option of r_0. The determination of the expression for f_d at energies below 0.1 MeV is tentative. By using an effective atomic number and atomic weight, the formula can also be applied to light compounds and mixtures.
Comparison of precipitation nowcasting by extrapolation and statistical-advection methods
NASA Astrophysics Data System (ADS)
Sokol, Zbynek; Kitzmiller, David; Pesice, Petr; Mejsnar, Jan
2013-04-01
Two models for nowcasting 1-h, 2-h and 3-h precipitation in the warm part of the year were evaluated. The first model was based on the extrapolation of observed radar reflectivity (COTREC-IPA), and the second combined the extrapolation with a statistical model (SAMR). The accuracy of the model forecasts was evaluated on independent data using the standard measures of root-mean-squared error, absolute error, bias and correlation coefficient, as well as by the spatial verification methods Fractions Skill Score and the SAL technique. The results show that SAMR yields slightly better forecasts during the afternoon period. On the other hand, little or no improvement is realized at night and in the very early morning. COTREC-IPA and SAMR forecast a very similar horizontal structure of precipitation patterns, but the model forecasts differ in values. Like COTREC-IPA, SAMR is not able to develop new storms or significantly intensify already existing storms. This is caused by the large uncertainty regarding future development. On the other hand, the SAMR model can reliably predict decreases in precipitation intensity.
Image extrapolation for photo stitching using nonlocal patch-based inpainting
NASA Astrophysics Data System (ADS)
Voronin, V. V.; Marchuk, V. I.; Sherstobitov, A. I.; Semenischev, E. A.; Agaian, S.; Egiazarian, K.
2014-05-01
Image alignment and mosaicing are usually performed on a set of overlapping images, using features in the area of overlap for seamless stitching. In many cases such images have different sizes and shapes, so the panorama must either be cropped or extended by image extrapolation. This paper focuses on a novel image inpainting method based on a modified exemplar-based technique. The basic idea is to find an example (patch) from the image using local binary patterns and to replace missing ('lost') data with it. We propose to use multiple criteria for the patch-similarity search, since existing exemplar-based methods often produce unsatisfactory results in practice. The criteria for finding the best match comprise several terms, including the Euclidean metric for pixel brightness and the chi-squared histogram-matching distance for local binary patterns. The combined use of textural-geometric characteristics together with color information gives a more informative description of the patches. In particular, we show how to apply this strategy to image extrapolation for photo stitching. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology.
Conn, Paul B; Johnson, Devin S; Boveng, Peter L
2015-01-01
Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models). PMID:26496358
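For the ordinary linear-model case from which the gIVH generalizes, Cook's hull check reduces to a leverage comparison: a prediction point lies outside the IVH when its leverage exceeds the maximum leverage of the observed design points. The sketch below implements only this linear special case (the paper's gIVH works with predictive variance in generalized models); the function name is our own.

```python
import numpy as np

def givh_flags(X_obs, X_pred):
    """Flag prediction points outside the independent variable hull of
    a linear model: a point x is treated as an extrapolation if its
    leverage x (X'X)^{-1} x' exceeds the maximum leverage among the
    observed design points."""
    xtx_inv = np.linalg.inv(X_obs.T @ X_obs)
    lev_obs = np.einsum('ij,jk,ik->i', X_obs, xtx_inv, X_obs)
    lev_pred = np.einsum('ij,jk,ik->i', X_pred, xtx_inv, X_pred)
    return lev_pred > lev_obs.max()
```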
Downscaling and extrapolating dynamic seasonal marine forecasts for coastal ocean users
NASA Astrophysics Data System (ADS)
Vanhatalo, Jarno; Hobday, Alistair J.; Little, L. Richard; Spillman, Claire M.
2016-04-01
Marine weather and climate forecasts are essential for planning strategies and activities on a range of temporal and spatial scales. However, seasonal dynamical forecast models, which provide forecasts on a monthly scale, often have low offshore resolution and limited information for inshore coastal areas. Hence, there is increasing demand for methods capable of fine-scale seasonal forecasts covering coastal waters. Here, we have developed a method to combine observational data with dynamical forecasts from POAMA (Predictive Ocean Atmosphere Model for Australia; Australian Bureau of Meteorology) in order to produce seasonal downscaled, corrected forecasts, extrapolated to include inshore regions that POAMA does not cover. We demonstrate the method by forecasting the monthly sea surface temperature anomalies in the Great Australian Bight (GAB) region. The resolution of POAMA in the GAB is approximately 2° × 1° (lon. × lat.) and the resolution of our downscaled forecast is approximately 1° × 0.25°. We use data and model hindcasts for the period 1994-2010 for forecast validation. The predictive performance of our statistical downscaling model improves on the original POAMA forecast. Additionally, this statistical downscaling model extrapolates forecasts to coastal regions not covered by POAMA, and its forecasts are probabilistic, which allows straightforward assessment of uncertainty in downscaling and prediction. A range of marine users will benefit from access to downscaled and nearshore forecasts at seasonal timescales.
Molecular Dynamics/Order Parameter eXtrapolation (MD/OPX) for Bionanosystem Simulations
Miao, Yinglong; Ortoleva, Peter J.
2012-01-01
A multiscale approach, Molecular Dynamics/Order Parameter eXtrapolation (MD/OPX), to the all-atom simulation of large bionanosystems is presented. The approach starts with the introduction of a set of order parameters (OPs), automatically generated with orthogonal polynomials, to characterize the nanoscale features of bionanosystems. The OPs are shown to evolve slowly via Newton's equations, and the all-atom multiscale analysis (AMA) developed earlier [1] demonstrates the existence of their stochastic dynamics, which serves as the justification for our MD/OPX approach. In MD/OPX, a short MD run estimates the rate of change of the OPs, which is then used to extrapolate the state of the system over times much longer than the 10⁻¹⁴ s timescale of fast atomic vibrations and collisions. The approach is implemented in NAMD and demonstrated on cowpea chlorotic mottle virus (CCMV) capsid structural transitions (STs). It greatly accelerates the MD code, and its underlying all-atom description of the nanosystems enables the use of a universal interatomic force field, avoiding the recalibration with each new application needed for coarse-grained models. PMID:18636559
Risk extrapolation for chlorinated methanes as promoters vs initiators of multistage carcinogenesis
Bogen, K. T.
1990-01-01
Cell-kinetic multistage (CKM) models account for clonal growth of intermediate, premalignant cell populations and thus distinguish somatic mutations and cell proliferation as separate processes that may influence observed rates of tumor formation. This paper illustrates the application of two versions of a two-stage CKM model for extrapolating the cancer risk potentially associated with exposure to carbon tetrachloride, chloroform, and dichloromethane, three suspect human carcinogens commonly present in trace amounts in groundwater supplies used for domestic consumption. For each compound, the models were used to calculate a daily oral 'virtually safe dose' (VSD) to humans associated with a cancer risk of 10⁻⁶, extrapolated from bioassay data on increased hepatocellular tumor incidence in B6C3F1 mice. Exposure-induced bioassay tumor responses were first assumed to be due solely to 'promotion', in accordance with the majority of available data on in vivo genotoxicity for these compounds. Available data were used to model the dose response for induced hepatocellular proliferation in mice for each compound. Physiologically based pharmacokinetic models were used to predict the hepatotoxic effective dose as a function of parent-compound administered dose in mice and in humans. Key issues and uncertainties in applying CKM models to risk assessment for cancer promoters are discussed.
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including a simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating steady and unsteady convection-diffusion systems with a flat interface and a steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that a cylinder with a larger heat capacity releases more heat energy into the fluid and cools down more slowly, while enhanced heat conduction inside the cylinder facilitates the cooling of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems. PMID:25871245
Persistence Length of Stable Microtubules
NASA Astrophysics Data System (ADS)
Hawkins, Taviare; Mirigian, Matthew; Yasar, M. Selcuk; Ross, Jennifer
2011-03-01
Microtubules are a vital component of the cytoskeleton. As the most rigid of the cytoskeletal filaments, they give shape and support to the cell. They are also essential for intracellular traffic, providing the roadways on which organelles are transported, and they are required to reorganize during cellular division. To perform its function in the cell, the microtubule must be rigid yet dynamic. We are interested in how the mechanical properties of stable microtubules change over time. Some 'stable' microtubules of the cell are recycled only after days, such as those in the axons of neurons or in cilia and flagella. We measured the persistence length of freely fluctuating taxol-stabilized microtubules over the span of a week and analyzed them via Fourier decomposition. As measured on a daily basis, the persistence length is independent of the contour length. Over the span of the week, however, both the accuracy of the measurement and the persistence length vary. We also studied how fluorescently labeling the microtubule affects the persistence length and observed that a higher labeling ratio corresponds to greater flexibility. National Science Foundation Grant No. 0928540 to JLR.
Does length or neighborhood size cause the word length effect?
Jalbert, Annie; Neath, Ian; Surprenant, Aimée M
2011-10-01
Jalbert, Neath, Bireta, and Surprenant (2011) suggested that past demonstrations of the word length effect, the finding that words with fewer syllables are recalled better than words with more syllables, included a confound: The short words had more orthographic neighbors than the long words. The experiments reported here test two predictions that would follow if neighborhood size is a more important factor than word length. In Experiment 1, we found that concurrent articulation removed the effect of neighborhood size, just as it removes the effect of word length. Experiment 2 demonstrated that this pattern is also found with nonwords. For Experiment 3, we factorially manipulated length and neighborhood size, and found only effects of the latter. These results are problematic for any theory of memory that includes decay offset by rehearsal, but they are consistent with accounts that include a redintegrative stage that is susceptible to disruption by noise. The results also confirm the importance of lexical and linguistic factors on memory tasks thought to tap short-term memory. PMID:21461875
When Does Length Cause the Word Length Effect?
ERIC Educational Resources Information Center
Jalbert, Annie; Neath, Ian; Bireta, Tamra J.; Surprenant, Aimee M.
2011-01-01
The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining…
Continuously variable focal length lens
Adams, Bernhard W; Chollet, Matthieu C
2013-12-17
A material with a low atomic number, such as beryllium (Z=4), preferably in crystal form, provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned cylindrical slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.
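The inverse dependence of focal length on the number of refracting elements is the same scaling as in a standard compound refractive x-ray lens, where the thin-lens estimate is f ≈ R/(2Nδ). The sketch below uses that textbook formula for illustration only; the patented slot geometry differs in detail, and the δ value is an assumed order-of-magnitude figure, not taken from the patent.

```python
def crl_focal_length(radius_m, delta, n_elements):
    """Thin-lens estimate for a stack of N parabolic x-ray lens elements
    of apex radius R: f ~ R / (2 * N * delta), with delta the
    refractive-index decrement (of order a few 1e-6 for beryllium at
    ~10 keV; an assumed illustrative value).  The 1/N scaling is what
    lets the focal length be stepped by changing how many refracting
    elements the beam traverses."""
    return radius_m / (2.0 * n_elements * delta)
```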
Effective Cavity Length of Gyrotrons
NASA Astrophysics Data System (ADS)
Thumm, Manfred
2014-12-01
Megawatt-class gyrotron oscillators for electron cyclotron heating and non-inductive current drive (ECH&CD) in magnetically confined thermonuclear fusion plasmas have relatively low cavity quality factors, in the range of 1000 to 2000. The effective length of their cavities cannot simply be deduced from the cavity electric field profile, since this profile is far from Gaussian. The present paper presents a novel method to estimate the effective length of a gyrotron cavity from just the eigenvalue of the operating TE_m,n mode, the cavity radius, and the exact oscillation frequency, which may be numerically computed or precisely measured. This effective cavity length can then be used to calculate the Fresnel parameter in order to confirm that the cavity is not too short, so that the transverse structure of any mode in the cavity is the same as that of the corresponding mode in a long circular waveguide of the same diameter.
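One plausible realization of an estimate from exactly those three inputs (mode eigenvalue, cavity radius, oscillation frequency) is to invert the circular-waveguide resonance condition for the axial length. This is our own illustrative reading, assuming a single axial half-wave, not necessarily the exact formula of the paper.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def effective_cavity_length(eigenvalue, radius_m, freq_hz, q=1):
    """Treat the cavity as a section of circular waveguide and solve the
    resonance condition (2*pi*f/c)**2 = (chi/R)**2 + (q*pi/L_eff)**2
    for L_eff, with chi the TE_{m,n} eigenvalue and q the number of
    axial half-waves (q = 1 assumed here)."""
    k_sq = (2.0 * math.pi * freq_hz / C) ** 2
    kt_sq = (eigenvalue / radius_m) ** 2
    return q * math.pi / math.sqrt(k_sq - kt_sq)
```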
Graduated compression stockings: knee length or thigh length.
Benkö, T; Cooke, E A; McNally, M A; Mollan, R A
2001-02-01
The mechanisms by which graduated compression stockings prevent deep venous thrombosis are not completely understood. In the current study, the physiologic effect of low-pressure graduated compression stockings on venous blood flow in the lower limb and the practical aspects of their use were assessed. Patients having elective orthopaedic surgery at a university orthopaedic department were randomized into five groups to wear two different types of graduated compression stockings in thigh and knee lengths. Patients in the fifth, control group did not wear graduated compression stockings. Venous occlusion strain-gauge plethysmography was used to measure venous flow. After 20 minutes of bed rest there was a highly significant increase in venous capacitance and venous outflow in patients in all four of the groups wearing stockings. There was no difference in the mean percentage change of venous capacitance among the four groups wearing stockings. The knee-length Brevet stockings were less efficient in increasing the venous outflow. There was no significant change in the venous capacitance and venous outflow in patients in the control group. Visual assessment of the fit and use of the stockings was done, and patients' subjective opinion of comfort was sought. The knee-length graduated compression stockings wrinkled significantly less, and significantly fewer patients reported discomfort with them. All stockings were reported to be difficult to use. Thigh- and knee-length stockings have a significant effect on decreasing venous stasis of the lower limb. Knee-length graduated compression stockings are similarly efficient in decreasing venous stasis, but they are more comfortable to wear, and they wrinkle less. PMID:11210954
Coherence length of neutron superfluids
De Blasio, F.V.; Hjorth-Jensen, M.; Lazzari, G.; Baldo, M.; Schulze, H.
1997-10-01
The coherence length of superfluid neutron matter is calculated from the microscopic BCS wave function of a Cooper pair in momentum space, making use of recent nucleon-nucleon potential models and including polarization (RPA) effects. Our main result is that the coherence length is proportional to the ratio of the Fermi momentum to the pairing gap, in good agreement with simple estimates used in the literature, with a nearly interaction-independent constant of proportionality. Our calculations can be applied to the problem of inhomogeneous superfluidity of hadronic matter in the crust of a neutron star. © 1997 The American Physical Society
Overview of bunch length measurements.
Lumpkin, A. H.
1999-02-19
An overview of particle and photon beam bunch length measurements is presented in the context of free-electron laser (FEL) challenges. Particle-beam peak current is a critical factor in obtaining adequate FEL gain for both oscillators and self-amplified spontaneous emission (SASE) devices. Since measurement of charge is a standard measurement, the bunch length becomes the key issue for ultrashort bunches. Both time-domain and frequency-domain techniques are presented in the context of using electromagnetic radiation over eight orders of magnitude in wavelength. In addition, the measurement of microbunching in a micropulse is addressed.
CT image construction of a totally deflated lung using deformable model extrapolation
Sadeghi Naini, Ali; Pierce, Greg; Lee, Ting-Yim; and others
2011-02-15
Purpose: A novel technique is proposed to construct a CT image of a totally deflated lung from a free-breathing 4D-CT image sequence acquired preoperatively. Such a constructed CT image is very useful in performing tumor ablative procedures such as lung brachytherapy. Tumor ablative procedures are frequently performed while the lung is totally deflated. Deflating the lung during such procedures renders preoperative images ineffective for targeting the tumor. Furthermore, the problem cannot be solved using intraoperative ultrasound (US) images, because US images are very sensitive to the small residual amounts of air remaining in the deflated lung. One possible solution to address these issues is to register high-quality preoperative CT images of the deflated lung with their corresponding low-quality intraoperative US images. However, given that such preoperative images correspond to an inflated lung, the CT images need to be processed to construct CT images pertaining to the lung's deflated state. Methods: To obtain CT images of the deflated lung, we present a novel image construction technique using extrapolated deformable registration to predict the deformation the lung undergoes during full deflation. The proposed construction technique involves estimating the lung's air volume in each preoperative image automatically in order to track the respiration phase of each 4D-CT image throughout a respiratory cycle; i.e., the technique does not need any external marker to form a respiratory signal in the process of curve fitting and extrapolation. The extrapolated deformation field is then applied to a preoperative reference image in order to construct the CT image of the totally deflated lung. The technique was evaluated experimentally using an ex vivo porcine lung. Results: The ex vivo lung experiments led to very encouraging results. The constructed CT image was very similar to the CT image of the deflated lung acquired for the purpose of validation.
Influence of experimental scatter on the analysis and extrapolation of stress-rupture data
Booker, M.K.
1986-01-01
A great deal of research has been performed in the area of analysis and extrapolation of stress rupture data for more than three decades. Most of this work has concentrated on development of clever model forms for describing behavior in a simple mathematical way. However, little direct consideration has been given to the importance of inherent data scatter for such analyses. This paper presents a direct discussion of that topic, including an illustration of the impact of data scatter for several computer simulated data sets for which "true" behavior and the "true" amount of scatter are known by definition. The results clearly indicate that less attention should be paid to development of clever model forms for fitting data, and more should be paid to the use of sound, valid statistical techniques for the fitting of whatever models are used.
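The abstract's central point, that inherent scatter dominates the reliability of an extrapolation regardless of model cleverness, can be reproduced with a tiny simulation in the same spirit. The "true" log-linear stress-rupture law and all numbers below are invented for illustration, not taken from the paper.

```python
import random

# Invented "true" behavior: log10(rupture time) = a + b * stress
A_TRUE, B_TRUE = 6.0, -0.02

def simulate(stresses, sigma, seed=0):
    """Computer-simulated data where the true model and the true
    amount of Gaussian scatter are known by definition."""
    rng = random.Random(seed)
    return [A_TRUE + B_TRUE * s + rng.gauss(0.0, sigma) for s in stresses]

def fit_line(xs, ys):
    """Ordinary least-squares line fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

stresses = list(range(100, 301, 20))     # data window: 100-300 stress units
for sigma in (0.05, 0.30):               # low vs high inherent scatter
    a, b = fit_line(stresses, simulate(stresses, sigma))
    log_t_50 = a + b * 50.0              # extrapolate well below the data
```

Even with the model form exactly right, the extrapolated life at stress 50 inherits an uncertainty amplified by the leverage of extrapolating outside the data window, and it grows in proportion to the scatter.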
The use of extrapolation concepts to augment the Frequency Separation Technique
NASA Astrophysics Data System (ADS)
Alexiou, Spiros
2015-03-01
The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, which, however, have significant difficulties as the impact regime is approached. These difficulties are addressed by the Frequency Separation Technique, in which the impact limit is correctly recovered. The present work examines the possibility of combining the Frequency Separation Technique with extrapolation to improve results and minimize the errors resulting from the neglect of fast-slow coupling, and thus obtain the exact result with a minimum of extra effort. To this end, the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.
NASA Astrophysics Data System (ADS)
Florez, W. F.; Portapila, M.; Hill, A. F.; Power, H.; Orsini, P.; Bustamante, C. A.
2015-03-01
The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual-step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved, including cases of cation exchange, dissolution, dissociation, equilibrium, and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases.
Dowding, Kevin J.; Hills, Richard Guy
2005-04-01
Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.
On shrinkage and model extrapolation in the evaluation of clinical center performance
Varewyck, Machteld; Goetghebeur, Els; Eriksson, Marie; Vansteelandt, Stijn
2014-01-01
We consider statistical methods for benchmarking clinical centers based on a dichotomous outcome indicator. Borrowing ideas from the causal inference literature, we aim to reveal how the entire study population would have fared under the current care level of each center. To this end, we evaluate direct standardization based on fixed versus random center effects outcome models that incorporate patient-specific baseline covariates to adjust for differential case-mix. We explore fixed effects (FE) regression with Firth correction and normal mixed effects (ME) regression to maintain convergence in the presence of very small centers. Moreover, we study doubly robust FE regression to avoid outcome model extrapolation. Simulation studies show that shrinkage following standard ME modeling can result in substantial power loss relative to the considered alternatives, especially for small centers. Results are consistent with findings in the analysis of 30-day mortality risk following acute stroke across 90 centers in the Swedish Stroke Register. PMID:24812420
Vector Extrapolation-Based Acceleration of Regularized Richardson Lucy Image Deblurring
NASA Astrophysics Data System (ADS)
Remmele, Steffen; Hesser, Jürgen
Confocal fluorescence microscopy has become an important tool in the biological and medical sciences for imaging thin specimens, even living ones. The acquired images are degraded by out-of-focus blurring and noise, and thus need to be restored. One of the most popular methods is an iterative Richardson-Lucy algorithm with total variation regularization. While this algorithm improves image quality, it converges slowly, whereas the constantly increasing amount of image data demands fast methods. In this paper, we present an accelerated version of the algorithm and investigate the achieved speed-up. The acceleration method is based on a vector extrapolation technique and avoids a computationally intensive evaluation of the underlying cost function. Two synthetic test images are used to evaluate the acceleration. The accelerated algorithm reaches an acceptable result in 30% to 40% less computational time.
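For context, the plain (unaccelerated) Richardson-Lucy update that such vector-extrapolation schemes speed up can be sketched in a few lines. This 1D version omits both the total variation regularization and the acceleration discussed in the abstract; it only shows the multiplicative update the paper builds on.

```python
def convolve_same(x, psf):
    """'Same'-size 1D convolution with an odd-length, normalized PSF
    (edges are simply truncated)."""
    r = len(psf) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, p in enumerate(psf):
            k = i + j - r
            if 0 <= k < len(x):
                s += p * x[k]
        out.append(s)
    return out

def richardson_lucy(y, psf, iters=200):
    """Plain multiplicative RL update: x <- x * (psf~ * (y / (psf * x))),
    where psf~ is the flipped PSF."""
    x = [1.0] * len(y)                 # flat nonnegative initial guess
    psf_flipped = psf[::-1]
    for _ in range(iters):
        est = convolve_same(x, psf)
        ratio = [yi / max(ei, 1e-12) for yi, ei in zip(y, est)]
        corr = convolve_same(ratio, psf_flipped)
        x = [xi * ci for xi, ci in zip(x, corr)]
    return x

# Blur a point source, then deblur it again:
psf = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
blurred = convolve_same(truth, psf)
restored = richardson_lucy(blurred, psf)  # re-concentrates at index 3
```

The slow convergence the abstract mentions is visible here: hundreds of cheap multiplicative steps are needed to re-concentrate even a noiseless point source, which is what motivates extrapolation-based acceleration.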
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. PMID:23179190
Age of Eocene/Oligocene boundary based on extrapolation from North American microtektite layer
Glass, B.P.; Crosbie, J.R.
1982-04-01
Microtektites believed to belong to the North American tektite strewn field have been found in upper Eocene sediments in cores from nine Deep Sea Drilling Project sites in the Caribbean Sea, Gulf of Mexico, equatorial Pacific, and eastern equatorial Indian Ocean. The microtektite layer has an age of 34.2 ± 0.6 m.y. based on fission-track dating of the microtektites and K-Ar and fission-track dating of the North American tektites. Extrapolation from the microtektite layer to the overlying Eocene/Oligocene boundary indicates an age of 32.3 ± 0.9 m.y. for the Eocene/Oligocene boundary as defined at each site in the Initial Reports of the Deep Sea Drilling Project. This age is approximately 5 m.y. younger than the age of 37.5 m.y. that is generally assigned to the boundary based on recently published Cenozoic time scales. 3 figures, 5 tables.
Harding, M. E.; Vazquez, J.; Ruscic, B.; Wilson, A. K.; Gauss, J.; Stanton, J. F.; Chemical Sciences and Engineering Division; Univ. Mainz; The Univ. of Texas; Univ. of North Texas
2008-01-01
Effects of increased basis-set size as well as a correlated treatment of the diagonal Born-Oppenheimer approximation are studied within the context of the high-accuracy extrapolated ab initio thermochemistry (HEAT) theoretical model chemistry. It is found that the addition of these ostensible improvements does little to increase the overall accuracy of HEAT for the determination of molecular atomization energies. Fortuitous cancellation of high-level effects is shown to give the overall HEAT strategy an accuracy that is, in fact, higher than most of its individual components. In addition, the issue of core-valence electron correlation separation is explored; it is found that approximate additive treatments of the two effects have limitations that are significant in the realm of <1 kJ mol⁻¹ theoretical thermochemistry.
Ordering of metal-ion toxicities in different species--extrapolation to man
England, M.W.; Turner, J.E.; Hingerty, B.E.; Jacobson, K.B.
1989-01-01
Our previous attempts to predict the toxicities of 24 metal ions for a given species, using physicochemical parameters associated with the ions, are summarized. In our current attempt we have chosen indicators of toxicity for biological systems of increasing levels of complexity--starting with individual biological molecules and ascending to mice as representative of higher-order animals. The numerical values for these indicators have been normalized to a scale of 100 for Mg²⁺ (essentially nontoxic) and 0 for Cd²⁺ (very toxic). To give predicted toxicities to humans, extrapolations across biological species have been made for each of the metal ions considered. The predicted values are then compared with threshold limit values (TLV) from the literature. Both methods for predicting toxicities have their advantages and disadvantages, and both have limited success for metal ions. However, the second approach suggests that the TLV for Cu²⁺ should be lower than that currently recommended.
King, A W
1991-12-31
A general procedure for quantifying regional carbon dynamics by spatial extrapolation of local ecosystem models is presented. The procedure uses Monte Carlo simulation to calculate the expected value of one or more local models, explicitly integrating the spatial heterogeneity of variables that influence ecosystem carbon flux and storage. These variables are described by empirically derived probability distributions that are input to the Monte Carlo process. The procedure provides large-scale regional estimates based explicitly on information and understanding acquired at smaller and more accessible scales. Results are presented from an earlier application to seasonal atmosphere-biosphere CO₂ exchange for circumpolar "subarctic" latitudes (64°N-90°N). Results suggest that, under certain climatic conditions, these high northern ecosystems could collectively release 0.2 Gt of carbon per year to the atmosphere. I interpret these results with respect to questions about global biospheric sinks for atmospheric CO₂.
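The core of the procedure, taking the Monte Carlo expected value of a local model over empirically derived input distributions, can be sketched as below. The local flux model and the distributions here are invented stand-ins; in the actual procedure both would be calibrated from field data.

```python
import math
import random

def local_flux(temperature_c, biomass):
    """Invented stand-in for a local ecosystem carbon-flux model
    (arbitrary units); any calibrated local model could be substituted."""
    return 0.05 * biomass * math.exp(0.1 * (temperature_c - 10.0)) \
        - 0.01 * biomass

def regional_expected_flux(n_samples=200_000, seed=1):
    """Monte Carlo expected value of the local model, integrating over
    the spatial heterogeneity of its inputs via probability
    distributions (here: illustrative normal and uniform)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        t = rng.gauss(5.0, 3.0)        # temperature distribution, deg C
        b = rng.uniform(50.0, 150.0)   # biomass distribution
        total += local_flux(t, b)
    return total / n_samples

flux = regional_expected_flux()
```

Because the local model is nonlinear in temperature, this expectation differs from evaluating the model at mean inputs, which is precisely why the spatial heterogeneity must be integrated explicitly rather than averaged away.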
A two-grid method with Richardson extrapolation for a semilinear convection-diffusion problem
NASA Astrophysics Data System (ADS)
Tikhovskaya, S. V.; Zadorin, A. I.
2015-10-01
A boundary value problem for a second-order semilinear singularly perturbed ordinary differential equation is considered. We use Newton and Picard iterations for linearization. To solve the problem at each iteration we apply a difference scheme that converges uniformly with respect to the singular perturbation parameter. Modified Samarskii and central difference schemes on a Shishkin mesh are considered; these schemes are known to be almost second-order accurate uniformly with respect to the singular perturbation parameter. To decrease the number of arithmetical operations required to resolve the difference scheme, a two-grid method is proposed. To increase the accuracy of the difference scheme, we investigate the possibility of applying Richardson extrapolation using the known solutions of the difference scheme on both meshes. The modified Samarskii and central difference schemes are compared, and the results of some numerical experiments are discussed.
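The paper applies Richardson extrapolation to difference-scheme solutions on two Shishkin meshes; the generic mechanism, combining two mesh widths of a scheme of known order so the leading error term cancels, is sketched here on a simple second-order central difference rather than on the paper's singularly perturbed problem.

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h, order=2):
    """Combine the h and h/2 approximations of a scheme of known
    order p so the O(h^p) error term cancels (here: 2nd -> 4th order):
    (2^p * D(h/2) - D(h)) / (2^p - 1)."""
    coarse = central_diff(f, x, h)
    fine = central_diff(f, x, h / 2.0)
    return (2 ** order * fine - coarse) / (2 ** order - 1)

x, h = 1.0, 0.1
exact = math.cos(x)                                   # d/dx sin(x)
err_coarse = abs(central_diff(math.sin, x, h) - exact)
err_rich = abs(richardson(math.sin, x, h) - exact)    # orders smaller
```

The appeal for the two-grid method is that both solutions needed for the combination are already computed, so the accuracy boost comes at essentially no extra cost.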
Extrapolation of Group Proximity from Member Relations Using Embedding and Distribution Mapping
NASA Astrophysics Data System (ADS)
Misawa, Hideaki; Horio, Keiichi; Morotomi, Nobuo; Fukuda, Kazumasa; Taniguchi, Hatsumi
In the present paper, we address the problem of extrapolating group proximities from member relations, which we refer to as the group proximity problem. We assume that a relational dataset consists of several groups and that pairwise relations of all members can be measured. Under these assumptions, the goal is to estimate group proximities from pairwise relations. In order to solve the group proximity problem, we present a method based on embedding and distribution mapping, in which all relational data, which consist of pairwise dissimilarities between members, are transformed into vectorial data by embedding methods. After this process, the distributions of the groups are obtained. Group proximities are estimated as distances between distributions by distribution mapping methods, which generate a map of distributions. As an example, we apply the proposed method to document and bacterial flora datasets. Finally, we confirm the feasibility of using the proposed method to solve the group proximity problem.
NASA Astrophysics Data System (ADS)
Fernandes, Ryan I.; Fairweather, Graeme
2012-08-01
An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.
Risk extrapolation for chlorinated methanes as promoters vs initiators of multistage carcinogenesis.
Bogen, K T
1990-10-01
"Cell-kinetic multistage" (CKM) models account for clonal growth of intermediate, premalignant cell populations and thus distinguish somatic mutations and cell proliferation as separate processes that may influence observed rates of tumor formation. This paper illustrates the application of two versions of a two-stage CKM model (one assuming exponential and the other geometric proliferation of intermediate cells) for extrapolating cancer risk potentially associated with exposure to carbon tetrachloride, chloroform, and dichloromethane, three suspect human carcinogens commonly present in trace amounts in groundwater supplies used for domestic consumption. For each compound, the models were used to calculate a daily oral "virtually safe dose" (VSD) to humans associated with a cancer risk of 10⁻⁶, extrapolated from bioassay data on increased hepatocellular tumor incidence in B6C3F1 mice. Exposure-induced bioassay tumor responses were assumed first to be due solely to "promotion" (enhanced proliferation of premalignant cells, here associated with cytotoxicity), in accordance with the majority of available data on in vivo genotoxicity for these compounds. Available data were used to model dose response for induced hepatocellular proliferation in mice for each compound. Physiologically based pharmacokinetic models were used to predict the hepatotoxic effect (metabolized) dose as a function of parent compound administered dose in mice and in humans. Resulting calculated VSDs are shown to be from three to five orders of magnitude greater than corresponding values obtained assuming each of the compounds is carcinogenic only through induced somatic mutations within the CKM framework. Key issues and uncertainties in applying CKM models to risk assessment for cancer promoters are discussed. PMID:2258018
Jiang, Chaowei; Feng, Xueshang
2012-04-20
The magnetic field in the solar corona is usually extrapolated from a photospheric vector magnetogram using a nonlinear force-free field (NLFFF) model. NLFFF extrapolation requires considerable effort to be devoted to its numerical realization. In this paper, we present a new implementation of the magnetohydrodynamics (MHD) relaxation method for NLFFF extrapolation. The magnetofrictional approach, which is introduced to speed up the relaxation of the MHD system, is realized for the first time by the spacetime conservation-element and solution-element scheme. A magnetic field splitting method is used to further improve the computational accuracy. The bottom boundary condition is prescribed by incrementally changing the transverse field to match the magnetogram, and all other artificial boundaries of the computational box are simply fixed. We examine the code using two types of NLFFF benchmark tests, the Low and Lou semi-analytic force-free solutions and a more realistic solar-like case constructed by van Ballegooijen et al. The results show that our implementation is successful and versatile for extrapolations of either relatively simple cases or rather complex cases that need significant rebuilding of the magnetic topology, e.g., a flux rope. We also compute a suite of metrics to quantitatively analyze the results and demonstrate that the extrapolation accuracy of our code essentially reaches the level of the present best-performing code, i.e., that developed by Wiegelmann.
Pelkonen, O; Turpeinen, M
2007-01-01
Although the measurement of metabolite formation or substrate depletion in in vitro systems, from recombinant enzymes to tissue slices, is a relatively routine task, there are a number of more or less unresolved issues in the extrapolation of the enzymatic intrinsic clearance into hepatic metabolic clearance. Nominal concentrations of the drug added to the incubation system are not necessarily the concentration the transporter or the metabolizing enzyme sees. In addition, peculiarities of incubation set-ups should be assessed. Unbound drug fractions (concentrations) in the in vitro system itself should be measured or estimated for the appropriate assessment of enzymatic intrinsic clearance. In addition, blood and/or plasma concentrations to be encountered in the in vivo situation should be measured or estimated for the extrapolation. Extrapolation always means making a number of assumptions, and the most important of these (such as scaling factors from recombinant enzymes, microsomes, or hepatocytes to the mass unit of the liver, liver weight, blood flow, and distribution volume, among others) should be explicitly stated and included in the extrapolation process. Despite all the above-mentioned reservations, the in vitro-in vivo extrapolation of metabolic clearance seems to be a useful and mostly fairly precise tool for predicting the important pharmacokinetic processes of a drug. PMID:17968737
Long-Period Tidal Variations in the Length of Day
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Erofeeva, Svetlana Y.
2014-01-01
A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties follow Wahr and Bergen; experimental confirmation for their results now exists at the fortnightly period, but there remains uncertainty when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at the shortest periods, as out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to mostly eliminate that energy, with especially large variance reduction for constituents Sa, Ssa, Mf, and Mt.
Welding arc length control system
NASA Technical Reports Server (NTRS)
Iceland, William F. (Inventor)
1993-01-01
The present invention is a welding arc length control system. The system includes, in its broadest aspects, a power source for providing welding current, a power amplification system, a motorized welding torch assembly connected to the power amplification system, a computer, and current pick up means. The computer is connected to the power amplification system for storing and processing arc weld current parameters and non-linear voltage-ampere characteristics. The current pick up means is connected to the power source and to the welding torch assembly for providing weld current data to the computer. Thus, the desired arc length is maintained as the welding current is varied during operation, maintaining consistent weld penetration.
Softness Correlations Across Length Scales
NASA Astrophysics Data System (ADS)
Ivancic, Robert; Shavit, Amit; Rieser, Jennifer; Schoenholz, Samuel; Cubuk, Ekin; Durian, Douglas; Liu, Andrea; Riggleman, Robert
In disordered systems, it is believed that mechanical failure begins with localized particle rearrangements. Recently, a machine learning method has been introduced to identify how likely a particle is to rearrange given its local structural environment, quantified by softness. We calculate the softness of particles in simulations of atomic Lennard-Jones mixtures, molecular Lennard-Jones oligomers, colloidal systems and granular systems. In each case, we find that the length scale characterizing spatial correlations of softness is approximately a particle diameter. These results provide a rationale for why localized rearrangements--whose size is presumably set by the scale of softness correlations--might occur in disordered systems across many length scales. Supported by DOE DE-FG02-05ER46199.
Variable focal length deformable mirror
Headley, Daniel; Ramsey, Marc; Schwarz, Jens
2007-06-12
A variable focal length deformable mirror has an inner ring and an outer ring that simply support and push axially on opposite sides of a mirror plate. The resulting variable clamping force deforms the mirror plate to provide a parabolic mirror shape. The rings are parallel planar sections of a single paraboloid and can provide an on-axis focus, if the rings are circular, or an off-axis focus, if the rings are elliptical. The focal length of the deformable mirror can be varied by changing the variable clamping force. The deformable mirror can generally be used in any application requiring the focusing or defocusing of light, including with both coherent and incoherent light sources.
NASA Astrophysics Data System (ADS)
Antonio, Patrícia L.; Xavier, Marcos; Caldas, Linda V. E.
2014-11-01
The Calibration Laboratory (LCI) at the Instituto de Pesquisas Energéticas e Nucleares (IPEN) is going to establish a Böhm extrapolation chamber as a primary standard system for the dosimetry and calibration of beta radiation sources and detectors. This chamber had already been tested in beta radiation beams with an aluminized Mylar entrance window, and it has now been characterized with an original Hostaphan entrance window. A comparison between the results obtained with the two entrance windows was performed. The results showed that this extrapolation chamber is equally effective as a primary standard system in beta radiation fields with both entrance windows, indicating that either window may be used.
Critical Length Limiting Superlow Friction
NASA Astrophysics Data System (ADS)
Ma, Ming; Benassi, Andrea; Vanossi, Andrea; Urbakh, Michael
2015-02-01
Since the demonstration of superlow friction (superlubricity) in graphite at the nanoscale, one of the main challenges in the field of nano- and micromechanics has been to scale this phenomenon up. A key question to be addressed is to what extent superlubricity could persist, and what mechanisms could lead to its failure. Here, using an edge-driven Frenkel-Kontorova model, we establish a connection between the critical length above which superlubricity disappears and both intrinsic material properties and experimental parameters. A striking boost in dissipated energy with chain length emerges abruptly due to a high-friction stick-slip mechanism caused by deformation of the slider, leading to a local commensuration with the substrate lattice. We derive a parameter-free analytical model for the critical length that is in excellent agreement with our numerical simulations. Our results provide a new perspective on friction and nanomanipulation and can serve as a theoretical basis for designing nanodevices with superlow friction, such as carbon nanotubes.
NASA Astrophysics Data System (ADS)
Wohlers, M.; Huguenin, R.; Weinberg, M.; Huffman, R. E.; Eastes, R. W.; Delgreco, F. P.
1989-08-01
There is a continuing need for information about the earth background clutter at ultraviolet wavelengths. The methodology and the results obtained at the 1304 Å wavelength from an analysis of the AFGL Polar Bear Experiment are described. The basic measurement equipment provided data with a spatial resolution of 20 km over a large portion of the earth. The instrumentation also provided sampled outputs as the footprint scanned along the measurement track. The combination of fine scanning and large area coverage provided the opportunity for a spatial power spectral analysis that in turn provided a means for extrapolation to finer spatial scales; a companion paper discusses the physical basis for this extrapolation.
New extrapolation method for low-lying states of nuclei in the sd and the pf shells
Shen, J. J.; Zhao, Y. M.; Arima, A.; Yoshinaga, N.
2011-04-15
We study extrapolation approaches for evaluating the energies of low-lying states of nuclei in the sd and pf shells, by sorting the diagonal matrix elements of the nuclear shell-model Hamiltonian. We introduce an extrapolation method with perturbation and apply our new method to predict both low-lying state energies and E2 transition rates between low-lying states. Our predictions reach an accuracy, in terms of root-mean-squared deviations, of ≈40-60 keV for the low-lying states of these nuclei.
Method and apparatus for determining minority carrier diffusion length in semiconductors
Goldstein, Bernard; Dresner, Joseph; Szostak, Daniel J.
1983-07-12
Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon, which has a significantly small minority carrier diffusion length, using the constant-magnitude surface-photovoltage (SPV) method. Unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin method-type probe electrode couples the SPV to a measurement system. The operating optical wavelength of an adjustable monochromator is selected, compensating for the wavelength-dependent sensitivity of a photodetector, to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux for a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept value on the reciprocal optical coefficient axis of the extrapolated linear plot is the diffusion length of the minority carriers.
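The final extrapolation step of the SPV analysis can be sketched numerically: fit the relative photon flux against 1/α and read the diffusion length off the negative intercept on the 1/α axis. The data below are synthetic, constructed to obey the ideal linear relation flux ∝ (1/α + L); real measurements would of course carry scatter.

```python
def spv_diffusion_length(inv_alpha, flux):
    """Least-squares line flux = m*(1/alpha) + c, extrapolated to zero
    flux; the intercept on the 1/alpha axis is -c/m, so the minority
    carrier diffusion length is L = c/m."""
    n = len(inv_alpha)
    mx, my = sum(inv_alpha) / n, sum(flux) / n
    m = sum((x - mx) * (y - my) for x, y in zip(inv_alpha, flux)) / \
        sum((x - mx) ** 2 for x in inv_alpha)
    c = my - m * mx
    return c / m

# Synthetic data obeying flux = k*(1/alpha + L) with L = 2.0
# (arbitrary length units) and k = 3.1 (arbitrary flux scale)
xs = [0.5, 1.0, 1.5, 2.0, 3.0]        # 1/alpha values
ys = [3.1 * (x + 2.0) for x in xs]    # constant-SPV photon flux
L_est = spv_diffusion_length(xs, ys)  # recovers 2.0
```

Note that the unknown proportionality constant k drops out of the intercept ratio c/m, which is why only relative photon fluxes are needed, as the abstract states.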
Hazards in determination and extrapolation of depositional rates of recent sediments
Isphording, W.C. (Dept. of Geology-Geography); Jackson, R.B.
1992-01-01
Calculations of depositional rates for the past 250 years in estuarine sediments at sites in the Gulf of Mexico have been carried out by measuring changes that have taken place on bathymetric charts. Depositional rates during the past 50 to 100 years can similarly be estimated by this method and may often be confirmed by relatively abrupt changes at depth in the content of certain heavy metals in core samples. Analysis of bathymetric charts of Mobile Bay, Alabama, dating back to 1858, disclosed an essentially constant sedimentation rate of 3.9 mm/year. Apalachicola Bay, Florida, similarly, was found to have a rate of 5.4 mm/year. Though, in theory, these rates should provide reliable estimates of the influx of sediment into the estuaries, considerable caution must be used in attempting to extrapolate them to any depth in core samples. The passage of hurricanes in the Gulf of Mexico is a common event and can rapidly, and markedly, alter the bathymetry of an estuary. The passage of Hurricane Elena near Apalachicola Bay in 1985, for example, removed over 84 million tons of sediment from the bay and caused an average deepening of nearly 50 cm. The impact of Hurricane Frederick on Mobile Bay in 1979 was more dramatic. During the approximately 7 hour period when winds from this storm impacted the estuary, nearly 290 million tons of sediment were driven out of the bay and an average deepening of 46 cm was observed. With such weather events common on the Gulf Coast, it is not surprising that when radioactive age dating methods were used to obtain dates of approximately 7,500 years for organic remains in cores from Apalachicola Bay, the depths at which the dated materials were obtained corresponded to depositional rates of only 0.4 mm/year, or one-tenth that obtained from historic bathymetric data. Because storm scour effects are a common occurrence in the Gulf, no attempt should be made to extrapolate bathymetric-derived rates beyond the age of the charts.
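The cautionary comparison above reduces to simple rate arithmetic. The accumulated depths below are hypothetical values chosen only to reproduce the quoted rates, not measured core depths.

```python
def deposition_rate_mm_per_yr(depth_mm: float, years: float) -> float:
    """Average depositional rate implied by an accumulated depth and span."""
    return depth_mm / years

bathymetric = deposition_rate_mm_per_yr(585.0, 150.0)    # charts: 3.9 mm/yr
radiocarbon = deposition_rate_mm_per_yr(3000.0, 7500.0)  # dated cores: 0.4 mm/yr
ratio = bathymetric / radiocarbon                        # ~order-of-magnitude gap
```

The order-of-magnitude ratio between the two estimates is exactly the discrepancy that storm scour introduces when chart-derived rates are extrapolated over millennia.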
Telomere length in Hepatitis C.
Kitay-Cohen, Y; Goldberg-Bittman, L; Hadary, R; Fejgin, M D; Amiel, A
2008-11-01
Telomeres are nucleoprotein structures located at the termini of chromosomes that protect the chromosomes from fusion and degradation. Hepatocyte cell-cycle turnover may be a primary mechanism of telomere shortening in hepatitis C virus (HCV) infection, inducing fibrosis and cellular senescence. HCV infection has been recognized as a potential cause of B-cell lymphoma and hepatocellular carcinoma. The present study sought to assess relative telomere length in leukocytes from patients with chronic HCV infection, patients after eradication of HCV infection (in remission), and healthy controls. A novel method of manual evaluation was applied. Leukocytes derived from 22 patients with chronic HCV infection and age- and sex-matched patients in remission and healthy control subjects were subjected to a fluorescence in situ hybridization (FISH) protocol (DAKO) to determine telomere fluorescence intensity and number. The relative manual analysis of telomere length was validated against findings on applied spectral imaging (ASI) in a random sample of study and control subjects. Leukocytes from patients with chronic HCV infection had shorter telomeres than leukocytes from patients in remission and healthy controls. On statistical analysis, more cells with low signal intensity on telomere FISH had shorter telomeres whereas more cells with high signal intensity had longer telomeres. The findings were corroborated by the ASI telomere software. Telomere shortening in leukocytes from patients with active HCV infection is probably due to the lower overall telomere level rather than higher cell cycle turnover. Manual evaluation is an accurate and valid method of assessing relative telomere length between patients with chronic HCV infection and healthy subjects. PMID:18992639
Precise Determination of the I = 2 Scattering Length from Mixed-Action Lattice QCD
Silas Beane; Paulo Bedaque; Thomas Luu; Konstantinos Orginos; Assumpta Parreno; Martin Savage; Aaron Torok; Andre Walker-Loud
2008-01-01
The I=2 ππ scattering length is calculated in fully dynamical lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations (with fourth-rooted staggered sea quarks) at four light-quark masses. Two- and three-flavor mixed-action chiral perturbation theory at next-to-leading order is used to perform the chiral and continuum extrapolations. At the physical charged pion mass, we find m_π a_ππ^(I=2) = -0.04330 ± 0.00042, where the error bar combines the statistical and systematic uncertainties in quadrature.
The NIST Length Scale Interferometer
Beers, John S.; Penzes, William B.
1999-01-01
The National Institute of Standards and Technology (NIST) interferometer for measuring graduated length scales has been in use since 1965. It was developed in response to the redefinition of the meter in 1960 from the prototype platinum-iridium bar to the wavelength of light. The history of the interferometer is recalled, and its design and operation described. A continuous program of modernization by making physical modifications, measurement procedure changes and computational revisions is described, and the effects of these changes are evaluated. Results of a long-term measurement assurance program, the primary control on the measurement process, are presented, and improvements in measurement uncertainty are documented.
Feng, Edward H.; Crooks, Gavin E.
2008-08-21
An unresolved problem in physics is how the thermodynamic arrow of time arises from an underlying time-reversible dynamics. We contribute to this issue by developing a measure of time-symmetry breaking, and by using the work fluctuation relations, we determine the time asymmetry of recent single-molecule RNA unfolding experiments. We define time asymmetry as the Jensen-Shannon divergence between the trajectory probability distributions of an experiment and its time-reversed conjugate. Among other interesting properties, the length of time's arrow bounds the average dissipation and determines the difficulty of accurately estimating free energy differences in nonequilibrium experiments.
Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou
2014-01-01
Accurate estimation of soil water retention curve (SWRC) at the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0-1500 kPa range, the FX model (Fredlund and Xing, 1994) estimations agreed well with measurements from saturation to oven dryness with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors at the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluations indicated that when SWRC measurements in the 0-100 kPa suction range was applied for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For a higher accuracy, the FX model requires soil water retention data at least in the 0- to 300-kPa range to extend the SWRC to oven dryness. Comparing with the Khlosi et al. (2006) model, which requires measurements in the 0-500 kPa range to reproduce the complete SWRCs, the FX model has the advantage of requiring less SWRC measurements. Thus the FX modeling approach has the potential to eliminate the processes for measuring soil water retention in the dry range. PMID:25464503
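The FX retention function referred to above can be written down directly. The parameter values below are illustrative placeholders, not the paper's fitted soils, and the optional correction factor of the full Fredlund-Xing form is omitted for brevity.

```python
import numpy as np

def fx_theta(suction_kpa, theta_s, a, n, m):
    """Fredlund-Xing (1994) water content at a matric suction (kPa),
    without the optional correction factor:
        theta = theta_s / [ln(e + (psi/a)**n)]**m
    """
    return theta_s / np.log(np.e + (suction_kpa / a) ** n) ** m

# Evaluate from near saturation (1 kPa) to oven dryness (~1e6 kPa)
psi = np.logspace(0, 6, 7)
theta = fx_theta(psi, theta_s=0.45, a=100.0, n=1.5, m=1.0)
```

Because the curve decreases smoothly all the way to very high suctions, parameters fitted on a limited wet-end range can be extrapolated toward oven dryness, which is the capability evaluated in the study.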
Quantitative in vitro-to-in vivo extrapolation in a high-throughput environment.
Wetmore, Barbara A
2015-06-01
High-throughput in vitro toxicity screening provides an efficient way to identify potential biological targets for environmental and industrial chemicals while conserving limited testing resources. However, reliance on the nominal chemical concentrations in these in vitro assays as an indicator of bioactivity may misrepresent potential in vivo effects of these chemicals due to differences in clearance, protein binding, bioavailability, and other pharmacokinetic factors. Development of high-throughput in vitro hepatic clearance and protein binding assays and refinement of quantitative in vitro-to-in vivo extrapolation (QIVIVE) methods have provided key tools to predict xenobiotic steady state pharmacokinetics. Using a process known as reverse dosimetry, knowledge of the chemical steady state behavior can be incorporated with HTS data to determine the external in vivo oral exposure needed to achieve internal blood concentrations equivalent to those eliciting bioactivity in the assays. These daily oral doses, known as oral equivalents, can be compared to chronic human exposure estimates to assess whether in vitro bioactivity would be expected at the dose-equivalent level of human exposure. This review will describe the use of QIVIVE methods in a high-throughput environment and the promise they hold in shaping chemical testing priorities and, potentially, high-throughput risk assessment strategies. PMID:24907440
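The reverse-dosimetry step described above amounts to a unit conversion. A minimal sketch, assuming (as QIVIVE typically does) that steady-state blood concentration scales linearly with daily oral dose; all numbers are illustrative, not measured values.

```python
def oral_equivalent(ac50_uM: float, css_per_mg_kg_day_uM: float) -> float:
    """Daily oral dose (mg/kg/day) predicted to reproduce the in vitro
    bioactive concentration (AC50) in blood at steady state."""
    return ac50_uM / css_per_mg_kg_day_uM

# Hypothetical chemical: AC50 of 3 uM, and 1 mg/kg/day yields Css of 1.5 uM
dose = oral_equivalent(ac50_uM=3.0, css_per_mg_kg_day_uM=1.5)  # mg/kg/day
margin = dose / 0.01   # ratio to a hypothetical exposure estimate of 0.01
```

The resulting oral equivalent is what gets compared against chronic human exposure estimates to prioritize chemicals.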
Cross-Species Extrapolation of Models for Predicting Lead Transfer from Soil to Wheat Grain.
Liu, Ke; Lv, Jialong; Dai, Yunchao; Zhang, Hong; Cao, Yingfei
2016-01-01
The transfer of Pb from the soil to crops is a serious food hygiene security problem in China because of industrial, agricultural, and historical contamination. In this study, the characteristics of exogenous Pb transfer from 17 Chinese soils to a popular wheat variety (Xiaoyan 22) were investigated. In addition, bioaccumulation prediction models of Pb in grain were obtained based on soil properties. The results of the analysis showed that pH and organic carbon (OC) were the most important factors contributing to Pb uptake by wheat grain. Using a cross-species extrapolation approach, the Pb uptake prediction models for cultivar Xiaoyan 22 in different soil Pb levels were satisfactorily applied to six additional non-modeled wheat varieties to develop a prediction model for each variety. Normalization of the bioaccumulation factor (BAF) to specific soil physico-chemistry is essential, because doing so could significantly reduce the intra-species variation of different wheat cultivars in predicted Pb transfer and eliminate the influence of soil properties on ecotoxicity parameters for organisms of interest. Finally, the prediction models were successfully verified against published data (including other wheat varieties and crops) and used to evaluate the ecological risk of Pb for wheat in contaminated agricultural soils. PMID:27518712
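Soil-to-grain transfer models of this kind are often log-linear in soil properties. The functional form and every coefficient below are invented for illustration; they are not the paper's fitted model.

```python
import numpy as np

def predict_grain_pb(ph, oc_pct, pb_soil, b=(1.2, -0.25, -0.3, 0.8)):
    """Hypothetical log-linear transfer model:
    log10(Pb_grain) = b0 + b1*pH + b2*log10(OC) + b3*log10(Pb_soil)."""
    b0, b1, b2, b3 = b
    return 10 ** (b0 + b1 * ph + b2 * np.log10(oc_pct) + b3 * np.log10(pb_soil))

# Lower pH should predict greater Pb uptake, all else equal
low_ph = predict_grain_pb(ph=5.0, oc_pct=1.0, pb_soil=200.0)
high_ph = predict_grain_pb(ph=8.0, oc_pct=1.0, pb_soil=200.0)
```

Normalizing a bioaccumulation factor to soil properties, as the abstract describes, is equivalent to dividing out the soil-dependent terms of such a model so that cultivar-to-cultivar differences stand alone.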
NASA Astrophysics Data System (ADS)
Lee, Seung-Jae; Wentz, Elizabeth A.
2008-01-01
Understanding water use in the context of urban growth and climate variability requires an accurate representation of regional water use. This is challenging, however, because water use data are often unavailable, and when they are available, they are geographically aggregated to protect the identity of individuals. The present paper aims to map local-scale estimates of water use in Maricopa County, Arizona, on the basis of data aggregated to census tracts and measured only in the City of Phoenix. To meet these goals, we describe two sources of data uncertainty (the extrapolation and downscaling processes) and then generate data that account for these uncertainty sources (i.e., soft data). Our results confirm that the Bayesian Maximum Entropy (BME) mapping method of modern geostatistics is a theoretically sound approach for assimilating the soft data into the mapping process. Our results show increased mapping accuracy over classical geostatistics, which does not account for the soft data. The confirmed BME maps therefore provide useful knowledge of local water use variability across the whole county, which is further applied to understanding the causal factors of urban water demand.
Comparison of Coronal Extrapolation Methods for Cycle 24 Using HMI Data
NASA Astrophysics Data System (ADS)
Arden, William M.; Norton, Aimee A.; Sun, Xudong; Zhao, Xuepu
2016-05-01
Two extrapolation models of the solar coronal magnetic field are compared using magnetogram data from the Solar Dynamics Observatory/Helioseismic and Magnetic Imager instrument. The two models, a horizontal current-current sheet-source surface (HCCSSS) model and a potential field-source surface (PFSS) model, differ in their treatment of coronal currents. Each model has its own critical variable, respectively, the radius of a cusp surface and a source surface, and it is found that adjusting these heights over the period studied allows for a better fit between the models and the solar open flux at 1 au as calculated from the Interplanetary Magnetic Field (IMF). The HCCSSS model provides the better fit for the overall period from 2010 November to 2015 May as well as for two subsets of the period: the minimum/rising part of the solar cycle and the recently identified peak in the IMF from mid-2014 to mid-2015 just after solar maximum. It is found that an HCCSSS cusp surface height of 1.7 R_⊙ provides the best fit to the IMF for the overall period, while 1.7 and 1.9 R_⊙ give the best fits for the two subsets. The corresponding values for the PFSS source surface height are 2.1, 2.2, and 2.0 R_⊙, respectively. This means that the HCCSSS cusp surface rises as the solar cycle progresses while the PFSS source surface falls.
Yoon, Miyoung; Clewell, Harvey J.
2016-01-01
Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical at different ages in order to predict the target tissue exposure at the age of concern in humans. We present our on-going studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps, using metabolism data collected either from age-specific liver donors or from expressed enzymes, in conjunction with enzyme ontogeny information, to provide age-appropriate metabolism parameters in the PBPK model in the rat and human, respectively. The approach we present here is readily applicable not just to other pyrethroids but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico based evaluation strategy, in conjunction with relevant human exposure information, is of great importance in risk assessment for potentially vulnerable populations, such as early life stages, for which the information needed for decision making is limited. PMID:26977255
NASA Astrophysics Data System (ADS)
Reggiannini, Ruggero
2015-12-01
This paper is concerned with spatial properties of linear arrays of antennas spaced less than half a wavelength apart. Possible applications are in multiple-input multiple-output (MIMO) wireless links, for the purpose of increasing the spatial multiplexing gain in a scattering environment, as well as in other areas such as sonar and radar. With reference to a receiving array, we show that knowledge of the received field can be extrapolated beyond the actual array size by exploiting the finiteness of the interval of real directions from which the field components impinge on the array. This property makes it possible to increase the angular resolution of the array. A simple signal processing technique is proposed that allows formation of a set of beams capable of covering the entire horizon uniformly with an angular resolution better than that achievable by a classical uniformly weighted half-wavelength-spaced linear array. The results are also applicable to active arrays. As the above approach leads to arrays operating in the super-directive regime, we discuss all the related critical aspects, such as sensitivity to external and internal noise, sensitivity to array imperfections, and bandwidth, so as to identify the basic design criteria that ensure the array's feasibility.
Hartree-Fock mass formulas and extrapolation to new mass data
NASA Astrophysics Data System (ADS)
Goriely, S.; Samyn, M.; Heenen, P.-H.; Pearson, J. M.; Tondeur, F.
2002-08-01
The two previously published Hartree-Fock (HF) mass formulas, HFBCS-1 and HFB-1 (HF-Bogoliubov), are shown to be in poor agreement with new Audi-Wapstra mass data. The problem lies first with the prescription adopted for the cutoff of the single-particle spectrum used with the δ-function pairing force, and second with the Wigner term. We find an optimal mass fit if the spectrum is cut off both above E_F + 15 MeV and below E_F - 15 MeV, E_F being the Fermi energy of the nucleus in question. In addition to the Wigner term of the form V_W exp(-λ|N-Z|/A) already included in the two earlier HF mass formulas, we find that a second Wigner term linear in |N-Z| leads to a significant improvement in lighter nuclei. These two features are incorporated into our new Hartree-Fock-Bogoliubov model, which leads to much improved extrapolations. The 18 parameters of the model are fitted to the 2135 measured masses for N, Z >= 8 with an rms error of 0.674 MeV. With this parameter set a complete mass table, labeled HFB-2, has been constructed, going from one drip line to the other, up to Z = 120. The new pairing-cutoff prescription favored by the new mass data leads to weaker neutron-shell gaps in neutron-rich nuclei.
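The two Wigner corrections described above can be sketched as a single function. The coefficients V_W, lam, and V_W2 below are placeholders, not the fitted HFB-2 values.

```python
import math

def wigner_correction(N: int, Z: int, V_W=-2.0, lam=300.0, V_W2=0.1):
    """Sum of the exponential Wigner term and the term linear in |N-Z|:
        E_W = V_W * exp(-lam * |N - Z| / A) + V_W2 * |N - Z|   (MeV)
    Coefficient values here are illustrative only."""
    A = N + Z
    return V_W * math.exp(-lam * abs(N - Z) / A) + V_W2 * abs(N - Z)

e_sym = wigner_correction(28, 28)    # N == Z: full exponential contribution
e_asym = wigner_correction(40, 28)   # N != Z: exponential term strongly damped
```

With a large damping constant, the exponential term contributes essentially only along the N = Z line, while the linear term grows with neutron excess, which is why it matters for lighter nuclei near N = Z.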
A community classification system for forest evaluation: Development, validation, and extrapolation.
Clatterbuck, W K
1996-01-01
A community classification system integrating vegetation and landforms was developed for the 8,054-ha Cheatham Wildlife Management Area (CWMA), located on the Western Highland Rim of Tennessee, USA, to obtain information on which to base multiresource land management decisions. A subjective procedure (synthesis tables) and several objective techniques (factor analysis, cluster analysis, and canonical discrimination) were used to evaluate importance values of overstory and midstory species, coverage values of understory species, and topographic parameters. These procedures were used collectively to guide and to provide evidence for interpretation of vegetational patterns on the landscape. The eight discrete communities identified on a 482-ha compartment within the CWMA were: northern red oak (Quercus rubra L.), chestnut oak (Q. prinus L.), scarlet oak (Q. coccinea Muenchh.), yellow-poplar (Liriodendron tulipifera L.), sycamore-sweetgum (Platanus occidentalis L.-Liquidambar styraciflua L.), black oak-hickory (Q. velutina Lam.-Carya spp.), post oak (Q. stellata Wangenh.), and American beech (Fagus grandifolia Ehrh.) communities. The classification system was validated with an independent data set. The eight communities were successfully extrapolated to an unsampled portion of the CWMA. Clearly, community analysis can become an important facet in forest management and may play a major role where a holistic understanding of vegetative relationships is essential. PMID:24198012
Cui, Jie; Krems, Roman V.; Li, Zhiying
2015-10-21
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He-C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
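A minimal Gaussian-process regression in the spirit of the approach above: learn how an observable (here a toy "cross section") depends on an unknown potential parameter, then predict it with an uncertainty band. The training data are synthetic, not the paper's trajectory results.

```python
import numpy as np

def rbf(x1, x2, length=0.5, amp=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = x1[:, None] - x2[None, :]
    return amp * np.exp(-0.5 * (d / length) ** 2)

x_train = np.array([0.0, 0.3, 0.6, 1.0])     # scaled potential parameter
y_train = np.sin(3.0 * x_train) + 2.0        # toy observable at those points

x_test = np.linspace(0.0, 1.0, 5)
K = rbf(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # jitter for stability
K_s = rbf(x_test, x_train)

mean = K_s @ np.linalg.solve(K, y_train)                   # posterior mean
cov = rbf(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))            # posterior sigma
```

The posterior standard deviation collapses at the training points and widens between them; sweeping the unknown parameter through its physically reasonable range and reading off `mean ± std` gives exactly the kind of prediction interval the abstract describes.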
Kinetics of HMX and CP Decomposition and Their Extrapolation for Lifetime Assessment
Burnham, A K; Weese, R K; Andrzejewski, W J
2005-03-10
Decomposition kinetics are determined for HMX (the nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazolato)pentaammine cobalt(III) perchlorate), separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself, from a diverse set of experiments, is linear from 120 to 260 °C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggest a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself for 50 months at 100 °C. Even with this acceleration, however, it would take ~10,000 years to achieve 10% decomposition at ~30 °C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades of aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100-120 °C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.
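The lifetime extrapolation above rests on Arrhenius scaling with the quoted activation energy. The sketch below computes only the relative slowdown between an aging-test temperature and storage temperature; the absolute lifetime requires the pre-exponential factor, which is not given here.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_ratio(Ea_J_mol: float, T_hot_K: float, T_cold_K: float) -> float:
    """Factor by which an Arrhenius decomposition rate slows on cooling:
    k(T_hot) / k(T_cold) = exp(Ea/R * (1/T_cold - 1/T_hot))."""
    return math.exp(Ea_J_mol / R * (1.0 / T_cold_K - 1.0 / T_hot_K))

# Slowdown going from an aging test at 120 °C to storage at 30 °C,
# using the apparent activation energy of 165 kJ/mol quoted above
slowdown = rate_ratio(165e3, 120 + 273.15, 30 + 273.15)
```

With this activation energy the rate drops by a factor of a few million between 120 °C and 30 °C, which is why months of accelerated aging can stand in for millennia of ambient storage.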
NASA Astrophysics Data System (ADS)
Marshall, H.; Sturm, M.; Elder, K.; Yueh, S. H.
2013-12-01
Calibration and validation of radar remote sensing of snow requires information about bulk snow properties such as depth, density, and SWE. Microwave remote sensing observations are sensitive to these bulk properties, but also to the vertical profile of microstructure, requiring in-situ stratigraphic observations of density, grain size and grain shape. Depth measurements can be performed rapidly without excavation, however detailed vertical snow profile information requires careful observations by an experienced observer, which can only be performed in relatively few locations. We use high resolution ground-based radar to extrapolate snow stratigraphy and SWE from snowpits, quantifying the spatial variability of major layer boundaries which cause significant radar reflections, in addition to high resolution radar estimates of SWE, and relating these to in-situ snowpits. The sub-footprint variability is used to help interpret coincident airborne radar backscatter and other remote sensing observations collected during the ESA/NASA AlpSAR campaign in Austria and Alaska-Canada Campaign in 2013.
An extrapolation method for compressive strength prediction of hydraulic cement products
Siqueira Tango, C.E. de
1998-07-01
The basis for the AMEBA method is presented. A strength-time function is used to extrapolate the predicted cementitious material strength at a late (ALTA) age, based on strengths at two earlier ages: medium (MEDIA) and low (BAIXA). The experimental basis for the method is data from the IPT-Brazil laboratory and the field, including a long-term study on concrete; research on limestone, slag, and fly-ash additions; and quality control data from a cement factory, a shotcrete tunnel lining, and a grout for structural repair. The method's applicability was also verified for high-performance concrete with silica fume. The formula for predicting late-age (e.g., 28-day) strength, for a given set of ages (e.g., 28, 7, and 2 days), is normally a function only of the strengths at the two earlier ages (e.g., 7 and 2 days). This equation has been shown to be independent of variations in materials, including cement brand, and is also easy to use graphically. Using the AMEBA method, and needing to know only the type of cement used, it has been possible to predict strengths satisfactorily, even without the preliminary tests that are required by other methods.
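Two-point strength extrapolation of this kind can be sketched as follows. The abstract does not reproduce the actual AMEBA strength-time function, so a generic log-age model f(t) = a + b·ln(t) stands in for it here, with invented example strengths.

```python
import math

def predict_late_strength(t_low, f_low, t_med, f_med, t_late):
    """Fit f = a + b*ln(t) through two early-age strengths and
    extrapolate to the late age.  (Stand-in model, not the AMEBA formula.)"""
    b = (f_med - f_low) / (math.log(t_med) - math.log(t_low))
    a = f_low - b * math.log(t_low)
    return a + b * math.log(t_late)

# e.g. 2-day and 7-day strengths (MPa) extrapolated to 28 days
f28 = predict_late_strength(2, 20.0, 7, 30.0, 28)
```

The key property the AMEBA method exploits is visible even in this stand-in: once the functional form is fixed, the late-age prediction depends only on the two early-age strengths.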
Knasmüller, S; Steinkellner, H; Majer, B J; Nobis, E C; Scharf, G; Kassie, F
2002-08-01
It is well documented that dietary factors play a crucial role in the aetiology of human cancer and strong efforts have been made to identify protective (antimutagenic and anticarcinogenic) substances in foods. Although numerous studies have been published, it is problematic to use these results for the development of nutritional strategies. The aim of this article is a critical discussion of the pitfalls and problems associated with the search for protective compounds. The main obstacles in regard to the extrapolation of the data to the human situation arise from: (i) the use of inadequate experimental in vitro models, which do not reflect protective mechanisms in man and therefore give misleading results; (ii) the use of genotoxins and carcinogens that are not relevant for humans; (iii) the lack of knowledge about dose-effect relationships of DNA-protective and cancer protective dietary constituents; (iv) the use of exposure concentrations in animal models which exceed by far the human exposure levels; and finally (v) the lack of knowledge on the time-kinetics of protective effects. More relevant data can be expected from in vitro experiments with cells possessing inducible phase I and phase II enzymes, short-term in vivo models with laboratory animals which enable the measurement of effects in organs that are targets for tumour formation, and human biomonitoring studies in which endpoints are used that are related to DNA damage and cancer. PMID:12067564
Krishnan, Kannan; Haddad, Sami; Béliveau, Martin; Tardif, Robert
2002-12-01
The available data on binary interactions are yet to be considered within the context of mixture risk assessment because of our inability to predict the effect of a third or a fourth chemical in the mixture on the interacting binary pairs. Physiologically based pharmacokinetic (PBPK) models represent a potentially useful framework for predicting the consequences of interactions in mixtures of increasing complexity. This article highlights the conceptual basis and validity of PBPK models for extrapolating the occurrence and magnitude of interactions from binary to more complex chemical mixtures. The methodology involves the development of PBPK models for all mixture components and interconnecting them at the level of the tissue where the interaction is occurring. Once all component models are interconnected at the binary level, the PBPK framework simulates the kinetics of all mixture components, accounting for the interactions occurring at various levels in more complex mixtures. This aspect was validated by comparing the simulations of a binary interaction-based PBPK model with experimental data on the inhalation kinetics of m-xylene, toluene, ethyl benzene, dichloromethane, and benzene in mixtures of varying composition and complexity. The ability to predict the kinetics of chemicals in complex mixtures by accounting for binary interactions alone within a PBPK model is a significant step toward the development of interaction-based risk assessment for chemical mixtures. PMID:12634130
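At the tissue level, the binary interactions that such interconnected PBPK models encode are typically metabolic. A minimal sketch of one common mechanism, competitive inhibition of hepatic metabolism; all parameter values are illustrative, not fitted to the solvents studied.

```python
def hepatic_metabolism_rate(C_sub, Vmax, Km, C_inh=0.0, Ki=float("inf")):
    """Michaelis-Menten rate with competitive inhibition by a co-exposed
    chemical: v = Vmax*C / (Km*(1 + C_inh/Ki) + C)."""
    return Vmax * C_sub / (Km * (1.0 + C_inh / Ki) + C_sub)

# Substrate metabolized alone vs. in a mixture with an inhibitor
alone = hepatic_metabolism_rate(C_sub=1.0, Vmax=10.0, Km=2.0)
mixture = hepatic_metabolism_rate(C_sub=1.0, Vmax=10.0, Km=2.0,
                                  C_inh=5.0, Ki=1.0)
```

Interconnecting one such term per chemical pair is what lets a PBPK model built from binary data alone propagate interaction effects through mixtures of higher complexity.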
Octet baryon masses and sigma terms from an SU(3) chiral extrapolation
Young, Ross; Thomas, Anthony
2009-01-01
We analyze the consequences of the remarkable new results for octet baryon masses calculated in 2+1-flavour lattice QCD using a low-order expansion about the SU(3) chiral limit. We demonstrate that, even though the simulation results are clearly beyond the power-counting regime, the description of the lattice results by a low-order expansion can be significantly improved by allowing the regularisation scale of the effective field theory to be determined by the lattice data itself. The model dependence of our analysis is demonstrated to be small compared with the present statistical precision. In addition to the extrapolation of the absolute values of the baryon masses, this analysis provides a method to solve the difficult problem of fine-tuning the strange-quark mass. We also report a determination of the sigma terms for all of the octet baryons, including an accurate value of the pion-nucleon sigma term and the first determination of the strangeness sigma term based on 2+1-flavour lattice QCD.
Telomere length and cardiovascular aging.
Fyhrquist, Frej; Saijonmaa, Outi
2012-06-01
Telomeres are located at the end of chromosomes. They are composed of repetitive TTAGGG tandem repeats and associated proteins of crucial importance for telomere function. Telomeric DNA is shortened by each cell division until a critical length is achieved and the cell enters senescence and eventually apoptosis. Telomeres are therefore considered a 'biological clock' of the cell. Telomerase adds nucleotides to telomeric DNA thereby contributing to telomere maintenance, genomic stability, functions, and proliferative capacity of the cell. In certain rare forms of progeria, point mutations within the telomere lead to accelerated telomere attrition and premature aging. Endogenous factors causing telomere shortening are aging, inflammation, and oxidative stress. Leukocyte telomere length (LTL) shortening is inhibited by estrogen and endogenous antioxidants. Accelerated telomere attrition is associated with cardiovascular risk factors such as age, gender, obesity, smoking, sedentary life-style, excess alcohol intake, and even mental stress. Cardiovascular (CV) diseases and CV aging are usually but not invariably associated with shorter telomeres than in healthy subjects. LTL appears to be a biomarker of CV aging, reflecting the cumulative burden of endogenous and exogenous factors negatively affecting LTL. Whether accelerated telomere shortening is cause or consequence of CV aging and disease is not clear. PMID:22713142
DOSE-RESPONSE BEHAVIOR OF ANDROGENIC AND ANTIANDROGENIC CHEMICALS: IMPLICATIONS FOR LOW-DOSE EXTRAPOLATION AND CUMULATIVE TOXICITY. LE Gray Jr, C Wolf, J Furr, M Price, C Lambright, VS Wilson and J Ostby. USEPA, ORD, NHEERL, EB, RTD, RTP, NC, USA.
Dose-response behavior of a...
Species Differences in Androgen and Estrogen Receptor Structure and Function Among Vertebrates and Invertebrates: Interspecies Extrapolations regarding Endocrine Disrupting Chemicals
VS Wilson1, GT Ankley2, M Gooding 1,3, PD Reynolds 1,4, NC Noriega 1, M Cardon 1, P Hartig1,...
There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...
NASA Astrophysics Data System (ADS)
He, Han; Wang, Huaning; Yan, Yihua
2011-01-01
The Hinode satellite can obtain high-quality photospheric vector magnetograms of solar active regions and simultaneous coronal loop images in soft X-ray and extreme ultraviolet (EUV) bands. In this paper, we continue the work of He and Wang (2008) and apply the newly developed upward boundary integration computational scheme for nonlinear force-free field (NLFFF) extrapolation of the coronal magnetic field to the photospheric vector magnetograms acquired by the Spectro-Polarimeter of the Solar Optical Telescope aboard Hinode. Three time-series vector magnetograms of the same solar active region, NOAA 10930, were selected for the NLFFF extrapolations; they were observed within a 26 h interval during 10-11 December 2006, when the active region crossed the central area of the Sun's disk. Parallel computation of the NLFFF extrapolation code was realized with OpenMP shared-memory multithreading in Fortran 95. The comparison between the extrapolated field lines and the coronal loop images obtained by the X-Ray Telescope and the EUV Imaging Spectrometer of Hinode shows that, in the central area of the active region, the field line configurations generally agree with the coronal images, and the orientations of the field lines basically coincide with the coronal loop observations for all three successive magnetograms. This result supports the use of the NLFFF model for tracing the time-series evolution of the 3-D coronal magnetic structures as the quasi-equilibrium solar atmosphere responds to vector magnetic field changes in the photosphere.
Ligand chain length conveys thermochromism.
Ganguly, Mainak; Panigrahi, Sudipa; Chandrakumar, K R S; Sasmal, Anup Kumar; Pal, Anjali; Pal, Tarasankar
2014-08-14
Thermochromic properties of a series of non-ionic copper compounds are reported. Herein, we demonstrate that Cu(II) ions coordinated jointly by a straight-chain primary amine (A) and alpha-linolenic acid (a fatty acid, AL) exhibit thermochromic properties. In the present case, we determined that the thermochromism is ligand-chain-length-dependent and that at least one of the ligands (A or AL) must be long-chain. The thermochromism is attributed to a balanced competition between the fatty acids and amines for the copper(II) centre. The structure-property relationship of the non-ionic copper compounds Cu(AL)2(A)2 has been substantiated by various physical measurements along with detailed theoretical studies based on time-dependent density functional theory. Our results suggest that the compound would be a useful material for temperature-sensor applications. PMID:24943491
Geometry of area without length
NASA Astrophysics Data System (ADS)
Ho, Pei-Ming; Inami, Takeo
2016-01-01
To define a free string by the Nambu-Goto action, all we need is the notion of area, and mathematically the area can be defined directly in the absence of a metric. Motivated by the possibility that string theory admits backgrounds where the notion of length is not well defined but a definition of area is given, we study space-time geometries based on the generalization of a metric to an area metric. In analogy with Riemannian geometry, we define the analogues of connections, curvatures, and Einstein tensor. We propose a formulation generalizing Einstein's theory that will be useful if at a certain stage or a certain scale the metric is ill defined and the space-time is better characterized by the notion of area. Static spherical solutions are found for the generalized Einstein equation in vacuum, including the Schwarzschild solution as a special case.
Using composite flow laws to extrapolate lab data on ice to nature
NASA Astrophysics Data System (ADS)
de Bresser, Hans; Diebold, Sabrina; Durham, William
2013-04-01
The progressive evolution of the grain size distribution of deforming and recrystallizing Earth materials directly affects their rheological behaviour in terms of composite grain-size-sensitive (GSS, diffusion/grain boundary sliding) and grain-size-insensitive (GSI, dislocation) creep. Over time, such microstructural evolution might result in strain progressing at a steady-state balance of GSS and GSI creep mechanisms. In order to arrive at a meaningful rheological description of materials deforming by combined GSS and GSI mechanisms, composite flow laws are required that bring together individual, laboratory-derived GSS and GSI flow laws, and that include full grain size distributions rather than single mean values representing the grain size. A composite flow law approach including grain size distributions has proven very useful in solving discrepancies between microstructural observations in natural calcite mylonites and extrapolations of relatively simple laboratory flow laws (Herwegh et al., 2005, J. Struct. Geol., 27, 503-521). In the current study, we used previous and new laboratory data on the creep behavior of water ice to investigate whether a composite flow law approach also results in better extrapolation of lab data to nature for ice. The new lab data resulted from static grain-growth experiments and from deformation experiments performed on samples with a starting grain size of either < 2 microns ("fine-grained ice") or 180-250 microns ("coarse-grained ice"). The deformation experiments were performed in a special cryogenic Heard-type deformation apparatus at temperatures of 180-240 K, confining pressures of 30-100 MPa, and strain rates between 1E-08/s and 1E-04/s. After the experiments, all samples were studied using cryogenic SEM and image analysis techniques. We also investigated natural microstructures in EPICA ice core samples from Dronning Maud Land, Antarctica. The temperature of the core ranges from 228 K at the surface to 272 K
Investigative and extrapolative role of microRNAs’ genetic expression in breast carcinoma
Usmani, Ambreen; Shoro, Amir Ali; Shirazi, Bushra; Memon, Zahida
2016-01-01
MicroRNAs (miRs) are non-coding ribonucleic acids consisting of about 18-22 nucleotide bases. Expression of several miRs can be altered in breast carcinomas in comparison to healthy breast tissue, or between various subtypes of breast cancer. They act as either oncogenes or tumor suppressors, which indicates that their expression is dysregulated in cancers. Some miRs are specifically associated with breast cancer and are affected by cancer-restricted signaling pathways, e.g. downstream of estrogen receptor-α or HER2/neu. The association of multiple miRs with breast cancer, and the fact that these post-transcriptional regulators may reshape complex functional networks of mRNAs, identify them as potential investigative, extrapolative and predictive tumor markers, as well as possible targets for treatment. Investigative tools that are currently available are RNA-based molecular techniques. An additional advantage of miRs in oncology is that they are remarkably stable and readily detectable in serum and plasma. A literature search was performed using the PubMed database; the keywords used were microRNA (52 searches) AND breast cancer (169 searches). PERN was used via the database of Bahria University; this included literature and articles from international sources, and 2 articles from Pakistan on this topic were consulted (one in an international journal and one in a local journal). Of these, 49 articles discussing the relation of microRNA genetic expression to breast cancer were shortlisted and consulted for this review. PMID:27375730
NASA Astrophysics Data System (ADS)
Guo, Y.; Ding, M. D.; Wiegelmann, T.; Li, H.
2008-06-01
The photospheric vector magnetic field of the active region NOAA 10930 was obtained with the Solar Optical Telescope (SOT) on board the Hinode satellite with a very high spatial resolution (about 0.3''). Observations of the two-ribbon flare on 2006 December 13 in this active region provide us with a good sample to study the magnetic field configuration related to the occurrence of the flare. Using the optimization method for nonlinear force-free field (NLFFF) extrapolation proposed by Wheatland et al. and recently developed by Wiegelmann, we derive the three-dimensional (3D) vector magnetic field configuration associated with this flare. The general topology can be described as a highly sheared core field and a quasi-potential envelope arch field. The core field clearly shows some dips supposed to sustain a filament. The free energy release in the flare, calculated as the difference between the energies contained in the NLFFF and the corresponding potential field, is 2.4 × 10^31 ergs, which is ~2% of the preflare potential field energy. We also calculate the shear angles, defined as the angles between the NLFFF and potential field, and find that they become larger at some particular sites in the lower atmosphere, while they become significantly smaller in most places, implying that the whole configuration gets closer to the potential field after the flare. The Ca II H line images obtained with the Broadband Filter Imager (BFI) of the SOT and the 1600 Å images with the Transition Region and Coronal Explorer (TRACE) show that the preflare heating occurs mainly in the core field. These results provide evidence in support of the tether-cutting model of solar flares.
Mangrove litter fall: Extrapolation from traps to a large tropical macrotidal harbour
NASA Astrophysics Data System (ADS)
Metcalfe, Kristin N.; Franklin, Donald C.; McGuinness, Keith A.
2011-11-01
Mangrove litter is a major source of organic matter for detrital food chains in many tropical coastal ecosystems, but scant attention has been paid to the substantial challenges in sampling and extrapolation of rates of litter fall. The challenges arise from within-stand heterogeneity, including incomplete canopy cover and canopy that is below the high tide mark. We sampled litter monthly for three years at 35 sites across eight mapped communities in the macrotidal Darwin Harbour, northern Australia. Totals were adjusted for mean community canopy cover and the occurrence of canopy below the high tide mark. The mangroves of Darwin Harbour generate an estimated average of 5.0 t ha^-1 yr^-1 of litter. This amount would have been overestimated by 32% had we not corrected for limited canopy cover and underestimated by 11% had we not corrected for foliage that is below the high tide mark. Had we made neither correction, we would have overestimated litter fall by 17%. Among communities, rates varied 2.6-fold per unit area of canopy and 3.9-fold per unit area of community. Seaward fringe mangroves were the most productive per unit of canopy area, but their canopy was relatively open; tidal creek forest was the most productive per unit area of community. Litter fall varied 1.1-fold among years and 2.0-fold among months, though communities exhibited a range of seasonalities. Our study may be the most extensively stratified and sampled evaluation of mangrove litter fall in a tropical estuary. We believe it is also the first such assessment to explicitly deal with canopy discontinuities; it demonstrates that failure to do so can result in considerable overestimation of mangrove productivity.
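The two adjustments described above are simple multiplicative corrections to the trap-based rate; a minimal sketch, with illustrative cover and submergence fractions (not the study's values):

```python
# Hypothetical sketch of the two corrections: trap totals are scaled down by
# community canopy cover (traps sit under canopy, but cover is incomplete)
# and scaled up for foliage below the high-tide mark (missed by the traps).
# All numbers here are illustrative, not the study's values.

def adjusted_litter_fall(trap_rate, canopy_cover, submerged_fraction):
    """Litter fall per unit community area (t/ha/yr).

    trap_rate          -- rate measured under canopy (t/ha/yr)
    canopy_cover       -- fraction of community area under canopy (0-1)
    submerged_fraction -- fraction of litter produced below the high-tide mark
    """
    return trap_rate * canopy_cover / (1.0 - submerged_fraction)

# Illustrative values only.
naive = 6.0  # uncorrected trap-based estimate
corrected = adjusted_litter_fall(6.0, canopy_cover=0.75, submerged_fraction=0.1)
overestimate_pct = 100.0 * (naive - corrected) / corrected
```

Scaling by canopy cover pulls the rate down (avoiding overestimation), while the submerged-foliage term scales it up (avoiding underestimation); with these placeholder fractions the uncorrected estimate is 20% too high.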
Cross-Species Extrapolation of Prediction Models for Cadmium Transfer from Soil to Corn Grain
Yang, Hua; Li, Zhaojun; Lu, Lu; Long, Jian; Liang, Yongchao
2013-01-01
Cadmium (Cd) is a highly toxic heavy metal for both plants and animals. The presence of Cd in agricultural soils is of great concern regarding its transfer in the soil-plant system. This study investigated the transfer of Cd (exogenous salts) from a wide range of Chinese soils to corn grain (Zhengdan 958). Through multiple stepwise regressions, prediction models were developed that combine the Cd bioconcentration factor (BCF) of Zhengdan 958 with soil pH, organic matter (OM) content, and cation exchange capacity (CEC). Moreover, these prediction models from Zhengdan 958 were applied to other non-model corn species through a cross-species extrapolation approach. The results showed that soil pH was the most important factor controlling Cd uptake, and that lower pH was more favorable for Cd bioaccumulation in corn grain. There was no significant difference among the three prediction models at the different Cd levels. When the prediction models were applied to other non-model corn species, the ratios between the predicted and measured BCF values were within a factor of 2 and close to the 1:1 line. Furthermore, these prediction models also reduced the measured intra-species BCF variability for all non-model corn species. Therefore, the prediction models established in this study can be applied to other non-model corn species and are useful for predicting Cd bioconcentration in corn grain and assessing the ecological risk of Cd in different soils. PMID:24324636
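The regression structure described above (BCF as a function of soil pH, OM, and CEC) can be sketched with ordinary least squares on synthetic data; the log-linear functional form and all coefficients below are assumptions for illustration, not the published model:

```python
import numpy as np

# Synthetic sketch: regress log10(BCF) on soil pH, log10(OM) and log10(CEC).
# Coefficients and data are made up; lower pH -> higher Cd uptake is encoded
# as a negative pH slope, matching the qualitative finding above.

rng = np.random.default_rng(0)
n = 40
pH = rng.uniform(4.5, 8.5, n)
OM = rng.uniform(5.0, 40.0, n)    # organic matter, g/kg (illustrative)
CEC = rng.uniform(5.0, 30.0, n)   # cation exchange capacity, cmol/kg

log_bcf = 1.2 - 0.25 * pH - 0.10 * np.log10(OM) - 0.05 * np.log10(CEC)
log_bcf = log_bcf + rng.normal(0.0, 0.02, n)  # measurement noise

# Design matrix with intercept; fit by least squares.
X = np.column_stack([np.ones(n), pH, np.log10(OM), np.log10(CEC)])
coef, *_ = np.linalg.lstsq(X, log_bcf, rcond=None)
```

The fitted pH coefficient comes out negative, reproducing the "lower pH favors Cd bioaccumulation" behavior the study reports.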
Confusion about Cadmium Risks: The Unrecognized Limitations of an Extrapolated Paradigm
Bernard, Alfred
2015-01-01
Background Cadmium (Cd) risk assessment presently relies on tubular proteinuria as a critical effect and urinary Cd (U-Cd) as an index of the Cd body burden. Based on this paradigm, regulatory bodies have reached contradictory conclusions regarding the safety of Cd in food. Adding to the confusion, epidemiological studies implicate environmental Cd as a risk factor for bone, cardiovascular, and other degenerative diseases at exposure levels that are much lower than points of departure used for setting food standards. Objective The objective was to examine whether the present confusion over Cd risks is not related to conceptual or methodological problems. Discussion The cornerstone of Cd risk assessment is the assumption that U-Cd reflects the lifetime accumulation of the metal in the body. The validity of this assumption as applied to the general population has been questioned by recent studies revealing that low-level U-Cd varies widely within and between individuals depending on urinary flow, urine collection protocol, and recent exposure. There is also evidence that low-level U-Cd increases with proteinuria and essential element deficiencies, two potential confounders that might explain the multiple associations of U-Cd with common degenerative diseases. In essence, the present Cd confusion might arise from the fact that this heavy metal follows the same transport pathways as plasma proteins for its urinary excretion and the same transport pathways as essential elements for its intestinal absorption. Conclusions The Cd risk assessment paradigm needs to be rethought taking into consideration that low-level U-Cd is strongly influenced by renal physiology, recent exposure, and factors linked to studied outcomes. Citation Bernard A. 2016. Confusion about cadmium risks: the unrecognized limitations of an extrapolated paradigm. Environ Health Perspect 124:1–5; http://dx.doi.org/10.1289/ehp.1509691 PMID:26058085
NASA Astrophysics Data System (ADS)
Galvagno, M.; Wohlfahrt, G.
2015-12-01
Gross primary productivity (GPP) is a key term in carbon cycle science. Because GPP is difficult or even impossible to quantify directly at the ecosystem scale, various methods are used to estimate it, such as eddy covariance CO2 flux partitioning, carbonyl sulfide exchange, sun-induced fluorescence, isotopes of CO2, and the photochemical reflectance index. The primary source of global GPP estimates is the FLUXNET project, within which GPP is estimated in a consistent fashion through eddy covariance flux partitioning at more than 700 sites globally. Since the net ecosystem CO2 exchange (NEE) reflects net uptake during daytime, when photosynthesis exceeds respiration, and net emission during nighttime due to ecosystem respiration (RECO), eddy covariance flux partitioning is based on the idea that daytime RECO may be inferred from direct nighttime NEE measurements, and consequently GPP can be obtained by subtracting RECO from NEE. However, the main assumption underlying this approach, namely that a temperature-dependent model of RECO parametrised on nighttime temperatures may be extrapolated to daytime temperatures, has not been conclusively tested. This study investigates whether nighttime measurements of RECO provide unbiased estimates of daytime RECO. To this end we used ecosystem respiration chambers in a mountain grassland which, by keeping the vegetation in the dark during the measurement, allowed us to directly quantify RECO during both day and night. These data, pooled by day, night, or day and night, were then used to parametrise temperature-dependent models of RECO. Results show that day and night RECO do not follow the same relationship with temperature and that RECO inferred from the nighttime parametrisation overestimates the true respiration. Potential reasons for this observed bias, such as the overestimation of daytime mitochondrial respiration, and implications for the quantification of GPP are discussed.
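The nighttime-to-daytime extrapolation being tested can be sketched with a simple Q10 temperature model; the Q10 form and all numbers are illustrative assumptions, not the study's parametrisation:

```python
import numpy as np

# Sketch of the flux-partitioning assumption: fit a Q10 model,
#   RECO = R10 * Q10**((T - 10) / 10),
# to nighttime data, then extrapolate to warmer daytime temperatures.
# Data are synthetic with R10 = 2.0 and Q10 = 2.5 (placeholders).

def fit_q10(temps, reco):
    """Log-linear least-squares fit returning (R10, Q10)."""
    slope, intercept = np.polyfit(temps, np.log(reco), 1)
    q10 = np.exp(10.0 * slope)              # slope = ln(Q10) / 10
    r10 = np.exp(intercept + 10.0 * slope)  # intercept = ln(R10) - ln(Q10)
    return r10, q10

night_T = np.array([2.0, 5.0, 8.0, 11.0])                # deg C
night_reco = 2.0 * 2.5 ** ((night_T - 10.0) / 10.0)      # synthetic fluxes

r10, q10 = fit_q10(night_T, night_reco)
day_reco_at_20C = r10 * q10 ** ((20.0 - 10.0) / 10.0)    # extrapolated value
```

The study's point is precisely that this extrapolated daytime value can be biased high relative to daytime RECO measured directly in darkened chambers.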
The P/Halley: Spatial distribution and scale lengths for C2, CN, NH2, and H2O
NASA Technical Reports Server (NTRS)
Fink, Uwe; Combi, Michael; Disanti, Michael A.
1991-01-01
From P/Halley long slit spectroscopic exposures on 12 dates, extending from Oct. 1985 to May 1986, spatial profiles were obtained for emissions by C2, CN, NH2, and OI(1D). Haser model scale lengths were fitted to these data. The extended time coverage allowed checking for consistency between the various dates. The time-varying production rate of P/Halley severely affected the profiles after perihelion, which is shown in two profile sequences on adjacent dates. Because of the time-varying production rate, it was not possible to obtain reliable Haser model scale lengths after perihelion. The pre-perihelion analysis yielded Haser model scale lengths of sufficient consistency that they can be used for production rate determinations whenever it is necessary to extrapolate from observed column densities within finite observing apertures. Results of scale lengths reduced to 1 AU are given and discussed.
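The Haser model fitted above has a standard closed form for the radial density of a daughter species; a minimal sketch, where the production rate Q, outflow speed v, and scale lengths are placeholders in consistent units:

```python
import numpy as np

# Standard Haser daughter-species number density at cometocentric distance r:
#   n(r) = Q / (4 pi v r^2) * l_d / (l_p - l_d) * (exp(-r/l_p) - exp(-r/l_d))
# with parent scale length l_p and daughter scale length l_d. Column densities
# for a finite aperture would follow by integrating this along the line of
# sight. Q, v, and the scale lengths below are placeholders.

def haser_daughter_density(r, Q, v, l_parent, l_daughter):
    """Daughter number density; r and both scale lengths in the same units."""
    pref = Q / (4.0 * np.pi * v * r**2)
    return pref * (l_daughter / (l_parent - l_daughter)) * (
        np.exp(-r / l_parent) - np.exp(-r / l_daughter)
    )

# Illustrative evaluation well inside and well outside the daughter scale length.
d_in = haser_daughter_density(1.0e4, Q=1.0e29, v=1.0, l_parent=2.0e4, l_daughter=6.0e4)
d_out = haser_daughter_density(1.0e6, Q=1.0e29, v=1.0, l_parent=2.0e4, l_daughter=6.0e4)
```

Note that for the typical case l_parent < l_daughter both factors in the product are negative, so the density is positive, and it falls off steeply beyond the daughter scale length.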
Diagnostic extrapolation of gross primary production from flux tower sites to the globe
NASA Astrophysics Data System (ADS)
Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Baldocchi, Dennis; Luyssaert, Sebastiaan; Papale, Dario
2010-05-01
The uptake of atmospheric CO2 by plant photosynthesis is the largest global carbon flux and is thought to drive most terrestrial carbon cycle processes. While the photosynthesis processes at the leaf and canopy levels are quite well understood, so far only very crude estimates of its global integral, the gross primary production (GPP), can be found in the literature. Existing estimates have lacked a sound empirical basis. Reasons for such limitations lie in the absence of direct estimates of ecosystem-level GPP and methodological difficulties in scaling local carbon flux measurements to the global scale across heterogeneous vegetation. Here, we present global estimates of GPP based on different diagnostic approaches. These up-scaling schemes integrated high-resolution remote sensing products, such as land cover, the fraction of absorbed photosynthetically active radiation (fAPAR) and leaf-area index, with carbon flux measurements from the global network of eddy covariance stations (FLUXNET). In addition, meteorological datasets from diverse sources and river runoff observations were used. All of the above-mentioned approaches were also capable of estimating uncertainties. With six novel or newly parameterized and highly diverse up-scaling schemes we consistently estimated a global GPP of 122 Pg C yr^-1. In the quantification of the total uncertainties, we considered uncertainties arising from the measurement technique and data processing (i.e. partitioning into GPP and respiration). Furthermore, we accounted for the uncertainties of drivers and the structural uncertainties of the extrapolation approach. The total propagation led to a global uncertainty of 15% of the mean value. Although our mean GPP estimate of 122 Pg C yr^-1 is similar to the previous postulate by the Intergovernmental Panel on Climate Change in 2001, we estimated a different variability among ecoregions. The tropics accounted for 32% of GPP, showing a greater importance of tropical ecosystems for the global carbon
Skeletal 212Pb retention following 224Ra injection: extrapolation of animal data to adult humans.
Schlenker, R A
1988-04-01
Two methods of interspecies extrapolation, one based on a correlation of skeletal 212Pb/224Ra with body weight, the other based on the mechanistic relationship between skeletal 212Pb/224Ra and reciprocal bone surface-to-volume ratio, lead to the conclusion that the retention of 212Pb in the adult human skeleton is approximately complete a few days after injection. The correlation-based method gives most probable values for 212Pb/224Ra of 1.0 and 1.1 at 2 d and 7 d after injection, compared with values of 1.05 and 1.27 expected at these same times if the retention of 212Pb were complete from the time of injection and if no 212Pb were in the injection solution. The range of values corresponding to one geometric standard error on either side of the most probable value is 0.87 to 1.21 at 2 d post-injection. With the method based on the reciprocal bone surface-to-volume ratio, the best estimate of 212Pb/224Ra at 2 d after injection is 0.88, equal to the value observed in young adult beagles. An alternative interpretation of the results of this latter method leads to the conclusion that retention is complete, with 212Pb/224Ra equal to 1.0 for a 212Pb-free injection solution and 1.1 for a solution containing 212Pb in secular equilibrium with 224Ra. This work, which uses 224Ra daughter product retention data from mice, rats and dogs following 224Ra injection, provides a scientific foundation for retention assumptions made in the calculation of mean skeletal dose for adult humans. There now appear to be few uncertainties in these latter dose values, stemming from inaccurate retention assumptions; but substantial uncertainties remain in the mean skeletal dose values for juveniles and in the endosteal tissue doses regardless of age. Risk coefficients such as those in the BEIR III report that give the lifetime probability of bone tumor induction per unit mean skeletal dose may be correct for adult humans but are probably too low for juveniles due to overestimation of juvenile
Baldwin, David H; Spromberg, Julann A; Collier, Tracy K; Scholz, Nathaniel L
2009-12-01
growth and size at ocean entry of juvenile chinook. The consequent reduction in individual survival over successive years reduces the intrinsic productivity (lambda) of a modeled ocean-type chinook population. Overall, we show that exposures to common pesticides may place important constraints on the recovery of ESA-listed salmon species, and that simple models can be used to extrapolate toxicological impacts across several scales of biological complexity. PMID:20014574
NASA Astrophysics Data System (ADS)
Aronson, E. L.; Helliker, B. R.; Strode, S. A.; Pawson, S.
2011-12-01
Global soil methane consumption was estimated using multiple regression-based parameterizations by vegetation type, derived from a meta-dataset of 780 published methane flux measurements. The average global estimates for soil consumption by extrapolation, without taking snow cover into account, totaled 54-60 Tg annually. The parameterizations were based on air temperature and precipitation variables reported in the literature and gathered in the meta-dataset. These variables were matched to similar ones reported in the Goddard Earth Observing System (GEOS) global climate model. The methane uptake response to increasing precipitation and temperature varied between vegetation types. The parameterizations for methane fluxes by vegetation type were included in a 20 year, free-running, tagged-methane run of the GEOS-5 model constrained by real observations of sea surface temperature. Snow cover was assumed to block methane diffusion into the soil and therefore to result in zero consumption of methane in snow-covered soils. The parameterization estimate, at around 37 Tg annually, was slightly higher than previous estimates of global methane consumption. The resultant global surface methane concentration was then compared to observed methane concentrations from NOAA Global Monitoring Division sites worldwide, with varying agreement. The parameterization for the vegetation type "Needleleaf Trees" predicted methane consumption at a site in the NJ Pinelands studied in 2009. The estimate of methane consumption for the vegetation type "Broadleaf Evergreen Trees" was found to have the greatest error, which may indicate that the factors on which the parameterization was based are of minor importance in regulating methane flux within this vegetation type. The results were compared to offline runs of the parameterizations without the snow-cover compensation, which resulted in global rates of almost double the methane consumption. Since there have been
Minimal length uncertainty and accelerating universe
NASA Astrophysics Data System (ADS)
Farmany, A.; Mortazavi, S. S.
2016-06-01
In this paper, minimal length uncertainty is used as a constraint to solve the Friedmann equation. It is shown that, based on the minimal length uncertainty principle, the Hubble scale is decreasing, which corresponds to an accelerating universe.
Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.
Fitzgerald, R
2016-03-01
The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
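The low-order polynomial efficiency extrapolation that the Monte Carlo analysis above refines can be sketched as a fit of the observed rate against an inefficiency parameter, extrapolated to 100% efficiency; the data and the linear response below are synthetic placeholders:

```python
import numpy as np

# Sketch of conventional efficiency extrapolation in 4pi(beta)-gamma counting:
# the gated rate is fit as a low-order polynomial in the inefficiency
# parameter (1 - eff)/eff, and extrapolated to zero inefficiency (eff -> 1),
# where the rate equals the source activity. Data are synthetic; real
# analyses use measured rates from one or more gamma coincidence gates.

true_activity = 1000.0                       # Bq (synthetic)
eff = np.array([0.70, 0.78, 0.85, 0.92])     # beta efficiencies from gating
x = (1.0 - eff) / eff                        # inefficiency parameter
rate = true_activity * (1.0 - 0.15 * x)      # synthetic linear response

coeffs = np.polyfit(x, rate, 1)              # low-order (here linear) fit
activity_estimate = np.polyval(coeffs, 0.0)  # extrapolate to eff = 1
```

The paper's point is that a Geant4-based simulation of the detector response gives a more realistic central value and uncertainty than this least-squares extrapolation alone.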
Lee, Jung-Won; Choi, Jeung-Yoon; Kang, Hong-Goo
2012-02-01
Knowledge-based speech recognition systems extract acoustic cues from the signal to identify speech characteristics. For channel-deteriorated telephone speech, acoustic cues, especially those for stop consonant place, are expected to be degraded or absent. To investigate the use of knowledge-based methods in degraded environments, feature extrapolation of acoustic-phonetic features based on Gaussian mixture models is examined. This process is applied to a stop place detection module that uses burst release and vowel onset cues for consonant-vowel tokens of English. Results show that classification performance is enhanced in telephone channel-degraded speech, with extrapolated acoustic-phonetic features reaching or exceeding performance using estimated Mel-frequency cepstral coefficients (MFCCs). Results also show acoustic-phonetic features may be combined with MFCCs for best performance, suggesting these features provide information complementary to MFCCs. PMID:22352523
Bachmann, Talis; Murd, Carolina; Põder, Endel
2012-09-01
One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered a particularly suggestive example of this capacity (Nijhawan in Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Thus, because of the involvement of mechanisms of extrapolation and visual prediction, a moving object is perceived ahead of a simultaneously flashed static object that is objectively aligned with it. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus, approaching it before the flash, does not diminish the flash-lag effect but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed. PMID:21818621
Code of Federal Regulations, 2014 CFR
2014-07-01
... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...
Zheng, R; Zhang, W; Li, Y; Huang, J; Yang, D
1998-02-01
The EDXRF extrapolate-regression method described in this paper combines a regression method with the fundamental formula of fluorescence intensity. The contents of Ni and Pd in white karat gold jewellery were calculated theoretically from the spectrum of the sample. The gold content was determined without standards. The precision was 0.1% and the deviation was 0.3% compared with AA. PMID:15810348
Zhu, Liqin; Zhang, Yuan; Yang, Jianwei; Wang, Yongming; Zhang, Jianlei; Zhao, Yuanyuan; Dong, Weilin
2016-08-01
This study developed a physiologically based pharmacokinetic (PBPK) model in intra-abdominally infected rats and extrapolated it to humans to predict levofloxacin pharmacokinetics and penetration into tissues. Twelve male rats with intra-abdominal infections induced by Escherichia coli received a single dose of 50 mg/kg body weight of levofloxacin. Blood plasma was collected at 5, 10, 20, 30, 60, 120, 240, 480 and 1440 min after injection. A PBPK model was developed in rats and extrapolated to humans using GastroPlus software. The predictions were assessed by comparing predictions and observations. In the plasma concentration-versus-time profile of levofloxacin in rats, Cmax was 23.570 μg/ml at 5 min after intravenous injection, and t1/2 was 2.38 h. The plasma concentration and kinetics in humans were predicted and validated against the observed data. Levofloxacin penetrated and accumulated with high concentrations in the heart, liver, kidney, spleen, muscle and skin tissues in humans. The predicted tissue-to-plasma concentration ratios in abdominal viscera were between 1.9 and 2.3. When rat plasma concentrations were known, extrapolation of a PBPK model was a method to predict drug pharmacokinetics and penetration in humans. Levofloxacin had good penetration into the liver, kidney and spleen as well as other tissues in humans. This pathological model extrapolation may provide a reference for the study of anti-infective PK/PD. In our study, levofloxacin penetrated well into abdominal organs. ADR monitoring should also be implemented when using levofloxacin. PMID:25753830
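The reported Cmax and t1/2 allow a quick consistency check of the plasma kinetics. The following is a minimal sketch assuming a simple mono-exponential (one-compartment) decay, not the full PBPK model the study built in GastroPlus; the 4 h time point is an arbitrary illustration:

```python
import numpy as np

# One-compartment sketch of the plasma decay reported in the abstract:
# Cmax = 23.570 ug/mL at ~5 min, t1/2 = 2.38 h. The mono-exponential
# form is an assumption for illustration only.
c_max = 23.570             # ug/mL
t_half = 2.38              # hours
k_el = np.log(2) / t_half  # first-order elimination rate constant, 1/h

def concentration(t_hours):
    """Predicted plasma concentration t hours after the peak."""
    return c_max * np.exp(-k_el * t_hours)

c_4h = concentration(4.0)  # ug/mL; roughly 7.4 under these assumptions
```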
Yang, JianWei; Zhang, Yuan; Wang, YongMing; Zhang, JianLei; Zhao, YuanYuan; Dong, WeiLin
2015-01-01
The aim of this study was to develop a physiologically based pharmacokinetic (PBPK) model in intra-abdominally infected rats and extrapolate it to humans to predict moxifloxacin pharmacokinetic profiles in various tissues of intra-abdominally infected humans. Twelve male rats with intra-abdominal infections, induced by Escherichia coli, received a single dose of 40 mg/kg body weight of moxifloxacin. Blood plasma was collected at 5, 10, 20, 30, 60, 120, 240, 480 and 1440 min after drug injection. A PBPK model was developed in rats and extrapolated to humans using GastroPlus software. The predictions were assessed by comparing predictions and observations. In the plasma concentration-versus-time profile of moxifloxacin in rats, Cmax was 11.151 µg/mL at 5 min after the intravenous injection and t1/2 was 2.936 h. Plasma concentration and kinetics in humans were predicted and compared with observed data. Moxifloxacin penetrated and accumulated with high concentrations in red marrow, lung, skin, heart, liver, kidney, spleen and muscle tissues in humans with intra-abdominal infection. The predicted tissue-to-plasma concentration ratios in abdominal viscera were between 1.1 and 2.2. When rat plasma concentrations were known, extrapolation of a PBPK model was a method to predict drug pharmacokinetics and penetration in humans. Moxifloxacin has good penetration into the liver, kidney and spleen, as well as other tissues, in intra-abdominally infected humans. Close monitoring is necessary when using moxifloxacin due to its high concentration distribution. This pathological model extrapolation may provide a reference for the PK/PD study of antibacterial agents. PMID:25729270
NASA Astrophysics Data System (ADS)
Gao, Qiong; Jiang, Zongfu; Liao, Tianhe; Song, Kaiyang
2010-11-01
The vector ɛ and ρ extrapolation methods are applied to accelerate the convergence of the Richardson-Lucy (R-L) algorithm and its damped version. The theory and implementation are discussed in detail, and relevant numerical results are given, including the cases of noise-free images and images corrupted by Poisson noise. The results show that vector ɛ and ρ extrapolations of order 9 can speed up the convergence quite efficiently, and the ρ(9) method is more powerful than the ɛ(9) method for noisy degraded images. The extra computational burden due to the extrapolation is limited and is well repaid by the accelerated convergence. The performance of these two methods is compared with the well-known automatic acceleration method. For noise-free degraded images, the vector ɛ(9) and ρ(9) methods are more stable than the automatic method. For noisy degraded images, the damped R-L algorithm accelerated by the vector ρ(9) or automatic methods is more powerful, and the instability of the automatic method is restrained by the damping strategy. We attribute the instability of the automatic method in accelerating the normal R-L algorithm to numerical noise arising from its frequent application during the run.
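For context, the basic Richardson-Lucy iterate that these vector extrapolation methods accelerate can be sketched in a few lines. This is a minimal 1-D illustration with a hypothetical two-spike signal; the ɛ/ρ acceleration step itself is not shown:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Plain 1-D Richardson-Lucy deconvolution (multiplicative update)."""
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Hypothetical test signal: two spikes blurred by a small symmetric PSF.
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.5
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(observed, psf)  # estimate sharpens back toward the spikes
```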
Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre
2016-01-15
Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially at the trace concentrations found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (Carbamazepine, Diclofenac, Caffeine and Acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L(-1)) to low concentrations (ng L(-1)), was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) show weaknesses when used to describe adsorption isotherms at low concentrations. However, from these results, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation was identified as the best combination, enabling the extrapolation of adsorption capacities over orders of magnitude. PMID:26606322
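As an illustration of the winning combination, a Langmuir-Freundlich (Sips) isotherm, q = qm (K C)^n / (1 + (K C)^n), can be fitted with relative-error weighting, which approximates minimizing Marquardt's percent standard deviation. The data, units, and parameter values below are hypothetical, a sketch rather than the study's own fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir_freundlich(c, qm, k, n):
    """Langmuir-Freundlich (Sips) isotherm: q = qm*(K*C)^n / (1 + (K*C)^n)."""
    kcn = (k * c) ** n
    return qm * kcn / (1.0 + kcn)

# Hypothetical equilibrium data spanning several orders of magnitude in C.
c = np.logspace(-3, 1, 20)                      # concentration (e.g., mg/L)
rng = np.random.default_rng(0)
q_obs = langmuir_freundlich(c, 120.0, 2.0, 0.8) * (
    1.0 + 0.02 * rng.standard_normal(c.size))   # 2% relative noise

# sigma=q_obs weights residuals by 1/q, i.e., a relative-error criterion
# in the spirit of Marquardt's percent standard deviation.
popt, _ = curve_fit(langmuir_freundlich, c, q_obs, p0=(100.0, 1.0, 1.0),
                    sigma=q_obs, bounds=(0.0, np.inf))
qm_fit, k_fit, n_fit = popt
```

Weighting by the observed values keeps the tiny low-concentration points from being swamped by the large high-concentration residuals, which is exactly the failure mode the abstract describes for RSS.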
NASA Astrophysics Data System (ADS)
Jiang, Chao-Wei; Feng, Xue-Shang
2016-01-01
In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. Up to the present, the extrapolation of the magnetic field from boundary data has been the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover the coronal magnetic flux rope is important for coronal field extrapolation. In this paper, our coronal field extrapolation code is examined with an analytical magnetic flux rope model proposed by Titov & Démoulin, which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. By only using the vector field at the bottom boundary as input, we test our code with the model in a representative range of parameter space and find that the model field can be reconstructed with high accuracy. In particular, the magnetic topological interfaces formed between the flux rope and the surrounding arcade, i.e., the “hyperbolic flux tube” and “bald patch separatrix surface,” are also reliably reproduced. By this test, we demonstrate that our CESE-MHD-NLFFF code can be applied to recovering the magnetic flux rope in the solar corona as long as the vector magnetogram satisfies the force-free constraints.
Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT
Xia, Y.; Maier, A.; Berger, M.; Hornegger, J.; Bauer, S.
2015-04-15
Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear ad hoc. Our goal is to improve the image quality of VOI imaging by exploiting patient-specific prior information already present in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views, based on which the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in a position to substantially improve image quality by forcing the extrapolated line profiles to end at the known patient boundaries derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on
Full-disk nonlinear force-free field extrapolation of SDO/HMI and SOLIS/VSM magnetograms
NASA Astrophysics Data System (ADS)
Tadesse, T.; Wiegelmann, T.; Inhester, B.; MacNeice, P.; Pevtsov, A.; Sun, X.
2013-02-01
Context. The magnetic field configuration is essential for understanding solar explosive phenomena such as flares and coronal mass ejections. To overcome the unavailability of coronal magnetic field measurements, photospheric magnetic field vector data can be used to reconstruct the coronal field. Two complications of this approach are that the measured photospheric magnetic field is not force-free and that one has to apply a preprocessing routine to achieve boundary conditions suitable for force-free modeling. Furthermore, the nonlinear force-free extrapolation code should take into account uncertainties in the photospheric field data, which occur due to noise, incomplete inversions, or azimuth ambiguity-removing techniques. Aims: Extrapolation codes in Cartesian geometry for modeling the magnetic field in the corona do not take the curvature of the Sun's surface into account and can only be applied to relatively small areas, e.g., a single active region. Here we apply a method for nonlinear force-free coronal magnetic field modeling and preprocessing of photospheric vector magnetograms in spherical geometry, using the optimization procedure, to full-disk vector magnetograms. We compare the analysis of the photospheric magnetic field and subsequent force-free modeling based on full-disk vector maps from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) and the Vector Spectromagnetograph (VSM) of the Synoptic Optical Long-term Investigations of the Sun (SOLIS). Methods: We used HMI and VSM photospheric magnetic field measurements to model the force-free coronal field above multiple solar active regions, assuming magnetic forces to dominate. We solved the nonlinear force-free field equations by minimizing a functional in spherical coordinates over the full disk, excluding the poles. After searching for the optimum modeling parameters for the particular data sets, we compared the resulting nonlinear force-free model fields. We compared
Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.; Mundy, William R.; Eklund, Chris R.; Johnstone, Andrew F.M.; Mack, Cina M.; Pegram, Rex A.
2015-02-15
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC
Demonstration of ELM pacing by Pellet Injection on DIII-D and Extrapolation to ITER
Baylor, Larry R; Commaux, Nicolas JC; Jernigan, Thomas C; Parks, P. B.; Evans, T.E.; Osborne, T. H.; Strait, E. J.; Fenstermacher, M. E.; Lasnier, C. J.; Moyer, R.A.; Yu, J.H.
2010-01-01
Deuterium pellet injection has been used in experiments on the DIII-D tokamak to investigate the possibility of triggering small rapid edge-localized modes (ELMs) in reactor-relevant plasma regimes. ELMs have been observed to be triggered by small 1.8 mm pellets injected from all available locations and under all H-mode operating scenarios in DIII-D. Experimental details have shown that the ELMs are triggered before the pellets reach the top of the H-mode pedestal, implying that very small, shallowly penetrating pellets are sufficient to trigger ELMs. Fast camera images of a pellet entering the plasma from the low-field side show a single plasma filament becoming visible near the pellet cloud and striking the outer vessel wall within 200 ms. Additional ejected filaments are then observed to subsequently reach the wall. The plasma stored energy loss from pellet-triggered ELMs is a function of the elapsed time since the previous ELM. Pellet ELM pacing has been proposed as a method to prevent large ELMs that can damage the ITER plasma-facing components [1]. A demonstration of ELM pacing on DIII-D was made by injecting slow 14 Hz pellets on the low-field side in an ITER-shape plasma with a low natural ELM frequency and a normalized β of 1.8. The non-pellet discharge's natural ELM frequency was ~5 Hz with ELM energy losses up to 85 kJ (>10% of total stored energy), while the case with pellets demonstrated >20 Hz ELMs with an average ELM energy loss of less than 22 kJ (<3% of the total). The resulting ELM frequency
Klobusicky, Joseph J.; Aryasomayajula, Arun; Marko, Nicholas
2015-01-01
Efforts toward improving patient compliance with medication focus either on identifying trends in patient features or on studying changes through an intervention. Our study seeks to provide an important link between these two approaches by defining trends of evolving compliance. In addition to using clinical covariates provided through insurance claims and health records, we also extracted census-based data to provide socioeconomic covariates such as income and population density. By creating quadrants based on periods of medicine intake, we derive several novel definitions of compliance. These definitions revealed additional compliance trends by considering refill histories later in a patient's length of therapy. These results suggest that the link between patient features and compliance includes a temporal component and should be considered in policymaking when identifying compliant subgroups. PMID:26958212
Torok, Aaron
2011-10-24
The π⁺Σ⁺ and π⁺Ξ⁰ scattering lengths were calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks was used to perform the chiral extrapolations. To NNLO in the three-flavor chiral expansion, the kaon-baryon processes that were investigated show no signs of convergence. Using the two-flavor chiral expansion for extrapolation, the pion-hyperon scattering lengths are found to be a(π⁺Σ⁺) = -0.197 ± 0.017 fm and a(π⁺Ξ⁰) = -0.098 ± 0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.
Controlling Arc Length in Plasma Welding
NASA Technical Reports Server (NTRS)
Iceland, W. F.
1986-01-01
Circuit maintains arc length on irregularly shaped workpieces. Length of plasma arc continuously adjusted by control circuit to maintain commanded value. After pilot arc is established, contactor closed and transfers arc to workpiece. Control circuit then half-wave rectifies ac arc voltage to produce dc control signal proportional to arc length. Circuit added to plasma arc welding machines with few wiring changes. Welds made with circuit cleaner and require less rework than welds made without it. Beads smooth and free of inclusions.
Dither Cavity Length Controller with Iodine Locking
NASA Astrophysics Data System (ADS)
Lawson, Marty; Eloranta, Ed
2016-06-01
A cavity length controller for a seeded, Q-switched, frequency-doubled Nd:YAG laser is constructed. The controller uses a piezo-mirror dither voltage to find the optimum length for the seeded cavity. The piezo-mirror dither also dithers the optical frequency of the output pulse [1]. This dither in optical frequency is then used to lock to an iodine absorption line.
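The dither-lock principle can be illustrated numerically: modulate the cavity length, demodulate the resulting intensity at the dither frequency, and the demodulated value is proportional to the slope of the resonance, crossing zero at the optimum length. The Gaussian line shape and all numbers below are idealized assumptions for illustration, not the authors' hardware:

```python
import numpy as np

def resonance(x):
    """Idealized (Gaussian) cavity/absorption profile vs. length offset."""
    return np.exp(-x ** 2)

def error_signal(offset, dither_amp=0.05, n=1000):
    """Lock-in demodulation of the dithered intensity at the dither frequency."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    intensity = resonance(offset + dither_amp * np.sin(t))
    return np.mean(intensity * np.sin(t))

# The error signal is ~zero on resonance and changes sign across it,
# so a servo can steer the piezo toward the optimum cavity length.
e_neg, e_zero, e_pos = error_signal(-0.5), error_signal(0.0), error_signal(0.5)
```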
Measuring Crack Length in Coarse Grain Ceramics
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Ghosn, Louis J.
2010-01-01
Due to a coarse grain structure, crack lengths in precracked spinel specimens could not be measured optically, so the crack lengths and fracture toughness were estimated by strain gage measurements. An expression was developed via finite element analysis to correlate the measured strain with crack length in four-point flexure. The fracture toughness estimated by the strain gaged samples and another standardized method were in agreement.
Pi Bond Orders and Bond Lengths
ERIC Educational Resources Information Center
Herndon, William C.; Parkanyi, Cyril
1976-01-01
Discusses three methods of correlating bond orders and bond lengths in unsaturated hydrocarbons: the Pauling theory, the Huckel molecular orbital technique, and self-consistent-field techniques. (MLH)
Required length of guardrails before hazards.
Tomasch, E; Sinz, W; Hoschopf, H; Gobald, M; Steffan, H; Nadler, B; Nadler, F; Strnad, B; Schneider, F
2011-11-01
One way to protect against impacts during run-off-road accidents with infrastructure is the use of guardrails. However, real-world accidents indicate that vehicles can leave the road and end up behind the guardrail. These vehicles have no possibility of returning to the lane. Vehicles often end up behind the guardrail because the length of guardrail installed before hazards is too short; this can lead to a collision with a shielded hazard. To identify the basic speed for determining the necessary length of guardrails, we analyzed the speed at which vehicles leave the roadway using the ZEDATU (Zentrale Datenbank Tödlicher Unfälle) real-world accident database. The required guardrail length was taken to be the length that brings the vehicle to a stop behind the barrier at a maximum theoretically possible deceleration of 0.3 g, based on real-world road-departure speeds. To determine the desired length of guardrail ahead of a hazard, we developed a relationship between guardrail length and the speed at which vehicles depart the roadway. If the initial elements are flared away from the carriageway, the required length is reduced by up to an additional 30%. The ZEDATU database analysis showed that extending the current length of guardrails to the evaluated required length would reduce the number of fatalities among occupants of vehicles striking bridge abutments by approximately eight percent. PMID:21819841
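The stopping-length idea in the abstract reduces to simple kinematics: the guardrail must be at least v²/(2a) long, with a = 0.3 g. A short sketch (the 80 km/h departure speed is an arbitrary example, not a value from the study):

```python
G = 9.81  # gravitational acceleration, m/s^2

def required_length_m(departure_speed_kmh, decel_g=0.3):
    """Distance to stop from the road-departure speed, L = v^2 / (2*a),
    using the abstract's assumed maximum deceleration of 0.3 g."""
    v = departure_speed_kmh / 3.6  # km/h -> m/s
    return v ** 2 / (2.0 * decel_g * G)

length = required_length_m(80.0)  # ~84 m for an 80 km/h departure
```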
Invariant length of a cosmic string
NASA Astrophysics Data System (ADS)
Anderson, Malcolm R.
1990-06-01
The world sheet of a cosmic string is characterized by a function l, invariant under both coordinate and gauge transformations, which can be interpreted as the "invariant length" of the string. In flat space, l reduces to the invariant length of Vachaspati and Vilenkin, and gives an upper bound for the actual length of the string, and a lower bound for its energy, as measured by any inertial observer. In curved spacetime, time variations in the invariant length divide naturally into two parts: one due to the tidal tensor at points exterior to the world sheet and one due to the tidal tensor at points on the world sheet itself.
NASA Astrophysics Data System (ADS)
You, Jiachun; Li, Guangcai; Liu, Xuewei; Han, Wengong; Zhang, Guangde
2016-03-01
Most depth extrapolation schemes are based on a one-way wave equation, which has limited ability to provide the true amplitudes of reflectors; these are highly important for amplitude-versus-offset inversion. After analysing the weaknesses of current migration methods and explaining why wavefields cannot be extrapolated in the depth direction using the full-wave equation, a full-wave-equation migration method based on a new seismic acquisition system is proposed to provide accurate dynamic information about reflection interfaces for migration. In this new acquisition system, double-sensor data are provided to solve the acoustic wave equation accurately in the depth domain. To test the true-amplitude recovery of full-wave-equation migration, we used a single shot gather and several multiple shot gathers produced by a 2-D numerical modelling technique to demonstrate that our methodology provides better estimates of true amplitudes than the conventional Kirchhoff and reverse time migration algorithms, by comparing the amplitudes of the target reflectors with their theoretical reflection coefficients. Because double sensors are used to implement full-wave-equation migration, it is necessary to study the optimal distance between the double sensors to diminish the migration error for future practical exploration. Based on the application of the full-wave-equation migration method to the first set of actual seismic data collected from our double-sensor acquisition system, our proposed method yields higher imaging quality than conventional methods. Numerical experiments and actual seismic data show that our proposed method builds a new bridge between true-amplitude common-shot migration and full-wave-equation depth extrapolation.
EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE
Jiang Chaowei; Feng Xueshang E-mail: fengx@spaceweather.ac.cn
2013-06-01
Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere numerically. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data, with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and AR 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then input into the extrapolation code. Qualitative comparison of the results with SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. The most important structures of the ARs are reproduced excellently, such as the highly sheared field lines that suspend filaments in AR 11158 and the twisted flux rope corresponding to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but not as well in the weak-field regions because of data noise and numerical errors in the small currents.
Richmond, Orien M W; McEntee, Jay P; Hijmans, Robert J; Brashares, Justin S
2010-01-01
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. This has implications for modeling range shifts of organisms in response
NASA Astrophysics Data System (ADS)
Wohlers, M.; Huguenin, R.; Weinberg, M.; Huffman, R.; Eastes, R.
1989-12-01
This paper describes the methodology and the results obtained at a wavelength of 1304 Å from an analysis of the AFGL Polar Bear experiment. The basic measurement equipment provided data with a spatial resolution of 20 km over a large portion of the Earth. The instrumentation also provided sampled outputs as the footprint scanned along the measurement track. The combination of fine scanning and large-area coverage provided the opportunity for a spatial power spectral analysis that, in turn, provided a means for extrapolation to finer spatial scales.
NASA Technical Reports Server (NTRS)
Cahill, J. F.; Connor, P. C.
1979-01-01
Pressure data from a number of previous wind tunnel and flight investigations of high speed transport type wings were analyzed with the intent of developing a procedure for extrapolating low Reynolds number data to flight conditions. These analyses produced a correlation of the development of trailing-edge separation resulting from increases in Mach number and/or angle of attack and show that scale effects on this correlated separation development and the resulting shock location changes fall into a regular and apparently universal pattern. Further studies appear warranted to refine the correlation through a detailed consideration of boundary layer characteristics, and to evaluate scale effects on supercritical wings.
Sarcomere length dependence of the force-velocity relation in single frog muscle fibers.
Granzier, H. L.; Burns, D. H.; Pollack, G. H.
1989-01-01
The force-velocity relation of single frog fibers was measured at sarcomere lengths of 2.15, 2.65, and 3.15 microns. Sarcomere length was obtained on-line with a system that measures the distance between two markers attached to the surface of the fiber, approximately 800 microns apart. Maximal shortening velocity, determined by extrapolating the Hill equation, was similar at the three sarcomere lengths: 6.5, 6.0, and 5.7 microns/s at sarcomere lengths of 2.15, 2.65, and 3.15 microns, respectively. For loads not close to zero, the shortening velocity decreased with increasing sarcomere length. This was the case whether force was expressed as a percentage of the maximal force at optimal fiber length or as a percentage of the sarcomere-isometric force at the respective sarcomere lengths. The force-velocity relation was discontinuous around zero velocity: load clamps above the level that kept sarcomeres isometric resulted in stretch that was much slower than the shortening produced when the load was decreased below isometric by a similar amount. We fitted the force-velocity relation for slow shortening (less than 600 nm/s) and for slow stretch (less than 200 nm/s) with linear regression lines. At a sarcomere length of 2.15 microns, the slope of these lines was 8.6 times higher for shortening than for stretch. At 2.65 and 3.15 microns, the values were 21.8 and 14.1, respectively. At a sarcomere length of 2.15 microns, the velocity of stretch abruptly increased at loads that were 160-170% of the sarcomere-isometric load, i.e., the muscle yielded. However, at sarcomere lengths of 2.65 and 3.15 microns, yield was absent at such loads. Even the highest loads tested (260%) resulted in only slow stretch. It is concluded that properties of the force generators change with sarcomere length. This is not anticipated by the cross-bridge model of muscle contraction. PMID:2784695
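The Vmax extrapolation via the Hill equation mentioned above can be sketched briefly. The Hill relation (P + a)(V + b) = (P0 + a)b linearizes as (P0 - P)/V = P/b + a/b, so an ordinary line fit recovers the parameters, and the unloaded velocity Vmax = P0/intercept is extrapolated to zero load. All parameter values below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hill force-velocity relation: (P + a)(V + b) = (P0 + a) * b.
# Rearranged, (P0 - P)/V = P/b + a/b is linear in the load P, so the
# parameters follow from a least-squares line fit, and the unloaded
# velocity Vmax = b*P0/a = P0/intercept is an extrapolation to P = 0.
P0, a_true, b_true = 1.0, 0.25, 1.3        # illustrative values only
loads = np.linspace(0.1, 0.9, 9)           # measured loads (fractions of P0)
vel = b_true * (P0 - loads) / (loads + a_true)

y = (P0 - loads) / vel
slope, intercept = np.polyfit(loads, y, 1)
b_fit = 1.0 / slope
a_fit = intercept * b_fit
v_max = P0 / intercept                     # extrapolated unloaded velocity
```

With these assumed parameters the extrapolated Vmax is b·P0/a = 5.2; the point of the sketch is that Vmax is never measured directly but read off the fitted curve at zero load.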
Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko
2015-01-01
Introduction: Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results and Discussion: Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal. PMID:26629702
LENGTH SCALE OF TURBULENCE ABOVE ROUGH SURFACES
Results of analyses of data for two urban sites and a rural site suggest that the mixing length can be represented by the integral length scale of the turbulence derived from vertical velocity spectra. The result is apparently universal and permits the shear production of turbule...
The chain-length dependence test.
Stone, Matthew T; Heemstra, Jennifer M; Moore, Jeffrey S
2006-01-01
Trends obtained from systematic studies based on chain-length variation have provided valuable insight and understanding into the behavior of m-phenylene ethynylene foldamers. The generalization of this experimental approach, the chain-length dependence test, is useful for studying solution conformation, packing in the solid state, specific intrachain interactions, and the contributions of end groups to a particular property. PMID:16411735
Telomere length in early life predicts lifespan
Heidinger, Britt J.; Blount, Jonathan D.; Boner, Winnie; Griffiths, Kate; Metcalfe, Neil B.; Monaghan, Pat
2012-01-01
The attrition of telomeres, the ends of eukaryote chromosomes, is thought to play an important role in cell deterioration with advancing age. The observed variation in telomere length among individuals of the same age is therefore thought to be related to variation in potential longevity. Studies of this relationship are hampered by the time scale over which individuals need to be followed, particularly in long-lived species where lifespan variation is greatest. So far, data are based either on simple comparisons of telomere length among different age classes or on individuals whose telomere length is measured at most twice and whose subsequent survival is monitored for only a short proportion of the typical lifespan. Both approaches are subject to bias. Key studies, in which telomere length is tracked from early in life, and actual lifespan recorded, have been lacking. We measured telomere length in zebra finches (n = 99) from the nestling stage and at various points thereafter, and recorded their natural lifespan (which varied from less than 1 to almost 9 y). We found telomere length at 25 d to be a very strong predictor of realized lifespan (P < 0.001); those individuals living longest had relatively long telomeres at all points at which they were measured. Reproduction increased adult telomere loss, but this effect appeared transient and did not influence survival. Our results provide the strongest evidence available of the relationship between telomere length and lifespan and emphasize the importance of understanding factors that determine early life telomere length. PMID:22232671
Precise Measurement of Effective Focal Length
NASA Technical Reports Server (NTRS)
Wise, T. D.; Young, J. B.
1983-01-01
Computerized instrument measures effective focal lengths to 0.01 percent accuracy. Laser interferometers measure mirror angle and stage coordinate y in instrument for accurate measurement of focal properties of optical systems. Operates under computer control to measure effective focal length, focal surface shape, modulation transfer function, and astigmatism.
Jing, Ju; Liu, Chang; Lee, Jeongwoo; Wang, Shuo; Xu, Yan; Wang, Haimin; Wiegelmann, Thomas
2014-03-20
Dynamic phenomena indicative of slipping reconnection and magnetic implosion were found in a time series of nonlinear force-free field (NLFFF) extrapolations for the active region 11515, which underwent significant changes in the photospheric fields and produced five C-class flares and one M-class flare over five hours on 2012 July 2. NLFFF extrapolation was performed for the uninterrupted 5 hour period from the 12 minute cadence vector magnetograms of the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. According to the time-dependent NLFFF model, there was an elongated, highly sheared magnetic flux rope structure that aligns well with an Hα filament. This long filament splits sideways into two shorter segments, which further separate from each other over time at a speed of 1-4 km s⁻¹, much faster than that of the footpoint motion of the magnetic field. During the separation, the magnetic arcade arching over the initial flux rope significantly decreases in height from ∼4.5 Mm to less than 0.5 Mm. We discuss the reality of this modeled magnetic restructuring by relating it to the observations of the magnetic cancellation, flares, a filament eruption, a penumbra formation, and magnetic flows around the magnetic polarity inversion line.
Jackson, M.L.W.; Finley, R.J.
1992-08-01
An analysis of infield completions and reserve growth potential was made in Tertiary nonassociated gas reservoirs in South Texas. Infield well completions were defined from a concurrent GRI project involving macro-scale prediction of reserve growth. The report validates 78 percent, or 5.6 Tcf, of a high-end infill estimate of 7.2 Tcf for nine stratigraphic units in South Texas. This is a significant resource volume given the historical expectation that natural gas can be efficiently drained with widely spaced wells (1 or 2 per square mile) in conventional reservoirs. Groups of infield completions, or reservoir sections, from Frio, Vicksburg, Wilcox, and Miocene reservoirs were examined using geophysical well logs and production and pressure analyses. Seven reservoir-section types that contributed to the macro reserve growth estimate were evaluated. About 20 percent of the estimate consists of gas volumes extrapolated using consolidated reservoir groups, cycled reservoirs with invalid data. Additional gas volumes in the estimate were extrapolated from reservoir sections representing rate acceleration. The estimate also includes reservoir volumes from the low-permeability Wilcox Lobo trend, where limited drainage radii lead to expected reserve growth. Volumes that represent within-reservoir reserve growth and volumes that represent shallower- or deeper-pool reservoirs determined not to be in pressure communication with preceding completions in a reservoir section formed most of the macro reserve growth estimate.
NASA Astrophysics Data System (ADS)
Liu, Ning; Li, Weiliang; Zhao, Dongxue
2016-03-01
During the reconstruction of a digital hologram, the reconstructed image is usually degraded by speckle noise, which makes it hard to observe the original object pattern. In this paper, a new reconstructed image enhancement method is proposed, which first reduces the speckle noise using an adaptive Gaussian filter, then calculates the high frequencies that belong to the object pattern based on a frequency extrapolation strategy. The proposed frequency extrapolation first calculates the frequency spectrum of the Fourier-filtered image, which is originally reconstructed from the +1 order of the hologram, and then gives the initial parameters for an iterative solution. The analytic iteration is implemented by continuous gradient threshold convergence to estimate the image level and vertical gradient information. The predicted spectrum is acquired through the analytical iteration of the original spectrum and gradient spectrum analysis. Finally, the reconstructed spectrum of the restoration image is acquired from the synthetic correction of the original spectrum using the predicted gradient spectrum. We conducted our experiment very close to the diffraction limit and used low-quality equipment to prove the feasibility of our method. Detailed analysis and figure demonstrations are presented in the paper.
Reynolds, S L; Fussell, R J; MacArthur, R
2005-01-01
Field trials were initiated to investigate whether extrapolation procedures, which were adopted to limit the costs of pesticide registration for minor crops, are valid. Three pairs of crops of similar morphology were grown in parallel at four different geographical locations within the UK: carrots/swedes, cauliflower/calabrese (broccoli), and French beans/edible-podded peas. The crops were treated with both systemic and non-systemic pesticides under maximum registered use conditions, i.e. the maximum permitted application rates and the minimum harvest intervals. Once mature, the crops were harvested and analysed for residues of the applied pesticides. The limits of quantification were in the range 0.005-0.02 mg kg(-1). Analysis of variance and bootstrap estimates showed that, in general, the mean residue concentrations for the individual pesticides were significantly different between crop pairs grown on each site. Similarly, the mean residue concentrations of most of the pesticides in each crop across sites were significantly different. These findings demonstrate that the extrapolations of residue levels for most of the selected pesticide/crop combinations investigated (chlorfenvinphos and iprodione from carrots to swedes; carbendazim, chlorpyrifos, diflubenzuron and dimethoate from cauliflower to calabrese; and malathion, metalaxyl and pirimicarb from French beans to edible-podded peas) appear invalid. PMID:15895609
NASA Astrophysics Data System (ADS)
Benstock, Daniel; Cegla, Frederic
2015-03-01
Ultrasonic thickness C-scans are a key tool in the assessment of the condition of engineering components. C-scans provide information of the wall thickness over the entire inspected area. Full inspection of a component is time consuming, costly and sometimes impossible due to obstacles. Therefore, the condition of the whole structure is often estimated by extrapolation of data from a small sample where C-scan information is available. Extreme value theory (EVT) provides a framework by which one can extrapolate to the size of the worst case defect from a small inspected sample area of a component. The framework and assumptions of EVT are discussed, with experimental and simulated examples. The influence of both the surface roughness and the timing algorithm, used to extract thickness measurements from the collected ultrasonic signals, is also analyzed. It can be shown that for uniformly rough surfaces the C-scan data can lead to conservative estimates of the size of the worst case defect.
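A minimal sketch of the EVT extrapolation idea described above: fit an extreme-value distribution (here a Gumbel, by the method of moments) to block minima of the inspected thickness map, then extrapolate the expected worst-case thickness over a larger, uninspected area. The Gumbel choice, the block size, and the Gaussian surface-noise model are assumptions made for illustration, not the paper's actual framework.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def worst_case_thickness(cscan, block, n_blocks_total):
    """EVT sketch: fit a Gumbel distribution (method of moments) to block
    minima of the inspected thickness map, then extrapolate the expected
    worst-case (thinnest) reading over a larger area made up of
    n_blocks_total blocks of the same size."""
    h, w = cscan.shape
    minima = [cscan[i:i + block, j:j + block].min()
              for i in range(0, h - block + 1, block)
              for j in range(0, w - block + 1, block)]
    x = -np.asarray(minima)                   # minima -> maxima of -thickness
    beta = x.std(ddof=1) * np.sqrt(6.0) / np.pi
    mu = x.mean() - EULER_GAMMA * beta
    # Gumbel return level exceeded on average once among n_blocks_total blocks
    x_return = mu - beta * np.log(-np.log(1.0 - 1.0 / n_blocks_total))
    return -x_return

rng = np.random.default_rng(2)
cscan = 10.0 + 0.2 * rng.standard_normal((128, 128))   # nominal 10 mm wall
est = worst_case_thickness(cscan, block=16, n_blocks_total=4096)
```

The inspected patch contributes 64 block minima; the fit then extrapolates to an area 64 times larger, yielding a worst-case estimate thinner than anything actually observed in the sample.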
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Purpose: Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. Methods: DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm². The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Results: Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. Conclusion: We recommend that conventional motion and eddy-current correction should be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references. PMID:26528541
Tremblay, Gabriel; Livings, Christopher; Crowe, Lydia; Kapetanakis, Venediktos; Briggs, Andrew
2016-01-01
Background: Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. Objectives: This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Methods: Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. Results: A piecewise model, in which the Kaplan–Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. Conclusion: In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and populate future cost-effectiveness analyses. PMID:27418847
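The piecewise model described above (Kaplan-Meier over the trial period, Exponential tail beyond it) can be sketched as follows. The event rate, the follow-up cap, and the tail-rate estimator (a simple censored-exponential MLE) are illustrative assumptions; this is not the SELECT analysis itself.

```python
import numpy as np

def km_survival(times, events, grid):
    """Kaplan-Meier survivor function evaluated on a time grid."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    s, out = 1.0, []
    at_risk, idx = t.size, 0
    for g in grid:
        while idx < t.size and t[idx] <= g:
            if e[idx]:
                s *= 1.0 - 1.0 / at_risk
            at_risk -= 1
            idx += 1
        out.append(s)
    return np.array(out)

def piecewise_extrapolation(times, events, t_cut, grid):
    """KM within follow-up (t <= t_cut); exponential tail beyond, with the
    tail rate taken as the censored-exponential MLE events/person-time."""
    surv = km_survival(times, events, grid)
    lam = events.sum() / times.sum()
    s_cut = km_survival(times, events, np.array([t_cut]))[0]
    tail = grid > t_cut
    surv[tail] = s_cut * np.exp(-lam * (grid[tail] - t_cut))
    return surv

rng = np.random.default_rng(3)
t_event = rng.exponential(25.0, 400)          # assumed mean survival: 25 months
events = (t_event <= 34.0).astype(int)        # 34-month follow-up cap
times = np.minimum(t_event, 34.0)
grid = np.linspace(0.0, 120.0, 121)           # extrapolate out to 10 years
surv = piecewise_extrapolation(times, events, 34.0, grid)
```

The curve follows the empirical KM estimate up to month 34 and decays exponentially thereafter, which is the transparent, criterion-driven structure the abstract advocates.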
Automatic Control Of Length Of Welding Arc
NASA Technical Reports Server (NTRS)
Iceland, William F.
1991-01-01
Nonlinear relationships among current, voltage, and length stored in electronic memory. Conceptual microprocessor-based control subsystem maintains constant length of welding arc in gas/tungsten arc-welding system, even when welding current varied. Uses feedback of current and voltage from welding arc. Directs motor to set position of torch according to previously measured relationships among current, voltage, and length of arc. Signal paths marked "calibration" or "welding" used during those processes only. Other signal paths used during both processes. Control subsystem added to existing manual or automatic welding system equipped with automatic voltage control.
Spectral attenuation length of scintillating fibers
NASA Astrophysics Data System (ADS)
Drexlin, Guido; Eberhard, Veit; Hunkel, Dirk; Zeitnitz, B.
1995-02-01
A double spectrometer allows the precise measurement of the spectral attenuation length of scintillating fibers. Exciting the fibers with an N2 laser at different points and measuring the wavelength-dependent light intensity on both ends of the fiber simultaneously enables a measurement of the attenuation length which is practically independent of systematic uncertainties. The experimental setup can additionally be used for the measurement of the relative light output. Six types of scintillating fibers from four manufacturers (Bicron, Kuraray, Pol.Hi.Tech, and Plastifo) were tested. For different fibers the wavelength-dependent attenuation lengths were measured from 0.3 m up to 20 m with an accuracy as good as 1%.
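The reason the two-ended scheme is nearly free of systematics can be shown in a short sketch (per wavelength bin): with excitation at position x, the two end intensities are I1 = I0·exp(-x/λ) and I2 = I0·exp(-(L-x)/λ), so ln(I1/I2) = (L - 2x)/λ is linear in x with slope -2/λ. The unknown injected intensity I0, and any multiplicative factor common to both readouts, cancels in the ratio. The fiber length and attenuation length used here are invented for illustration.

```python
import numpy as np

# Two-ended ratio method: the slope of ln(I1/I2) versus excitation
# position x gives -2/lam, independent of the injected intensity I0.
L, lam_true = 20.0, 3.5                  # fiber length (m), attenuation length (m)
x = np.linspace(1.0, 19.0, 10)           # excitation points along the fiber
I0 = 7.0                                 # unknown in a real measurement
I1 = I0 * np.exp(-x / lam_true)          # light reaching the near end
I2 = I0 * np.exp(-(L - x) / lam_true)    # light reaching the far end
slope, _ = np.polyfit(x, np.log(I1 / I2), 1)
lam_fit = -2.0 / slope                   # recovered attenuation length
```

A constant gain mismatch between the two end detectors would only shift the intercept of the line, leaving the slope, and hence λ, untouched.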
Regulation of Flagellar Length in Chlamydomonas
Wilson, Nedra F.; Iyer, Janaki Kannan; Buchheim, Julie A.; Meek, William
2008-01-01
Chlamydomonas reinhardtii has two apically localized flagella that are maintained at an equal and appropriate length. Assembly and maintenance of flagella requires a microtubule-based transport system known as intraflagellar transport (IFT). During IFT, proteins destined for incorporation into or removal from a flagellum are carried along doublet microtubules via IFT particles. Regulation of IFT activity therefore is pivotal in determining the length of a flagellum. Reviewed is our current understanding of the role of IFT and signal transduction pathways in the regulation of flagellar length. PMID:18692148
A Note on Solar Cycle Length Estimates
NASA Astrophysics Data System (ADS)
Vaquero, J. M.; García, J. A.; Gallego, M. C.
2006-05-01
Recently, new estimates of the solar cycle length (SCL) have been calculated using the Zurich Sunspot Number (R_Z) and the Regression-Fourier-Calculus (RFC) method, a mathematically rigorous method involving multiple regression, Fourier approximation, and analytical expressions for the first derivative. In this short contribution, we show estimates of the solar cycle length using the RFC method and the Group Sunspot Number (R_G) instead of R_Z. Several authors have shown the advantages of R_G for the analysis of sunspot activity before 1850. The use of R_G resolves some doubtful solar cycle length estimates obtained around 1800 using R_Z.
Meson-Baryon Scattering Lengths from Mixed-Action Lattice QCD
William Detmold, Konstantinos Orginos, Aaron Torok, Silas R. Beane, Thomas C. Luu, Assumpta Parreno, Martin Savage, Andre Walker-Loud
2010-04-01
The $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $K^0\Xi^0$ scattering lengths are calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks is used to perform the chiral extrapolations. We find no convergence for the kaon-baryon processes in the three-flavor chiral expansion. Using the two-flavor chiral expansion, we find $a_{\pi^+\Sigma^+} = -0.197 \pm 0.017$ fm and $a_{\pi^+\Xi^0} = -0.098 \pm 0.017$ fm, where the comprehensive error includes statistical and systematic uncertainties.
Prediction of Length-of-day Variations Based on Gaussian Processes
NASA Astrophysics Data System (ADS)
Lei, Yu; Zhao, Dan-Ning; Gao, Yu-Ping; Cai, Hong-Bing
2015-07-01
Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional linear models for the prediction of the LOD variations, such as the least-squares extrapolation model and the time-series analysis model, cannot satisfy the requirements of real-time, high-precision applications. In this paper, a new machine-learning algorithm, the Gaussian process (GP) model, is employed to forecast the LOD variations. Its prediction accuracy is analyzed and compared with those of the back-propagation neural networks (BPNN), general regression neural networks (GRNN), and the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.
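A bare-bones numpy sketch of Gaussian-process regression (RBF kernel, posterior mean only) applied to a synthetic LOD-like series. The kernel, its hyperparameters, and the toy series are illustrative assumptions, not the paper's tuned model.

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length_scale=10.0, noise=1e-3):
    """Posterior mean of Gaussian-process regression with an RBF kernel;
    a minimal sketch of the idea, not a production implementation."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)
    y_mean = y_train.mean()                       # centre the targets
    K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
    alpha = np.linalg.solve(K, y_train - y_mean)
    return rbf(x_test, x_train) @ alpha + y_mean

# synthetic LOD-like series: constant offset plus a periodic term (illustrative)
days = np.arange(0.0, 200.0)
lod = 1.0 + 0.3 * np.sin(2 * np.pi * days / 60.0)
pred = gp_predict(days[:180], lod[:180], days[180:185])   # 5-day-ahead forecast
err = np.max(np.abs(pred - lod[180:185]))
```

The GP interpolates the training span and extrapolates smoothly a few days ahead; accuracy degrades with horizon as the predictions revert toward the prior mean, which is why operational LOD prediction requires careful kernel and hyperparameter choices.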
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; López-Vicente, Manuel; Palazón, Leticia; Quijano, Laura; Navas, Ana
2015-04-01
The use of fallout radionuclides, particularly 137Cs, in soil erosion investigations has been successfully used over a range of different landscapes. This technique provides mean annual values of spatially distributed soil erosion and deposition rates for the last 40-50 years. However, upscaling the data provided by fallout radionuclides to catchment level is required to understand soil redistribution processes, to support catchment management strategies, and to assess the main soil erosion factors like vegetation cover or topography. In recent years, extrapolating field-scale soil erosion rates estimated from 137Cs data to catchment scale has been addressed using geostatistical interpolation and Geographical Information Systems (GIS). This study aims to assess soil redistribution in an agroforestry catchment characterized by abrupt topography and an intricate mosaic of land uses using 137Cs data and GIS. A new methodological approach using GIS is presented as an alternative to interpolation tools for extrapolating soil redistribution rates in complex landscapes. This approach divides the catchment into Homogeneous Physiographic Units (HPUs) based on unique land use, hydrological network and slope value. A total of 54 HPUs presenting specific land use, Strahler order and slope combinations were identified within the study area (2.5 km²) located in the north of Spain. Using 58 soil erosion and deposition rates estimated from 137Cs data, we were able to characterize the predominant redistribution processes in 16 HPUs, which represent 78% of the study area surface. Erosion processes predominated in 6 HPUs (23%), which correspond to cultivated units in which slope and Strahler order are moderate or high, and to scrubland units with high slope. Deposition was predominant in 3 HPUs (6%), mainly in riparian areas, and to a lesser extent in forest and scrubland units with low slope and low to moderate Strahler order. Redistribution processes, both erosion and
Characteristic length of the knotting probability revisited
NASA Astrophysics Data System (ADS)
Uehara, Erica; Deguchi, Tetsuo
2015-09-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(-N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA.
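The characteristic length N_K can be read off a semi-log fit to the exponential tail of the knotting probability. A sketch with synthetic data follows, in which N_K = 300 is assumed and the prefactor's N-dependence is ignored for simplicity.

```python
import numpy as np

# The large-N knotting probability behaves as P_K(N) ~ C(N) * exp(-N / N_K);
# on a semi-log plot its tail is a straight line whose slope is -1/N_K.
N = np.arange(100, 2001, 100)                # polygon sizes (synthetic)
N_K_true = 300.0                             # assumed characteristic length
prob = 0.8 * np.exp(-N / N_K_true)           # toy tail, constant prefactor
slope, _ = np.polyfit(N, np.log(prob), 1)
N_K_fit = -1.0 / slope                       # recovered characteristic length
```

In real simulation data the prefactor C(N) varies slowly with N, so the fit would be restricted to the large-N tail where the exponential term dominates.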
Impedance of finite length resistive cylinder
NASA Astrophysics Data System (ADS)
Krinsky, S.; Podobedov, B.; Gluckstern, R. L.
2004-11-01
We determine the impedance of a cylindrical metal tube (resistor) of radius a, length g, and conductivity σ attached at each end to perfect conductors of semi-infinite length. Our main interest is in the asymptotic behavior of the impedance at high frequency (k ≫ 1/a). In the equilibrium regime, ka² ≪ g, the impedance per unit length is accurately described by the well-known result for an infinite length tube with conductivity σ. In the transient regime, ka² ≫ g, where the contribution of transition radiation arising from the discontinuity in conductivity is important, we derive an analytic expression for the impedance and compute the short-range wakefield. The analytic results are shown to agree with numerical evaluation of the impedance.
Method of continuously determining crack length
NASA Technical Reports Server (NTRS)
Prabhakaran, Ramamurthy (Inventor); Lopez, Osvaldo F. (Inventor)
1993-01-01
The determination of crack lengths in an accurate and straight forward manner is very useful in studying and preventing load created flaws and cracks. A crack length sensor according to the present invention is fabricated in a rectangular or other geometrical form from a conductive powder impregnated polymer material. The long edges of the sensor are silver painted on both sides and the sensor is then bonded to a test specimen via an adhesive having sufficient thickness to also serve as an insulator. A lead wire is connected to each of the two outwardly facing silver painted edges. The resistance across the sensor changes as a function of the crack length in the specimen and sensor. The novel aspect of the present invention includes the use of relatively uncomplicated sensors and instrumentation to effectively measure the length of generated cracks.
Phase coherence length in silicon photonic platform.
Yang, Yisu; Ma, Yangjin; Guan, Hang; Liu, Yang; Danziger, Steven; Ocheltree, Stewart; Bergman, Keren; Baehr-Jones, Tom; Hochberg, Michael
2015-06-29
We report for the first time two typical phase coherence lengths in highly confined silicon waveguides fabricated in a standard CMOS foundry's multi-project-wafer shuttle run on a 220 nm silicon-on-insulator wafer with 248 nm lithography. By measuring the random phase fluctuations of 800 on-chip silicon Mach-Zehnder interferometers across the wafer, we extracted, with statistical significance, coherence lengths of 4.17 ± 0.42 mm and 1.61 ± 0.12 mm for the single-mode strip waveguide and rib waveguide, respectively. We present a new experimental method to quantify the phase coherence length. The theory model is verified by both our and others' experiments. Coherence length is expected to become one key parameter of the fabrication non-uniformity to guide the design of silicon photonics. PMID:26191700
Carbon Nanotubes: Measuring Dispersion and Length
Fagan, Jeffrey A.; Bauer, Barry J.; Hobbie, Erik K.; Becker, Matthew L.; Hight-Walker, Angela; Simpson, Jeffrey R.; Chun, Jaehun; Obrzut, Jan; Bajpai, Vardhan; Phelan, Fred R.; Simien, Daneesh; Yeon Huh, Ji; Migler, Kalman B.
2011-03-01
Advanced technological uses of single-wall carbon nanotubes (SWCNTs) rely on the production of single length and chirality populations that are currently only available through liquid-phase post-processing. The foundation of all of these processing steps is the attainment of individualized nanotube dispersion in solution; an understanding of the colloidal properties of the dispersed SWCNTs can then be used to design appropriate conditions for separations. In many instances nanotube size, particularly length, is especially important in determining the achievable properties from a given population, and thus there is a critical need for measurement technologies for length distributions as well as for effective separation techniques. In this Progress Report, we document the current state of the art for measuring dispersion and length populations, including separations, and use examples to demonstrate the desirability of addressing these parameters.
Cold bose gases with large scattering lengths.
Cowell, S; Heiselberg, H; Mazets, I E; Morales, J; Pandharipande, V R; Pethick, C J
2002-05-27
We calculate the energy and condensate fraction for a dense system of bosons interacting through an attractive short-range interaction with positive s-wave scattering length a. At high densities n > a^(-3), the energy per particle, chemical potential, and square of the sound speed are independent of the scattering length and proportional to n^(2/3), as in Fermi systems. The condensate is quenched at densities na^3 ≈ 1. PMID:12059466
Electron Effective-Attenuation-Length Database
National Institute of Standards and Technology Data Gateway
SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge) This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-eletron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).
Nucleosome repeat lengths and columnar chromatin structure.
Trifonov, Edward N
2016-06-01
Thorough quantitative study of nucleosome repeat length (NRL) distributions, conducted in 1992 by J. Widom, resulted in a striking observation that the linker lengths between the nucleosomes are quantized. Comparison of the NRL average values with the MNase cut distances predicted from the hypothetical columnar structure of chromatin (this work) shows a close correspondence between the two. This strongly suggests that the NRL distribution, actually, reflects the dominant role of columnar chromatin structure common for all eukaryotes. PMID:26208520
Fragment Length of Circulating Tumor DNA
Underhill, Hunter R.; Kitzman, Jacob O.; Hellwig, Sabine; Welker, Noah C.; Daza, Riza; Gligorich, Keith M.; Rostomily, Robert C.; Shendure, Jay
2016-01-01
Malignant tumors shed DNA into the circulation. The transient half-life of circulating tumor DNA (ctDNA) may afford the opportunity to diagnose, monitor recurrence, and evaluate response to therapy solely through a non-invasive blood draw. However, detecting ctDNA against the normally occurring background of cell-free DNA derived from healthy cells has proven challenging, particularly in non-metastatic solid tumors. In this study, distinct differences in fragment length between ctDNA and normal cell-free DNA are defined. Human ctDNA in rat plasma derived from human glioblastoma multiforme stem-like cells in the rat brain and human hepatocellular carcinoma in the rat flank were found to have a shorter principal fragment length than the background rat cell-free DNA (134–144 bp vs. 167 bp, respectively). Subsequently, a similar shift in the fragment length of ctDNA in humans with melanoma and lung cancer was identified compared to healthy controls. Comparison of fragment lengths from cell-free DNA between a melanoma patient and healthy controls found that the BRAF V600E mutant allele occurred more commonly at shorter fragment lengths than the wild-type allele (132–145 bp vs. 165 bp, respectively). Moreover, size-selecting for shorter cell-free DNA fragment lengths substantially increased the EGFR T790M mutant allele frequency in human lung cancer. These findings provide compelling evidence that experimental or bioinformatic isolation of a specific subset of fragment lengths from cell-free DNA may improve detection of ctDNA. PMID:27428049
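The size-selection step described in this abstract is easy to sketch numerically: filtering cell-free DNA reads down to the shorter fragment lengths typical of ctDNA raises the mutant allele frequency. The fragment counts below are synthetic, loosely patterned on the reported length ranges, not data from the study.

```python
# Sketch: enrichment of mutant allele frequency by size-selecting short
# cell-free DNA fragments. Lengths are synthetic, roughly following the
# abstract's values (mutant ~134-145 bp, wild-type ~167 bp).
def allele_frequency(fragments, max_len=None):
    """fragments: list of (length_bp, is_mutant) tuples."""
    kept = [f for f in fragments if max_len is None or f[0] <= max_len]
    if not kept:
        return 0.0
    return sum(1 for _, is_mutant in kept if is_mutant) / len(kept)

# 5 mutant reads near 140 bp, 5 short wild-type reads, 90 long wild-type reads
fragments = [(140, True)] * 5 + [(145, False)] * 5 + [(167, False)] * 90

raw_af = allele_frequency(fragments)            # 5/100 = 0.05
selected_af = allele_frequency(fragments, 150)  # 5/10  = 0.50
```

Size selection discards the long wild-type background, so the same five mutant reads dominate the retained pool.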
Process for fabricating continuous lengths of superconductor
Kroeger, Donald M.; List, III, Frederick A.
1998-01-01
A process for manufacturing a superconductor. The process is accomplished by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon, overlaying a continuous length of a second substrate ribbon on said first substrate ribbon, and applying sufficient pressure to form a bound layered superconductor precursor between said first substrate ribbon and said second substrate ribbon. The layered superconductor precursor is then heat treated to form a superconductor layer.
Contact transfer length investigation of a 2D nanoparticle network by scanning probe microscopy.
Ruiz-Vargas, Carlos S; Reissner, Patrick A; Wagner, Tino; Wyss, Roman M; Park, Hyung Gyu; Stemmer, Andreas
2015-09-11
Nanoparticle network devices find growing application in sensing and electronics. One recurring challenge in the design and fabrication of this class of devices is ensuring a stable interface via robust yet unobstructive electrodes. A figure of merit which dictates the minimum electrode overlap required for optimal charge injection into the network is the contact transfer length. However, we find that traditional contact characterization using the transmission line model, an indirect method which requires extrapolation, is insufficient for network devices. Instead, we apply Kelvin probe force microscopy to characterize the contact resistance by imaging the surface potential with nanometer resolution. We then use scanning probe lithography to directly investigate the contact transfer length. We have determined the transfer length in graphene contacted devices to be 200-400 nm, thus apt for further device reduction which is often necessary for on-site sensing applications. Simulations from a two-dimensional resistor model support our observations and are expected to be an important tool for further optimizing the design of nanoparticle-based devices. PMID:26291069
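For reference, the "traditional contact characterization using the transmission line model" that the authors find insufficient for network devices works by fitting total resistance linearly against channel length and extrapolating the intercepts. A minimal sketch with invented device parameters:

```python
import numpy as np

# Transmission line model (TLM) sketch: R_total(L) = 2*R_c + R_sheet * L / W.
# The resistance-axis intercept gives twice the contact resistance; the
# transfer length follows from R_c, W, and the sheet resistance.
# All parameter values here are invented for illustration.
W = 10.0          # channel width, um (assumed)
R_sheet = 1e4     # sheet resistance, ohm/sq (assumed)
R_c = 200.0       # contact resistance, ohm (assumed)

L = np.array([1.0, 2.0, 4.0, 8.0])          # channel lengths, um
R_tot = 2 * R_c + R_sheet * L / W           # ideal, noise-free TLM data

slope, intercept = np.polyfit(L, R_tot, 1)  # linear fit, highest power first
Rc_fit = intercept / 2.0                    # recovered contact resistance
L_T = Rc_fit * W / slope                    # transfer length, um
```

With noise-free data the fit recovers R_c exactly; the indirectness the abstract criticizes is that real network data deviate from this straight line.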
King, A.W.
1986-01-01
Ecological models of the seasonal exchange of carbon dioxide between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO2 concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. Two methods of extrapolation were tested. The first approach was a simple extrapolation that assumed relative within-biome homogeneity, and it generated CO2 source functions that differed dramatically from published estimates of CO2 exchange. The differences were so great that the simple extrapolation was rejected as a means of incorporating site-specific models in a global CO2 source function. The second extrapolation explicitly incorporated within-biome variability in the abiotic variables that drive seasonal biosphere-atmosphere CO2 exchange. Simulated site-specific CO2 dynamics were treated as a function of multiple random variables. The predicted regional CO2 exchange is the computed expected value of simulated site-specific exchanges for that region times the area of the region. The test involved the regional extrapolation of tundra and coniferous forest carbon exchange models. Comparisons between the CO2 exchange estimated by extrapolation and published estimates of regional exchange for the latitude belt support the appropriateness of extrapolation by expected value.
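The expected-value extrapolation described above can be sketched as a Monte Carlo estimate: sample the abiotic driver from its within-biome distribution, average the simulated site-level exchange, and multiply by the region's area. The site model, driver distribution, and area below are all invented placeholders.

```python
import random

random.seed(0)

# Extrapolation by expected value: regional exchange is E[site exchange]
# over the distribution of abiotic drivers, times the regional area.
def site_exchange(temperature_c):
    # toy site-specific model: exchange proportional to temperature
    return 0.1 * temperature_c

area_km2 = 1e6  # assumed regional area

# sample the driver (here a single temperature variable) many times
samples = [site_exchange(random.gauss(10.0, 3.0)) for _ in range(10000)]
expected_per_km2 = sum(samples) / len(samples)   # ~0.1 * 10 = 1.0

regional_exchange = expected_per_km2 * area_km2  # ~1e6 in toy units
```

The simple (rejected) extrapolation corresponds to evaluating the site model once at a single "representative" driver value instead of averaging over its distribution.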
Tracheoesophageal fistula length decreases over time.
Jiang, Nancy; Kearney, Ann; Damrose, Edward J
2016-07-01
The objectives of this study were to demonstrate that the length of the tracheoesophageal voice prosthesis changes over time and to determine whether the prosthesis length increased, decreased, or showed no predictable change in size over time. A retrospective chart review was performed at a tertiary care referral center. Patients who underwent either primary or secondary tracheoesophageal puncture between January 2006 and August 2014 were evaluated. Patients were excluded if the tracheoesophageal prosthesis size was not consistently recorded or if they required re-puncturing for an extruded prosthesis. Data analyzed included patient demographics and the length of the tracheoesophageal voice prosthesis at each change. A total of 37 patients were identified. The mean age was 64 years. Seventy-six percent were male; 24% underwent primary tracheoesophageal puncture and 76% underwent secondary tracheoesophageal puncture. The length of the prosthesis decreased over time (median Kendall correlation coefficient = -0.60; mean = -0.44), and this correlation between length and time was significant (p = 0.00085). In conclusion, tracheoesophageal prosthesis length is not constant over time. The tracheoesophageal wall thins, necessitating placement of shorter prostheses over time. Patients with a tracheoesophageal voice prosthesis will require long-term follow-up and repeat sizing of their prosthesis. Successful tracheoesophageal voicing will require periodic reevaluation of these devices, and insurers must, therefore, understand that long-term professional care will be required to manage these patients and their prostheses. PMID:26951219
Dynamical Length-Regulation of Microtubules
NASA Astrophysics Data System (ADS)
Melbinger, Anna; Reese, Louis; Frey, Erwin
2012-02-01
Microtubules (MTs) are vital constituents of the cytoskeleton. These stiff filaments are not only needed for mechanical support; they also fulfill highly dynamic tasks. For instance, MTs build the mitotic spindle, which pulls the doubled set of chromosomes apart during mitosis. Hence, a well-regulated and adjustable MT length is essential for cell division. Extending a recently introduced model [1], we here study length-regulation of MTs. Thereby we account for both spontaneous polymerization and depolymerization triggered by motor proteins. In contrast to the polymerization rate, the effective depolymerization rate depends on the presence of molecular motors at the tip and thereby on crowding effects, which in turn depend on the MT length. We show that these antagonistic effects result in a well-defined MT length. Stochastic simulations and analytic calculations reveal the exact regimes where regulation is feasible. Furthermore, the adjusted MT length and the ensuing strength of fluctuations are analyzed. Taken together, we make quantitative predictions which can be tested experimentally. These results should help to obtain deeper insights into the microscopic mechanisms underlying length-regulation. [1] L. Reese, A. Melbinger, E. Frey, Biophys. J. 101(9), 2190 (2011)
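A minimal deterministic caricature of this antagonism: constant polymerization against a depolymerization rate that grows with length (more motors reach the tip of a longer filament) relaxes to a well-defined steady-state length. All rates below are invented, not values from the model in [1].

```python
# Toy length-regulation sketch: dL/dt = v_poly - v_depoly(L), where the
# motor-driven depolymerization rate increases with length via crowding.
v_poly = 1.0                 # polymerization speed, um/min (assumed)

def v_depoly(L, k=0.1):
    return k * L             # crowding-limited depolymerization (assumed)

L, dt = 0.1, 0.01            # initial length and time step
for _ in range(100_000):
    L += (v_poly - v_depoly(L)) * dt

# fixed point where the rates balance: v_poly = k * L*  ->  L* = 10 um
```

Any perturbation decays back to L*, which is the sense in which the antagonistic rates define a regulated length.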
Scott, B.R.; Muggenburg, B.A.; Welsh, C.A.; Angerstein, D.A.
1994-11-01
The alpha emitter plutonium-238 (238Pu), which is produced in uranium-fueled, light-water reactors, is used as a thermoelectric power source for space applications. Inhalation of a mixed oxide form of Pu is the most likely mode of exposure of workers and the general public. Occupational exposures to 238PuO2 have occurred in association with the fabrication of radioisotope thermoelectric generators. Organs and tissue at risk for deterministic and stochastic effects of 238Pu-alpha irradiation include the lung, liver, skeleton, and lymphatic tissue. Little has been reported about the effects of inhaled 238PuO2 on peripheral blood cell counts in humans. The purpose of this study was to investigate hematological responses after a single inhalation exposure of Beagle dogs to alpha-emitting 238PuO2 particles and to extrapolate results to humans.
NASA Technical Reports Server (NTRS)
Furillo, F. T.; Purushothaman, S.; Tien, J. K.
1977-01-01
The Larson-Miller (L-M) method of extrapolating stress rupture and creep results is based on the contention that the absolute temperature-compensated time function should have a unique value for a given material. This value should depend only on the applied stress level. The L-M method has been found satisfactory in the case of many steels and superalloys. The derivation of the L-M relation is discussed, taking into account a power law creep relationship considered by Dorn (1965) and Barrett et al. (1964), a correlation expression reported by Garofalo et al. (1961), and relations concerning the constant C. Attention is given to a verification of the validity of the considered derivation with the aid of suitable materials.
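The L-M contention can be stated compactly: the parameter LMP = T(C + log10 t_r), with T the absolute temperature, t_r the rupture time, and C a material constant (commonly taken near 20 for many steels), takes a single value at a given stress. One rupture test then fixes the parameter and extrapolates life to other temperatures. The test values below are illustrative, not from the paper.

```python
import math

# Larson-Miller parameter sketch: LMP = T * (C + log10(t_r)),
# T in kelvin, t_r in hours, C a material constant (~20, assumed).
C = 20.0

def lmp(T_kelvin, t_rupture_h):
    return T_kelvin * (C + math.log10(t_rupture_h))

def rupture_time(T_kelvin, lmp_value):
    # invert the parameter for rupture life at another temperature
    return 10 ** (lmp_value / T_kelvin - C)

# A 100 h rupture test at 900 K fixes the parameter at that stress...
P = lmp(900.0, 100.0)            # 900 * (20 + 2) = 19800
# ...which extrapolates the rupture life at 850 K, same stress:
t_850 = rupture_time(850.0, P)   # roughly 2000 h
```

The extrapolation is only as good as the assumed uniqueness of the parameter, which is the contention the abstract says the derivation examines.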
Mayhall, Nicholas J; Raghavachari, Krishnan
2011-05-10
We present a new extrapolated fragment-based approach, termed molecules-in-molecules (MIM), for accurate energy calculations on large molecules. In this method, we use a multilevel partitioning approach coupled with electronic structure studies at multiple levels of theory to provide a hierarchical strategy for systematically improving the computed results. In particular, we use a generalized hybrid energy expression, similar in spirit to that in the popular ONIOM methodology, that can be combined easily with any fragmentation procedure. In the current work, we explore a MIM scheme which first partitions a molecule into nonoverlapping fragments and then recombines the interacting fragments to form overlapping subsystems. By including all interactions with a cheaper level of theory, the MIM approach is shown to significantly reduce the errors arising from a single level fragmentation procedure. We report the implementation of energies and gradients and the initial assessment of the MIM method using both biological and materials systems as test cases. PMID:26610128
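The generalized hybrid energy expression can be sketched in an ONIOM-like form: the whole molecule at a cheap level, plus signed (high - low) corrections over the overlapping subsystems, with overlaps entering at -1 in inclusion-exclusion style. The toy "energies" below are additive stand-ins, so the correction is exact by construction; real electronic-structure energies are not additive, which is where the MIM approximation lives.

```python
# ONIOM-like two-level fragment energy in the spirit of MIM (toy stand-ins).
def e_low(atoms):
    return -1.0 * len(atoms)      # cheap-level energy, additive by design

def e_high(atoms):
    return -1.1 * len(atoms)     # accurate-level energy, additive by design

def mim_energy(molecule, signed_subsystems):
    """signed_subsystems: list of (sign, atom_list); overlaps get sign -1."""
    correction = sum(sign * (e_high(s) - e_low(s))
                     for sign, s in signed_subsystems)
    return e_low(molecule) + correction

molecule = list(range(10))
# two overlapping subsystems plus their overlap with negative sign
signed = [(+1, molecule[0:6]), (+1, molecule[4:10]), (-1, molecule[4:6])]
E = mim_energy(molecule, signed)
# for these additive toy energies, E equals e_high(molecule) exactly
```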
Schwahofer, Andrea; Bär, Esther; Kuchenbecker, Stefan; Grossmann, J Günter; Kachelrieß, Marc; Sterzing, Florian
2015-12-01
Metal artifacts in computed tomography (CT) images are one of the main problems in radiation oncology, as they introduce uncertainties into target and organ-at-risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual energy CT (DECT) dataset. In a phantom study, the CT artifacts caused by metals with different densities were investigated: aluminum (ρ_Al = 2.7 g/cm³), titanium (ρ_Ti = 4.5 g/cm³), steel (ρ_steel = 7.9 g/cm³), and tungsten (ρ_W = 19.3 g/cm³). Data were collected using a clinical dual source DECT scanner (Siemens Sector Healthcare, Forchheim, Germany) with tube voltages of 100 kV and 140 kV(Sn). For each tube voltage the data set in a given volume was reconstructed. Based on these two data sets, a voxel-by-voxel linear combination was performed to obtain the monoenergetic data sets. The results were evaluated regarding the optical properties of the images as well as the CT values (HU) and the dosimetric consequences in computed treatment plans. A data set without metal substitute served as the reference. Also, a head and neck patient with dental fillings (amalgam, ρ = 10 g/cm³) was scanned with a single energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the density, the more distinctive the artifacts. For metals with higher densities such as steel or tungsten, no artifact reduction was achieved. Likewise, no improvement in the CT values was detected with the monoenergetic extrapolation. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 keV. However, the dose uncertainty remains of the order of 10
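The voxel-by-voxel linear combination at the heart of the monoenergetic extrapolation can be sketched directly. The mapping from blending weight to an effective keV is scanner- and calibration-specific, so the weight below is an assumed free parameter, not a calibrated 105 keV value.

```python
import numpy as np

# Pseudo-monoenergetic image as a voxel-wise weighted blend of the
# low-kV and high-kV reconstructions (weight w is an assumption).
def monoenergetic(img_low_kv, img_high_kv, w):
    return w * img_low_kv + (1.0 - w) * img_high_kv

img_100 = np.array([[60.0, 300.0], [40.0, 1200.0]])   # HU, toy 2x2 slice
img_140 = np.array([[55.0, 250.0], [38.0,  900.0]])   # HU, toy 2x2 slice

img_mono = monoenergetic(img_100, img_140, w=0.3)     # assumed weight
# first voxel: 0.3 * 60 + 0.7 * 55 = 56.5 HU
```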
NASA Astrophysics Data System (ADS)
Rong, Lu; Latychevskaia, Tatiana; Wang, Dayong; Zhou, Xun; Huang, Haochong; Li, Zeyu; Wang, Yunxin
2014-07-01
We report here on terahertz (THz) digital holography on a biological specimen. A continuous-wave (CW) THz in-line holographic setup was built based on a 2.52 THz CO2-pumped THz laser and a pyroelectric array detector. We introduced a novel statistical method of obtaining true intensity values for the pyroelectric array detector's pixels. Absorption and phase-shifting images of a dragonfly's hind wing were reconstructed simultaneously from a single in-line hologram. Furthermore, we applied phase retrieval routines to eliminate the twin image and enhanced the resolution of the reconstructions by hologram extrapolation beyond the detector area. The finest observed features are cross veins 35 µm in width.
Crofton, K M; Zhao, X
1997-07-01
Inhalation exposure to high concentrations of 1,1,2-trichloroethylene (TCE) has been shown to damage hearing in the mid-frequency range in the rat. The present study directly evaluated the adequacy of high-concentration, short-term exposures to TCE for predicting the neurotoxicity produced by longer duration exposures. Adult male Long-Evans rats (n = 10-12 per group) were exposed to TCE via inhalation (whole body) in 1-m³ stainless steel flow-through chambers for 6 hr/day, 5 days/week. The following exposures were used: 1 day (4000-8000 ppm), 1 week (1000-4000 ppm), 4 weeks (800-3200 ppm), and 13 weeks (800-3200 ppm). Air-only exposed animals served as controls. Auditory thresholds were determined for a 16-kHz tone 3-5 weeks after exposure using reflex modification audiometry. Results replicated previous findings of a hearing loss at 16 kHz for all exposure durations. The dB15 concentrations (the concentration that increases thresholds by 15 dB) for 16-kHz thresholds were 6218, 2992, 2592, and 2160 ppm for the 1-day, 1-week, 4-week, and 13-week exposures, respectively. These data demonstrate that the ototoxicity of TCE was less than that predicted by a strict concentration x time relationship. These data also demonstrate that simple models of extrapolation (i.e., C x t = k, Haber's Law) overestimate the potency of TCE when extrapolating from short-duration to longer-duration exposures. Furthermore, these data suggest that, relative to ambient or occupational exposures, the ototoxicity of TCE in the rat is a high-concentration effect. PMID:9268609
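The failure of Haber's law reported here can be reproduced from the abstract's own numbers: fixing k = C x t from the 1-day exposure predicts a 13-week effect threshold far below the observed 2160 ppm. Cumulative exposure hours assume the stated 6 hr/day, 5 days/week schedule.

```python
# Check of Haber's law (C x t = k) against the abstract's dB15 values.
hours = {"1 day": 6, "1 week": 30, "4 weeks": 120, "13 weeks": 390}
db15_ppm = {"1 day": 6218, "1 week": 2992, "4 weeks": 2592, "13 weeks": 2160}

k = db15_ppm["1 day"] * hours["1 day"]      # 37308 ppm*h from the 1-day test
predicted_13wk = k / hours["13 weeks"]      # ~96 ppm under C x t = k
observed_13wk = db15_ppm["13 weeks"]        # 2160 ppm actually observed

# Haber's law predicts an effect threshold ~20x lower than observed,
# i.e., it overestimates TCE's potency for long-duration exposures.
```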
Manwaring, John; Rothe, Helga; Obringer, Cindy; Foltz, David J; Baker, Timothy R; Troutman, John A; Hewitt, Nicola J; Goebel, Carsten
2015-09-01
Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as they offer an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound were assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin, the following toxicokinetically relevant parameters were applied: a) Michaelis-Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness; and f) the skin permeability coefficient. Next, in vitro hepatocyte Km and Vmax values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. PMID:26028483
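The clearance-scaling chain (in vitro Vmax/Km, scaled by cell abundance and liver mass, then combined with blood flow and unbound fraction) can be sketched with the standard well-stirred liver model. Every number below is an invented placeholder, not a value from the study.

```python
# In vitro -> in vivo clearance scaling, well-stirred liver model sketch.
vmax_over_km = 20e-6       # L/min per million hepatocytes (toy intrinsic CL)
hepatocellularity = 120e6  # hepatocytes per g liver (assumed)
liver_mass_g = 1800.0      # assumed liver mass
q_h = 1.5                  # hepatic blood flow, L/min (assumed)
fu = 0.3                   # fraction unbound in blood (assumed)

# scaled intrinsic clearance: per-cell rate * cells per g * liver mass
cl_int = vmax_over_km * (hepatocellularity / 1e6) * liver_mass_g   # L/min

# well-stirred model: hepatic clearance is flow-limited at high CL_int
cl_h = q_h * fu * cl_int / (q_h + fu * cl_int)                     # L/min

internal_dose_mg = 0.5                  # assumed systemically available dose
auc_mg_min_per_l = internal_dose_mg / cl_h
```

Hepatic clearance is bounded above by liver blood flow, which the assertion below checks.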
Stadnicka-Michalak, Julita; Tanneberger, Katrin; Schirmer, Kristin; Ashauer, Roman
2014-01-01
Effect concentrations in the toxicity assessment of chemicals with fish and fish cells are generally based on external exposure concentrations. External concentrations as dose metrics may, however, hamper interpretation and extrapolation of toxicological effects, because it is the internal concentration that gives rise to the biologically effective dose. Thus, we need to understand the relationship between the external and internal concentrations of chemicals. The objectives of this study were to: (i) elucidate the time-course of the concentration of chemicals with a wide range of physicochemical properties in the compartments of an in vitro test system, (ii) derive a predictive model for toxicokinetics in the in vitro test system, (iii) test the hypothesis that internal effect concentrations in fish (in vivo) and fish cell lines (in vitro) correlate, and (iv) develop a quantitative in vitro to in vivo toxicity extrapolation method for fish acute toxicity. To achieve these goals, time-dependent amounts of organic chemicals were measured in medium, cells (RTgill-W1) and the plastic of exposure wells. Then, the relation between uptake and elimination rate constants and log KOW was investigated for cells in order to develop a toxicokinetic model. This model was used to predict internal effect concentrations in cells, which were compared with internal effect concentrations in fish gills predicted by a Physiologically Based Toxicokinetic model. Our model could predict concentrations of non-volatile organic chemicals with log KOW between 0.5 and 7 in cells. The correlation of the log ratio of internal effect concentrations in fish gills and the fish gill cell line with the log KOW was significant (r>0.85, p = 0.0008, F-test). This ratio can be predicted from the log KOW of the chemical (77% of variance explained), comprising a promising model to predict lethal effects on fish based on in vitro data. PMID:24647349
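The cell-level toxicokinetics can be sketched as a one-compartment system with first-order uptake from the medium and first-order elimination. In the study the rate constants were related to log KOW; here they are simply assumed toy values.

```python
import math

# One-compartment toxicokinetic sketch for cells in an exposure well:
# dC_cell/dt = ku * C_medium - ke * C_cell, with C_medium held constant
# (large medium volume assumed). Closed-form solution below.
def cell_concentration(c_medium, ku, ke, t):
    return (ku / ke) * c_medium * (1.0 - math.exp(-ke * t))

c_medium = 10.0      # ug/L in the medium (assumed constant)
ku, ke = 50.0, 0.5   # uptake and elimination rate constants, 1/h (assumed)

c_1h = cell_concentration(c_medium, ku, ke, 1.0)
c_ss = cell_concentration(c_medium, ku, ke, 1e6)  # -> (ku/ke) * C_medium
```

The steady-state bioconcentration factor is ku/ke, which is the quantity the study relates to log KOW.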
Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.
2014-01-01
Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal to, and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across Hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the
Gross anatomical study of splenic length.
Chowdhury, Ashraful Islam; Khalil, Mansur; Begum, Jahan Ara; Rahman, M Habibur; Mannan, Sabina; Sultana, Seheli Zannat; Rahman, M Mahbubur; Ahamed, M Sshibbir; Sultana, Zinat Rezina
2009-01-01
The aim of the present study was to establish the standard length of the normal spleen in Bangladeshi people. One hundred and twenty human cadavers, of which eighty-seven were male and thirty-three female, were dissected to remove the spleen with associated structures in the morgue of the Forensic Medicine Department of Mymensingh Medical College. Collected specimens were tagged with specific identification numbers and divided into five groups according to age and height of the individual. Gross and fine dissections were carried out after fixing the specimens in 10% formol saline solution. The length of the spleen was measured by measuring tape and expressed in cm, and the findings of the present study were compared with the findings of national and international studies. This was a cross-sectional descriptive study carried out in the Department of Anatomy of Mymensingh Medical College, Mymensingh. The mean length of the spleen was maximum at 11.20 cm in males in group C (31-45 years) and at 11.80 cm in females in group B (16-30 years), and minimum at 10.06 cm in males and 9.53 cm in females in group A (up to 15 years). Differences between groups A and B, A and C, and A and D were statistically significant. There were no significant differences between the other groups. According to the height of the individual, the mean length of the spleen was maximum at 11.42 cm in the 165.01-180 cm height group and minimum at 10.30 cm in the 0-120 cm height group, which indicates that the length of the spleen increases with the height of the individual. It was observed that the length of the spleen depends on the age, sex, and body height of the individual. PMID:19377429
Insomnia and Telomere Length in Older Adults
Carroll, Judith E.; Esquivel, Stephanie; Goldberg, Alyssa; Seeman, Teresa E.; Effros, Rita B.; Dock, Jeffrey; Olmstead, Richard; Breen, Elizabeth C.; Irwin, Michael R.
2016-01-01
Study Objectives: Insomnia, particularly in later life, may raise the risk for chronic diseases of aging and mortality through its effect on cellular aging. The current study examines the effects of insomnia on telomere length, a measure of cellular aging, and tests whether insomnia interacts with chronological age to increase cellular aging. Methods: A total of 126 males and females (60–88 y) were assessed for insomnia using the Diagnostic and Statistical Manual IV criterion for primary insomnia and the International Classification of Sleep Disorders, Second Edition for general insomnia (45 insomnia cases; 81 controls). Telomere length in peripheral blood mononuclear cells (PBMC) was determined using real-time quantitative polymerase chain reaction (qPCR) methodology. Results: In the analysis of covariance model adjusting for body mass index and sex, age (60–69 y versus 70–88 y) and insomnia diagnosis interacted to predict shorter PBMC telomere length (P = 0.04). In the oldest age group (70–88 y), PBMC telomere length was significantly shorter in those with insomnia, mean (standard deviation) M(SD) = 0.59(0.2) compared to controls with no insomnia M(SD) = 0.78(0.4), P = 0.04. In the adults aged 60–69 y, PBMC telomere length was not different between insomnia cases and controls, P = 0.44. Conclusions: Insomnia is associated with shorter PBMC telomere length in adults aged 70–88 y, but not in those younger than 70 y, suggesting that clinically severe sleep disturbances may increase cellular aging, especially in the later years of life. These findings highlight insomnia as a vulnerability factor in later life, with implications for risk for diseases of aging. Citation: Carroll JE, Esquivel S, Goldberg A, Seeman TE, Effros RB, Dock J, Olmstead R, Breen EC, Irwin MR. Insomnia and telomere length in older adults. SLEEP 2016;39(3):559–564. PMID:26715231
Burkhard, Lawrence P; Cook, Philip M; Lukasewycz, Marta T
2006-07-01
An approach is presented for extrapolating field-measured biota-sediment accumulation factors (BSAFs) and bioaccumulation factors (BAFs) across species, time, and/or ecosystems. This approach, called the hybrid bioaccumulation modeling approach, uses mechanistic bioaccumulation models to extrapolate field-measured bioaccumulation data (i.e., BSAFs and BAFs) to new sets of ecological conditions. The hybrid approach predicts relative differences in bioaccumulation using food web models with two sets of ecological conditions and parameters: one set for the ecosystem where the BSAFs and/or BAFs were measured, and the other set for the ecological conditions and parameters for which the extrapolated BSAFs and/or BAFs are desired. The field-measured BSAF (or BAF) is extrapolated by adjusting the measured value by the predicted relative difference, which is derived from two separate solutions of the food web model. Extrapolations of polychlorinated biphenyl BSAFs and BAFs from southern Lake Michigan lake trout (Salvelinus namaycush) to Green Bay, Lake Michigan (Green Bay, WI, USA) walleye (Stizostedion vitreum) and brown trout (Salmo trutta), as well as to Hudson River largemouth bass (Micropterus salmoides) and yellow perch (Perca flavescens), resulted in generally better agreement between measured and predicted BSAFs and BAFs with the hybrid approach. PMID:16833159
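The adjustment step of the hybrid approach fits in a few lines: the measured BSAF is scaled by the ratio of model predictions under the target versus the reference conditions. The "food web model" below is a deliberately trivial stand-in for the mechanistic models the authors use; its inputs and values are invented.

```python
# Hybrid extrapolation sketch: adjust a field-measured BSAF by the relative
# difference predicted by a (stand-in) food web model run twice.
def food_web_model(lipid_fraction, organic_carbon):
    # toy proxy: predicted BSAF scales with the lipid / organic-carbon ratio
    return lipid_fraction / organic_carbon

bsaf_measured = 2.0                     # measured in the reference ecosystem
pred_ref = food_web_model(0.05, 0.02)   # conditions where it was measured
pred_new = food_web_model(0.08, 0.04)   # conditions of the target ecosystem

bsaf_extrapolated = bsaf_measured * (pred_new / pred_ref)  # 2.0 * 0.8 = 1.6
```

Because only the ratio of the two model solutions enters, systematic model biases that affect both solutions equally tend to cancel, which is the appeal of the hybrid approach.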
Correlation between the forearm plus little finger length and the femoral length.
Naik, Monappa A; Sujir, Premjit; Tripathy, Sujit Kumar; Goyal, Tarun; Rao, Sharath K
2013-08-01
PURPOSE. To assess the correlation between the forearm plus little finger length and the femoral length in 100 volunteers. METHODS. The forearm plus little finger length and the ipsilateral femoral length of 68 male and 32 female volunteers aged 19 to 55 (mean, 35.8) years were measured using a measuring tape. The forearm plus little finger length was measured from the tip of the olecranon to the tip of the little finger, whereas the femoral length was measured from the tip of the greater trochanter to the level of the proximal pole of the patella over the outer aspect of the thigh. Two observers made the measurements on 2 separate occasions. Intra- and inter-observer variations were calculated. A value of 0.75 or greater indicated excellent agreement. RESULTS. The mean forearm plus little finger length and femoral length were 39.87 (SD, 2.73) and 39.85 (SD, 2.44) cm, respectively. The mean difference between these 2 measurements was 0.028 (95% CI, -0.109 to 0.165) cm. The correlation between these 2 measurements was 0.861 (p<0.001). Patient age, sex, and body mass index did not affect this correlation. The intra- and inter-observer reliability was excellent. CONCLUSION. The forearm plus little finger length correlated with the femoral length. This method is simple, radiation-free, and can be applied in day-to-day practice. PMID:24014776
Delayed Feedback Model of Axonal Length Sensing
Karamched, Bhargav R.; Bressloff, Paul C.
2015-01-01
A fundamental question in cell biology is how the sizes of cells and organelles are regulated at various stages of development. Size homeostasis is particularly challenging for neurons, whose axons can extend from hundreds of microns to meters (in humans). Recently, a molecular-motor-based mechanism for axonal length sensing has been proposed, in which axonal length is encoded by the frequency of an oscillating retrograde signal. In this article, we develop a mathematical model of this length-sensing mechanism in which advection-diffusion equations for bidirectional motor transport are coupled to a chemical signaling network. We show that chemical oscillations emerge due to delayed negative feedback via a Hopf bifurcation, resulting in a frequency that is a monotonically decreasing function of axonal length. Knockdown of either kinesin or dynein causes an increase in the oscillation frequency, suggesting that the length-sensing mechanism would produce longer axons, which is consistent with experimental findings. One major prediction of the model is that fluctuations in the transport of molecular motors lead to a reduction in the reliability of the frequency-encoding mechanism for long axons. PMID:25954897
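The frequency-vs-length relation can be caricatured with a linear delayed negative feedback loop du/dt = -k u(t - tau): at its Hopf bifurcation the angular frequency is pi/(2 tau), and if the delay is set by the motor round-trip time (tau ~ 2L/v), the encoded frequency falls monotonically with axonal length. The transport speed below is illustrative, and this scalar caricature stands in for the paper's advection-diffusion model.

```python
import math

# Frequency-encoded length sketch: delay = motor round trip, and the
# delayed-feedback oscillation frequency at the Hopf point is pi/(2*tau).
motor_speed_um_s = 1.0   # effective transport speed, um/s (assumed)

def oscillation_freq_hz(length_um):
    tau = 2.0 * length_um / motor_speed_um_s    # anterograde + retrograde legs
    omega = math.pi / (2.0 * tau)               # rad/s at the Hopf bifurcation
    return omega / (2.0 * math.pi)              # convert to Hz

freqs = [oscillation_freq_hz(L) for L in (100.0, 500.0, 1000.0)]
# frequency strictly decreases with length, as the model predicts
```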
Chromosome-length polymorphism in fungi.
Zolan, M E
1995-01-01
The examination of fungal chromosomes by pulsed-field gel electrophoresis has revealed that length polymorphism is widespread in both sexual and asexual species. This review summarizes characteristics of fungal chromosome-length polymorphism and possible mitotic and meiotic mechanisms of chromosome length change. Most fungal chromosome-length polymorphisms are currently uncharacterized with respect to content and origin. However, it is clear that long tandem repeats, such as tracts of rRNA genes, are frequently variable in length and that other chromosomal rearrangements are suppressed during normal mitotic growth. Dispensable chromosomes and dispensable chromosome regions, which have been well documented for some fungi, also contribute to the variability of the fungal karyotype. For sexual species, meiotic recombination increases the overall karyotypic variability in a population while suppressing genetic translocations. The range of karyotypes observed in fungi indicates that many karyotypic changes may be genetically neutral, at least under some conditions. In addition, new linkage combinations of genes may also be advantageous in allowing adaptation of fungi to new environments. PMID:8531892
Tactile length contraction as Bayesian inference.
Tong, Jonathan; Ngo, Vy; Goldreich, Daniel
2016-08-01
To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process. PMID:27121574
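The low-velocity-prior account can be illustrated with a toy Gaussian model (an assumption-laden sketch, not the authors' observer model; sigma and sigma_v are invented values):

```python
# Toy Gaussian version of a low-velocity-prior Bayesian observer. Each tap
# is localized with noise sd sigma (mm), so a measured length carries noise
# variance 2*sigma**2; a zero-mean speed prior v ~ N(0, sigma_v**2) implies
# a length prior l ~ N(0, (sigma_v*t)**2) for taps separated by time t (s).
# The posterior mean shrinks the measurement toward zero: more for shorter
# t (length contraction) and for noisier localization (weaker taps).

def perceived_length(l_measured, t, sigma=5.0, sigma_v=100.0):
    """Posterior-mean length estimate (same units as l_measured)."""
    prior_var = (sigma_v * t) ** 2
    noise_var = 2.0 * sigma ** 2
    return l_measured * prior_var / (prior_var + noise_var)
```

Both qualitative predictions tested in the study follow from the shrinkage factor: contraction intensifies as t decreases and as localization noise grows.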
Influence of mandibular length on mouth opening.
Dijkstra, P U; Hof, A L; Stegenga, B; de Bont, L G
1999-02-01
Theoretically, mouth opening not only reflects the mobility of the temporomandibular joints (TMJs) but also the mandibular length. Clinically, the exact relationship between mouth opening, mandibular length, and mobility of TMJs is unclear. To study this relationship 91 healthy subjects, 59 women and 32 men (mean age 27.2 years, s.d. 7.5 years, range 13-56 years) were recruited from the patients of the Department of Oral and Maxillofacial Surgery of University Hospital, Groningen. Mouth opening, mobility of TMJs and mandibular length were measured. The mobility of TMJs was measured as the angular displacement of the mandible relative to the cranium, the angle of mouth opening (AMO). Mouth opening (MO) correlated significantly with mandibular length (ML) (r = 0.36) and AMO (r = 0.66). The regression equation MO = C1 x ML x AMO + C2, in which C1 = 0.53 and C2 = 25.2 mm, correlated well (r = 0.79) with measured mouth opening. It is concluded that mouth opening reflects both mobility of the TMJs and mandibular length. PMID:10080308
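The reported regression can be turned into a small predictive helper (a sketch; converting AMO from degrees to radians, so that ML x AMO behaves like an arc length, is an assumption made here, since the unit convention is not restated above):

```python
# Hedged helper for the reported regression MO = C1 * ML * AMO + C2
# (C1 = 0.53, C2 = 25.2 mm, r = 0.79). Treating AMO in radians is an
# assumption made here for illustration.
import math

C1 = 0.53
C2_MM = 25.2

def predicted_mouth_opening_mm(ml_mm, amo_deg):
    """Mouth opening (mm) from mandibular length (mm) and opening angle (deg)."""
    return C1 * ml_mm * math.radians(amo_deg) + C2_MM
```

For a 100 mm mandible opening through 30 degrees this predicts roughly 53 mm, a plausible mouth opening, which is why radians are assumed above.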
The evolution mechanism of intron length.
Zhang, Qiang; Li, Hong; Zhao, Xiao-Qing; Xue, Hui; Zheng, Yan; Meng, Hu; Jia, Yun; Bo, Su-Ling
2016-08-01
Within two years of their discovery in 1977, introns were found to have a positive effect on gene expression. Our results show that introns can influence gene expression and regulation through interaction with their corresponding mRNA sequences. Using local alignment based on the Smith-Waterman method, we identified the optimal matched segments between intron sequences and mRNA sequences. By studying the distribution of these optimal matching regions on the intron sequences of ribosomal protein genes from about 27 species, we find that intron length evolution proceeds from the 5' end to the 3' end, growing one structural unit at a time, which suggests a possible mechanism for intron length evolution. The structural units of introns are conserved at about 60 bp in length, whereas the length of the linker sequences between structural units varies considerably. Interestingly, the distributions of the length and matching rate of the optimal matched segments are consistent with the sequence features of miRNA and siRNA. These results indicate that the interaction between intron sequences and mRNA sequences is a kind of functional RNA-RNA interaction, and that the two kinds of sequences have co-evolved and interact to perform their functions. PMID:27449197
Functional scoliosis caused by leg length discrepancy
Daniszewska, Barbara; Zolynski, Krystian
2010-01-01
Introduction Leg length discrepancy (LLD) causes pelvic obliquity in the frontal plane and lumbar scoliosis with convexity towards the shorter extremity. Leg length discrepancy is observed in 3-15% of the population. Unequalized lower limb length discrepancy leads to posture deformation, gait asymmetry, low back pain and discopathy. Material and methods In the years 1998-2006, 369 children, aged 5 to 17 years (209 girls, 160 boys) with LLD-related functional scoliosis were treated. An external or internal shoe lift was applied. Results Among the 369 children, a discrepancy of 0.5 cm was observed in 27, 1 cm in 329, 1.5 cm in 9 and 2 cm in 4 children. At the first follow-up examination, within 2 weeks, adjustment of the spine to the new static conditions and correction of the curve were noted in 316 of the examined children (83.7%). In 53 children (14.7%) the correction was observed later and was accompanied by slight low back pain. The time needed for real equalization of the limbs ranged from 3 to 24 months, with a mean of 11.3 months. Conclusions Leg length discrepancy equalization results in elimination of scoliosis. Leg length discrepancy < 2 cm is a static disorder; that is why measurements should be performed in a standing position using blocks of adequate thickness, and the position of the posterior superior iliac spine should be estimated. PMID:22371777
Altered Maxwell equations in the length gauge
NASA Astrophysics Data System (ADS)
Reiss, H. R.
2013-09-01
The length gauge uses a scalar potential to describe a laser field, thus treating it as a longitudinal field rather than as a transverse field. This distinction is manifested by the fact that the Maxwell equations that relate to the length gauge are not the same as those for transverse fields. In particular, a source term is necessary in the length-gauge Maxwell equations, whereas the Coulomb-gauge description of plane waves possesses the basic property of transverse fields that they propagate with no source terms at all. This difference is shown to have important consequences in some previously unremarked circumstances, and it explains why the Göppert-Mayer gauge transformation does not provide the security that might be expected of full gauge equivalence.
Resonance effects in neutron scattering lengths
Lynn, J.E.
1989-06-01
The nature of neutron scattering lengths is described and the nuclear effects giving rise to their variation are discussed. Some examples of the shortcomings of the available nuclear data base, particularly for heavy nuclei, are given. Methods are presented for improving this data base, in particular for obtaining the energy variation of the complex coherent scattering length from long to sub-Ångström wavelengths from the available sources of slow neutron cross section data. Examples of this information are given for several of the rare earth nuclides. Some examples of the effect of resonances in neutron reflection and diffraction are discussed. This report documents a seminar given at Argonne National Laboratory in March 1989. 18 refs., 18 figs.
Correlation length for interplanetary magnetic field fluctuations.
NASA Technical Reports Server (NTRS)
Fisk, L. A.; Sari, J. W.
1973-01-01
It is argued that it is necessary to consider two correlation lengths for interplanetary magnetic field fluctuations. For particles with gyroradii large enough to encounter and be scattered by large-scale tangential discontinuities in the field (particles with energies of above several GeV/nucleon) the appropriate correlation length is simply the mean spatial separation between the discontinuities. Particles with gyroradii much less than this mean separation appear to be unaffected by the discontinuities and respond only to smaller-scale field fluctuations. With this system of two correlation lengths the cosmic ray diffusion tensor may be altered from what was predicted by, for example, Jokipii and Coleman, and the objections raised recently by Klimas and Sandri to the diffusion analysis of Jokipii may apply only at relatively low energies (about 50 MeV/nucleon).
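The two-regime picture can be illustrated with an order-of-magnitude gyroradius comparison (the field strength, mean discontinuity separation, and proton energies below are assumed round numbers, not values from the paper):

```python
# Order-of-magnitude illustration of the two-correlation-length picture.
# The field strength (5 nT) and mean discontinuity separation (0.01 AU)
# are assumed round numbers for illustration.
import math

Q = 1.602176634e-19       # proton charge, C
M_P = 1.67262192369e-27   # proton mass, kg
C = 2.99792458e8          # speed of light, m/s
J_PER_MEV = 1.602176634e-13
AU_M = 1.496e11

def proton_gyroradius_m(kinetic_energy_mev, b_tesla=5e-9):
    """Relativistic gyroradius r = p/(qB) at a 90-degree pitch angle."""
    e_rest = M_P * C ** 2
    e_tot = e_rest + kinetic_energy_mev * J_PER_MEV
    p = math.sqrt(e_tot ** 2 - e_rest ** 2) / C  # momentum, kg m/s
    return p / (Q * b_tesla)

mean_separation_m = 0.01 * AU_M
# ~50 MeV protons: gyroradius well below the separation (small-scale regime);
# multi-GeV protons: gyroradius comparable to or above it (discontinuity regime).
```

With these assumed numbers, a 50 MeV proton's gyroradius falls well below the discontinuity spacing while a multi-GeV proton's does not, matching the paper's division of energy regimes.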
Force and Length in the Mitotic Spindle
Dumont, Sophie; Mitchison, Timothy J.
2009-01-01
The mitotic spindle assembles to a steady-state length at metaphase through the integrated action of molecular mechanisms that generate and respond to mechanical forces. While molecular mechanisms that produce force have been described, our understanding of how they integrate with each other, and with the assembly-disassembly mechanisms that regulate length, is poor. We review current understanding of the basic architecture and dynamics of the metaphase spindle, and some of the elementary force producing mechanisms. We then discuss models for force integration, and spindle length determination. We also emphasize key missing data that notably includes absolute values of forces, and how they vary as a function of position, within the spindle. PMID:19906577
Length, protein protein interactions, and complexity
NASA Astrophysics Data System (ADS)
Tan, Taison; Frenkel, Daan; Gupta, Vishal; Deem, Michael W.
2005-05-01
The evolutionary reason for the increase in gene length from archaea to prokaryotes to eukaryotes observed in large-scale genome sequencing efforts has been unclear. We propose here that the increasing complexity of protein-protein interactions has driven the selection of longer proteins, as they are more able to distinguish among a larger number of distinct interactions due to their greater average surface area. Annotated protein sequences available from the SWISS-PROT database were analyzed for 13 eukaryotes, eight bacteria, and two archaea species. The number of subcellular locations with which each protein is associated is used as a measure of the number of interactions in which a protein participates. Two databases of yeast protein-protein interactions were used as another measure of the number of interactions in which each S. cerevisiae protein participates. Protein length is shown to correlate with both the number of subcellular locations with which a protein is associated and the number of interactions as measured by yeast two-hybrid experiments. Protein length is also shown to correlate with the probability that the protein is encoded by an essential gene. Interestingly, average protein length and number of subcellular locations are not significantly different between all human proteins and protein targets of known, marketed drugs. Increased protein length appears to be a significant mechanism by which the increasing complexity of protein-protein interaction networks is accommodated within the natural evolution of species. Consideration of protein length may be a valuable tool in drug design, one that predicts different strategies for inhibiting interactions in aberrant and normal pathways.
Telomerase Activity and Telomere Length in Daphnia
Schumpert, Charles; Nelson, Jacob; Kim, Eunsuk; Dudycha, Jeffry L.; Patel, Rekha C.
2015-01-01
Telomeres, comprised of short repetitive sequences, are essential for genome stability and have been studied in relation to cellular senescence and aging. Telomerase, the enzyme that adds telomeric repeats to chromosome ends, is essential for maintaining the overall telomere length. A lack of telomerase activity in mammalian somatic cells results in progressive shortening of telomeres with each cellular replication event. Mammals exhibit high rates of cell proliferation during embryonic and juvenile stages but very little somatic cell proliferation occurs during adult and senescent stages. The telomere hypothesis of cellular aging states that telomeres serve as an internal mitotic clock and telomere length erosion leads to cellular senescence and eventual cell death. In this report, we have examined telomerase activity, processivity, and telomere length in Daphnia, an organism that grows continuously throughout its life. Similar to insects, Daphnia telomeric repeat sequence was determined to be TTAGG and telomerase products with five-nucleotide periodicity were generated in the telomerase activity assay. We investigated telomerase function and telomere lengths in two closely related ecotypes of Daphnia with divergent lifespans, short-lived D. pulex and long-lived D. pulicaria. Our results indicate that there is no age-dependent decline in telomere length, telomerase activity, or processivity in short-lived D. pulex. On the contrary, a significant age dependent decline in telomere length, telomerase activity and processivity is observed during life span in long-lived D. pulicaria. While providing the first report on characterization of Daphnia telomeres and telomerase activity, our results also indicate that mechanisms other than telomere shortening may be responsible for the strikingly short life span of D. pulex. PMID:25962144
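The repeat structure described above can be illustrated with a hypothetical helper (not the paper's assay) that counts perfect copies of the insect-type telomeric repeat at the 3' end of a sequence:

```python
# Hypothetical helper (not the paper's telomerase assay): count perfect
# copies of the Daphnia/insect telomeric repeat TTAGG at the 3' end of a
# DNA sequence given as a string.
def terminal_repeat_count(seq, repeat="TTAGG"):
    count = 0
    while seq.endswith(repeat):
        count += 1
        seq = seq[: -len(repeat)]
    return count

terminal_repeat_count("ACGT" + "TTAGG" * 3)  # -> 3
```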
How Cells Measure Length on Subcellular Scales.
Marshall, Wallace F
2015-12-01
Cells are not just amorphous bags of enzymes, but precise and complex machines. With any machine, it is important that the parts be of the right size, yet our understanding of the mechanisms that control size of cellular structures remains at a rudimentary level in most cases. One problem with studying size control is that many cellular organelles have complex 3D structures that make their size hard to measure. Here we focus on linear structures within cells, for which the problem of size control reduces to the problem of length control. We compare and contrast potential mechanisms for length control to understand how cells solve simple geometry problems. PMID:26437596
The minimal length and quantum partition functions
NASA Astrophysics Data System (ADS)
Abbasiyan-Motlaq, M.; Pedram, P.
2014-08-01
We study the thermodynamics of various physical systems in the framework of the generalized uncertainty principle that implies a minimal length uncertainty proportional to the Planck length. We present a general scheme to analytically calculate the quantum partition function of the physical systems to first order of the deformation parameter based on the behavior of the modified energy spectrum and compare our results with the classical approach. Also, we find the modified internal energy and heat capacity of the systems for the anti-Snyder framework.
Apparatus for fabricating continuous lengths of superconductor
Kroeger, Donald M.; List, III, Frederick A.
2002-01-01
A process and apparatus for manufacturing a superconductor. The process is accomplished by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon, overlaying a continuous length of a second substrate ribbon on said first substrate ribbon, and applying sufficient pressure to form a bound layered superconductor comprising a layer of said superconducting precursor powder between said first substrate ribbon and said second substrate ribbon. The layered superconductor is then heat treated to establish the superconducting phase of said superconductor precursor powder.
Apparatus for fabricating continuous lengths of superconductor
Kroeger, Donald M.; List, III, Frederick A.
2001-01-01
A process and apparatus for manufacturing a superconductor. The process is accomplished by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon, overlaying a continuous length of a second substrate ribbon on said first substrate ribbon, and applying sufficient pressure to form a bound layered superconductor comprising a layer of said superconducting precursor powder between said first substrate ribbon and said second substrate ribbon. The layered superconductor is then heat treated to establish the superconducting phase of said superconductor precursor powder.
Sighting optics including an optical element having a first focal length and a second focal length
Crandall, David Lynn
2011-08-01
One embodiment of sighting optics according to the teachings provided herein may include a front sight and a rear sight positioned in spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus, for a user, images of the front sight and the target.
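The optical principle can be sketched with the thin-lens equation: an object sitting at the focal plane images to infinity, so a relaxed eye focused on the distant target also sees the front sight sharply. The distances below are illustrative assumptions, not values from the patent.

```python
# Thin-lens sketch of the dual-focal-length idea: with the first focal
# length matched to the front-sight distance, the front sight's image is
# collimated (at infinity), so it is in focus together with a far target.

def image_distance_m(f_m, d_obj_m):
    """Solve the thin-lens equation 1/f = 1/d_obj + 1/d_img for d_img."""
    if d_obj_m == f_m:
        return float("inf")  # object at the focal plane -> collimated output
    return 1.0 / (1.0 / f_m - 1.0 / d_obj_m)

# Assumed 0.6 m from the rear-sight optical element to the front sight:
collimated = image_distance_m(0.6, 0.6)  # front sight imaged at infinity
```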
Gulliver, John; de Hoogh, Kees; Hoek, Gerard; Vienneau, Danielle; Fecht, Daniela; Hansell, Anna
2016-01-01
Robust methods to estimate historic population air pollution exposures are important tools for epidemiological studies evaluating long-term health effects. We developed land use regression (LUR) models for NO2 exposure in Great Britain for 1991 and explored whether the choice of year-specific or back-extrapolated LUR yields 1) similar LUR variables and model performance, and 2) similar national and regional address-level and small-area concentrations. We constructed two LUR models for 1991 using NO2 concentrations from the diffusion tube monitoring network, one using 75% of all available measurement sites (that over-represent industrial areas), and the other using 75% of a subset of sites proportionate to population by region to study the effects of monitoring site selection bias. We compared, using the remaining (hold-out) 25% of monitoring sites, the performance of the two 1991 models with back-extrapolation of a previously published 2009 model, developed using NO2 concentrations from automatic chemiluminescence monitoring sites and predictor variables from 2006/2007. The 2009 model was back-extrapolated to 1991 using the same predictors (1990 & 1995) used to develop the 1991 models. The 1991 models included industrial land use variables, not present for 2009. The hold-out performance of the 1991 models (mean-squared-error-based R²: 0.62-0.64) was up to 8% higher and ~1 μg/m³ lower in root mean squared error than the back-extrapolated 2009 model, with best performance from the subset of sites representing population exposures. Year-specific and back-extrapolated exposures for residential addresses (n=1,338,399) and small areas (n=10,518) were very highly linearly correlated for Great Britain (r>0.83). This study suggests that the year-specific model for 1991 and back-extrapolation of the 2009 LUR yield similar exposure assessments. PMID:27107225
NASA Technical Reports Server (NTRS)
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
Ford, William Paul; van Orden, Wally
2013-11-25
In this work, an off-shell extrapolation is proposed for the Regge-model NN amplitudes presented in a paper by Ford and Van Orden [Phys. Rev. C 87, 014004 (2013)] and in an eprint by Ford (arXiv:1310.0871 [nucl-th]). The prescriptions for extrapolating these amplitudes for one nucleon off-shell in the initial state are presented. Application of these amplitudes to calculations of deuteron electrodisintegration is presented and compared to the limited available precision data in the kinematical region covered by the Regge model.
J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field
NASA Technical Reports Server (NTRS)
Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.
1991-01-01
Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet.
Manwaring, John; Rothe, Helga; Obringer, Cindy; Foltz, David J.; Baker, Timothy R.; Troutman, John A.; Hewitt, Nicola J.; Goebel, Carsten
2015-09-01
Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as it offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte Km and Vmax values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human
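The clearance and exposure steps described above correspond to standard toxicokinetic relations; a hedged sketch (the well-stirred liver model is assumed here as the clearance model, and all numeric inputs are placeholders, not study values):

```python
# Standard toxicokinetic relations matching the steps described above.
# Assumption: the well-stirred liver model; all inputs are placeholders.

def hepatic_clearance_l_h(liver_blood_flow_l_h, fu_blood, cl_int_l_h):
    """Well-stirred model: CL_h = Q_h * fu * CL_int / (Q_h + fu * CL_int)."""
    fu_clint = fu_blood * cl_int_l_h
    return liver_blood_flow_l_h * fu_clint / (liver_blood_flow_l_h + fu_clint)

def auc_mg_h_per_l(internal_dose_mg, clearance_l_h):
    """Systemic exposure: AUC = dose reaching the circulation / clearance."""
    return internal_dose_mg / clearance_l_h

def cmax_mg_per_l(internal_dose_mg, volume_of_distribution_l):
    """Conservative overestimate: Cmax = internal dose / Vd."""
    return internal_dose_mg / volume_of_distribution_l
```

Note the two limits of the well-stirred model: for very high intrinsic clearance CL_h approaches liver blood flow (flow-limited), while for low intrinsic clearance CL_h approaches fu x CL_int.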
Park, Jae Young; Lim, Gina; Oh, Ki Won; Ryu, Dong Soo; Park, Seonghun; Jeon, Jong Chul; Cheon, Sang Hyeon; Moon, Kyung Hyun; Park, Sejun
2015-01-01
Purpose Anogenital distance (AGD) and the 2:4 digit length ratio appear to provide a reliable guide to fetal androgen exposure. We intended to investigate the current status of penile size and the relationship between penile length and AGD or digit length according to birth weight in Korean newborn infants. Materials and Methods Between May 2013 and February 2014, among a total of 78 newborn male infants, 55 infants were prospectively included in this study. Newborn male infants with a gestational age of 38 to 42 weeks and birth weight >2.5 kg were assigned to the NW group (n=24) and those with a gestational age <38 weeks and birth weight <2.5 kg were assigned to the LW group (n=31). Penile size and other variables were compared between the two groups. Results Stretched penile length of the NW group was 3.3±0.2 cm, which did not differ significantly from that reported in 1987. All parameters including height, weight, penile length, testicular size, AGD, and digit length were significantly lower in the LW group than in the NW group. However, there were no significant differences in AGD ratio or 2:4 digit length ratio between the two groups. Conclusions The penile length of newborn infants has not changed over the last quarter century in Korea. With normal penile appearance, the AGD ratio and 2:4 digit length ratio are consistent irrespective of birth weight, whereas AGD, digit length, and penile length are significantly smaller in newborns with low birth weight. PMID:25763130
The Varying Lengths of Solar Days.
ERIC Educational Resources Information Center
Brimhall, James
1996-01-01
Describes a stimulating interactive group project on measuring the lengths of solar days that directly supports concepts related to orbits, motion, and time in a physics or astronomy curriculum, and has the flexibility to extend from a few days to several weeks. Provides new measurement experiences for students and teachers and opportunities for…
A survey on intron and exon lengths.
Hawkins, J D
1988-01-01
The lengths of introns and exons in various parts of genes of vertebrates, insects, plants and fungi are tabulated. Differences between the various groups of organisms are apparent. The results are discussed and support the idea that, generally speaking, introns were present in primitive genomes, though in some cases they may have been inserted into pre-existing genes. PMID:3057449
Information-theoretic lengths of Jacobi polynomials
NASA Astrophysics Data System (ADS)
Guerrero, A.; Sánchez-Moreno, P.; Dehesa, J. S.
2010-07-01
The information-theoretic lengths of the Jacobi polynomials P_n^(α,β)(x), which are information-theoretic measures (Renyi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [-1, 1] in a complementary but different way from the root-mean-square or standard deviation because, contrary to this measure, they do not refer to any specific point of the interval. The explicit expressions of the Fisher length are given. The Renyi lengths are found by the use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (α, β). The Shannon length, which cannot be exactly calculated because of its logarithmic functional form, is bounded from below by using sharp upper bounds to general densities on [-1, +1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relative to these three quantities are carefully analyzed.
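The quantities involved can be explored numerically. A minimal sketch (assuming the standard definitions of the Rakhmanov density and Shannon length; the paper derives analytic results, this is only a numerical illustration):

```python
# Numerical estimate of the Shannon length exp(S[rho]) for the Rakhmanov
# density rho(x) ~ P_n^(a,b)(x)^2 * (1-x)^a * (1+x)^b on [-1, 1].
import numpy as np
from scipy.integrate import simpson
from scipy.special import eval_jacobi

def shannon_length(n, a, b, m=20001):
    x = np.linspace(-1.0 + 1e-9, 1.0 - 1e-9, m)  # avoid endpoint singularities
    rho = eval_jacobi(n, a, b, x) ** 2 * (1.0 - x) ** a * (1.0 + x) ** b
    rho /= simpson(rho, x=x)                     # normalize to a density
    entropy = -simpson(rho * np.log(rho + 1e-300), x=x)
    return float(np.exp(entropy))
```

As a sanity check, for n = 0 and α = β = 0 the density is uniform on [-1, 1], so the Shannon length equals the full interval length 2, the maximum possible value.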
Hydrodynamic slip length as a surface property.
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G P
2016-02-01
Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems. PMID:26986407
Hydrodynamic slip length as a surface property
NASA Astrophysics Data System (ADS)
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.
2016-02-01
Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems.
Report of the magnet length workshop
1985-12-31
A meeting was held at the Central Design Group (CDG) to discuss magnet length and to recommend a length for the planned Conceptual Design Report (CDR) as well as for magnet R and D. This report is a summary of the findings. Included are the letter from C. Taylor, CDG, convening the meeting, the proposed agenda, a summary of the results, and an appendix containing information presented at the meeting. The discussion mainly centered on four, five, and six dipoles per (100 m) half-cell. The magnetic lengths are approximately 16.6 m per dipole for five dipoles per half-cell (the ROS length as well as that of the first R and D magnet now under construction), 20.75 m for four dipoles per half-cell, and 13.8 m for six dipoles per half-cell. Cost estimates are given. The apparent cost advantage of the longer units could be partially offset if the aperture can be adjusted to take advantage of the more uniform average magnetic field that could be realized by sorting; this sorting can be more effective with 20% more (shorter) magnets in the machine.
PHOTOBIOLOGY IMPACT ON COTTON FIBER LENGTH
Technology Transfer Automated Retrieval System (TEKTRAN)
Cotton (Gossypium hirsutum L.) fibers are single elongated cells that extend from the seed coat during development, and fiber length is important to textile quality. It was hypothesized that elongating cotton fibers would be as responsive to far-red light (FR) as elongating cells in seedling hypocot...
Using Similarity to Find Length and Area.
ERIC Educational Resources Information Center
Sandefur, James T.
1994-01-01
Shows a way in which algebra and geometry can be used together to find the lengths and areas of spirals. This method develops better understanding of shapes, similarity, and mathematical connections in students. Discusses spirals embedded in triangles and squares, the Pythagorean theorem, and the area of regular polygons. (MKR)
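The geometric-series idea behind such spiral-length problems can be shown numerically. The function and the ratio used below are illustrative assumptions, not values taken from the article:

```python
# When each successive segment of a spiral built from similar figures is a
# fixed ratio r of the previous one, the total length is a geometric series:
# L = a + a*r + a*r^2 + ... = a / (1 - r) for |r| < 1.
def spiral_length(first_segment, ratio, n_terms=None):
    if n_terms is None:                # infinite spiral, needs |ratio| < 1
        return first_segment / (1.0 - ratio)
    return first_segment * (1.0 - ratio ** n_terms) / (1.0 - ratio)

print(spiral_length(1.0, 0.5))        # infinite spiral of ratio 1/2 -> 2.0
print(spiral_length(1.0, 0.5, 4))     # first four segments -> 1.875
```

The closed form follows directly from similarity: each triangle in the construction scales the previous segment by the same factor.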
Fall Colors, Temperature, and Day Length
ERIC Educational Resources Information Center
Burton, Stephen; Miller, Heather; Roossinck, Carrie
2007-01-01
Along with the bright hues of orange, red, and yellow, the season of fall represents significant changes, such as day length and temperature. These changes provide excellent opportunities for students to use science process skills to examine how abiotic factors such as weather and temperature impact organisms. In this article, the authors describe…
Superintendent Length of Tenure and Student Achievement
ERIC Educational Resources Information Center
Myers, Scott
2011-01-01
This quantitative study, utilizing the backward method of multiple regression, examined the relationship between the length of tenure of a superintendent and academic achievement as defined by the percentage of students who scored "Proficient" or better on the 2008 Third Grade Kansas Reading Assessment. To put this relationship into…
Quark screening lengths in finite temperature QCD
Gocksch, A. (California Univ., Santa Barbara, CA. Inst. for Theoretical Physics)
1990-11-01
We have computed Landau gauge quark propagators in both the confined and deconfined phase of QCD. I discuss the magnitude of the resulting screening lengths as well as aspects of chiral symmetry relevant to the quark propagator. 12 refs., 1 fig., 1 tab.
Optimality Of Variable-Length Codes
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.
1994-01-01
Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
Bunch length measurements using synchrotron light monitor
Ahmad, Mahmoud; Tiefenback, Michael G.
2015-09-01
The bunch length is measured at CEBAF using an invasive technique. The technique depends on applying an energy chirp for the electron bunch and imaging it through a dispersive region. The measurements are taken through Arc1 and Arc2 at CEBAF. The fundamental equations, procedure and the latest results are given.
The persistence length of adsorbed dendronized polymers.
Grebikova, Lucie; Kozhuharov, Svilen; Maroni, Plinio; Mikhaylov, Andrey; Dietler, Giovanni; Schlüter, A Dieter; Ullner, Magnus; Borkovec, Michal
2016-07-21
The persistence length of cationic dendronized polymers adsorbed onto oppositely charged substrates was studied by atomic force microscopy (AFM) and quantitative image analysis. We find that a decrease in the ionic strength leads to an increase of the persistence length, but the nature of the substrate and the generation of the side dendrons influence the persistence length substantially. The strongest effects as the ionic strength is changed are observed for the fourth-generation polymer adsorbed on mica, which is a hydrophilic and highly charged substrate. However, the observed dependence on the ionic strength is much weaker than the one predicted by the Odijk, Skolnick, and Fixman (OSF) theory for semi-flexible chains. Low-generation polymers show a variation with the ionic strength that resembles the one observed for simple and flexible polyelectrolytes in solution. For high-generation polymers, this dependence is weaker. Similar dependencies are found for silica and gold substrates. The observed behavior is probably caused by different extents of screening of the charged groups, which is modified by the polymer generation and, to a lesser extent, by the nature of the substrate. For highly ordered pyrolytic graphite (HOPG), which is a hydrophobic and weakly charged substrate, the electrostatic contribution to the persistence length is much smaller. In the latter case, we suspect that specific interactions between the polymer and the substrate also play an important role. PMID:27353115
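The OSF scaling that the abstract compares against can be sketched in a few lines. The bare persistence length l0 and the charge spacing b below are illustrative placeholders, not values for these dendronized polymers:

```python
import math

# OSF electrostatic persistence length: l_OSF = l_B / (4 * kappa^2 * b^2),
# where l_B is the Bjerrum length, kappa the inverse Debye length, and b
# the spacing between charges. This predicts l_OSF proportional to 1/I.
L_B = 0.714e-9  # Bjerrum length in water at 25 C, m

def debye_kappa(ionic_strength_m):
    # Inverse Debye length in 1/m, via the shorthand 1/kappa = 0.304 nm / sqrt(I [M])
    return math.sqrt(ionic_strength_m) / 0.304e-9

def osf_persistence(ionic_strength_m, l0=5e-9, b=0.25e-9):
    # Total persistence length = bare part + OSF electrostatic part (l0, b assumed)
    kappa = debye_kappa(ionic_strength_m)
    return l0 + L_B / (4.0 * kappa**2 * b**2)

for i_m in (0.001, 0.01, 0.1):
    print(f"I = {i_m} M -> l_p ~ {osf_persistence(i_m) * 1e9:.1f} nm")
```

The strong 1/I dependence printed here is exactly what the AFM data are reported to violate, showing a much weaker variation with ionic strength.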
Exploring Segment Lengths on the Geoboard
ERIC Educational Resources Information Center
Ellis, Mark W.; Pagni, David
2008-01-01
Given a 5-peg by 5-peg geoboard, how many different lengths can be made by stretching a rubber band to form an oblique segment between any two pegs? This investigation requires students to make connections to the Pythagorean theorem, congruence, and combinations. With its use of visual representation and a range of mathematical ideas that can be…
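The count in question can be checked directly. The short script below assumes the usual convention of pegs at integer coordinates 0-4, with "oblique" meaning the segment is neither horizontal nor vertical:

```python
from math import sqrt

# Pegs of a 5x5 geoboard sit at integer coordinates 0..4. An oblique
# segment joins two pegs differing in both x and y, so its length is
# sqrt(dx^2 + dy^2) with dx, dy each in 1..4.
squared = {dx * dx + dy * dy for dx in range(1, 5) for dy in range(1, 5)}
lengths = sorted(sqrt(s) for s in squared)
print(len(lengths), [round(l, 3) for l in lengths])
```

The enumeration yields ten distinct oblique lengths, from sqrt(2) up to the main diagonal sqrt(32).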
Telomere length in human liver diseases.
Urabe, Y; Nouso, K; Higashi, T; Nakatsukasa, H; Hino, N; Ashida, K; Kinugasa, N; Yoshida, K; Uematsu, S; Tsuji, T
1996-10-01
To determine the role of telomere-mediated gene stability in hepatocarcinogenesis, we examined the telomere length of human liver with or without chronic liver diseases and hepatocellular carcinomas (HCC). The mean telomere restriction fragment (TRF) length of normal liver (n = 13), chronic hepatitis (n = 11), liver cirrhosis (n = 24) and HCC (n = 24) was 7.8 +/- 0.2, 7.1 +/- 0.3, 6.4 +/- 0.2 and 5.2 +/- 0.2 kb, respectively (mean +/- standard error). TRF length decreased with a progression of chronic liver diseases and that in HCC was significantly shorter than that in other chronic liver diseases (p < 0.05). The ratios of TRF length of HCC to that of corresponding surrounding liver of well differentiated (n = 7), moderately differentiated (n = 10) and poorly differentiated (n = 4) HCCs were 0.83 +/- 0.06, 0.75 +/- 0.05 and 0.98 +/- 0.09, respectively. The ratio of poorly differentiated HCC was significantly higher than that of moderately differentiated HCC (p < 0.05). A comparison between the size and telomere length ratio of moderately differentiated HCCs revealed a decrease of the ratio with size until it reached 50 mm in diameter. In contrast, the ratio increased as the size enlarged over 50 mm. These findings suggest that the gene stability of the liver cells mediated by the telomere is reduced as chronic liver disease progresses and that telomerase is activated in poorly differentiated HCC and moderately differentiated HCC over 50 mm in diameter. PMID:8938628
ERIC Educational Resources Information Center
Passman, Roger
This paper grew out of the collaborative relationship that emerged from in-class modeling of student-centered writing approaches as participating teachers and a consultant/researcher began to explore ways to increase the length of fourth-grade writing. The paper reports on a small study in fourth-grade writing aimed at increasing the length of…
Extrapolation of IAPWS-IF97 data: The saturation pressure of H2O in the critical region
NASA Astrophysics Data System (ADS)
Ustyuzhanin, E. E.; Ochkov, V. F.; Shishakov, V. V.; Rykov, A. V.
2015-11-01
Several literature sources and web sites are analyzed in this report. These sources contain information about thermophysical properties of H2O, including the vapor pressure Ps. The (Ps,T)-data take the form of the international standard tables known as "IAPWS-IF97 data". Our analysis shows that traditional databases represent (Ps,T)-data at t > 0.002, where t = (Tc - T)/Tc is the reduced temperature. It is therefore of interest to extrapolate the IAPWS-IF97 data into the critical region and obtain (Ps,T)-data at t < 0.002. We have considered several equations Ps(t) and found that previous models do not follow the power laws of scaling theory (ST). A combined model (CM) is chosen as a form, F(t, D, B), to express the function ln(Ps/Pc) in the critical region including t < 0.002, where D = (α, Pc, Tc, ...) are critical characteristics and B are adjustable coefficients. The CM has a combined structure with scaling and regular parts; the power laws of ST are taken into account in elaborating F(t, D, B). The adjustable coefficients B are determined by fitting the CM to input (Ps,T)-points belonging to the IAPWS-IF97 data. With the help of the CM, results are obtained in the critical region, including values of the first and second derivatives of Ps(T). Several models Ps(T) are compared with the CM.
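The fit-then-extrapolate scheme described here can be sketched as follows. The basis functions, the exponent value α = 0.11, and the synthetic data are illustrative assumptions standing in for the authors' CM and for the IAPWS-IF97 input points; only the critical constants Tc and Pc are the accepted IAPWS values:

```python
import numpy as np

# Represent ln(Ps/Pc) as a regular part (t, t^2) plus a scaling term
# t**(2 - alpha), with t = (Tc - T)/Tc; coefficients B are fit to "data"
# at t > 0.002 and the model is then evaluated at t < 0.002.
alpha = 0.11                      # critical exponent (assumed value)
Tc, Pc = 647.096, 22.064e6        # IAPWS critical point of H2O (K, Pa)

def basis(t):
    return np.column_stack([t, t ** 2, t ** (2.0 - alpha)])

# Synthetic stand-in for IAPWS-IF97 (Ps, T) points at t > 0.002.
t = np.linspace(0.002, 0.1, 40)
b_true = np.array([-7.9, 2.1, 1.3])   # invented coefficients
lnP = basis(t) @ b_true

# Determine adjustable coefficients B by linear least squares ...
b_fit, *_ = np.linalg.lstsq(basis(t), lnP, rcond=None)

# ... then extrapolate into the critical region t < 0.002.
t_crit = np.array([1e-4, 1e-3])
Ps = Pc * np.exp(basis(t_crit) @ b_fit)
print(Ps)
```

The actual CM involves more structure (and constraints from ST power laws), but the workflow — fit B on tabulated points, evaluate F(t, D, B) closer to Tc — is the same.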
NASA Technical Reports Server (NTRS)
Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.
2001-01-01
The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.
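The random-breakage model invoked for sparsely ionizing radiation can be simulated in a few lines. The genome length and break count below are illustrative, not fitted parameters from the study:

```python
import random

# Random-breakage model: double-strand breaks fall uniformly at random
# along the genome, so fragment sizes are the spacings between sorted
# break positions (plus the two chromosome ends).
def random_breakage(genome_length, n_breaks, rng):
    breaks = sorted(rng.uniform(0, genome_length) for _ in range(n_breaks))
    edges = [0.0] + breaks + [float(genome_length)]
    return [b - a for a, b in zip(edges, edges[1:])]

rng = random.Random(1)
fragments = random_breakage(6e9, 40, rng)   # hypothetical genome and dose
print(len(fragments), f"mean fragment ~ {sum(fragments) / len(fragments):.2e} bp")
```

For densely ionizing radiation the abstract's point is precisely that this uniform model fails, which is why the Monte Carlo track-structure simulation with the clustering parameter Q is needed.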
Oyeyemi, Victor B.; Pavone, Michele; Carter, Emily A.
2011-11-03
Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these particular properties (BDEs) are challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: (1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; (2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and (3) DFT-B3LYP calculations of minimumenergy structures and vibrational frequencies to account for zero point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values of C*C and C*H bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to make predictions of BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules.
Maingi, R
2014-07-01
Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. The two baseline strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R & D. In addition, recent progress in ELM-free regimes, namely Quiescent H-mode, I-mode, and Enhanced Pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.
NASA Astrophysics Data System (ADS)
Tassis, Konstantinos; Pavlidou, Vasiliki
2015-07-01
Recent Planck results have shown that radiation from the cosmic microwave background passes through foregrounds in which aligned dust grains produce polarized dust emission, even in regions of the sky with the lowest level of dust emission. One of the most commonly used ways to remove the dust foreground is to extrapolate the polarized dust emission signal from frequencies where it dominates (e.g. ˜350 GHz) to frequencies commonly targeted by cosmic microwave background experiments (e.g. ˜150 GHz). In this Letter, we describe an interstellar medium effect that can lead to decorrelation of the dust emission polarization pattern between different frequencies due to multiple contributions along the line of sight. Using a simple two-cloud model we show that there are two conditions under which this decorrelation can be large: (a) the ratio of polarized intensities between the two clouds changes between the two frequencies; (b) the magnetic fields of the two clouds contributing along a line of sight are significantly misaligned. In such cases, the 350 GHz polarized sky map is not predictive of that at 150 GHz. We propose a possible correction for this effect, using information from optopolarimetric surveys of dichroically absorbed starlight.
Campbell, Jerry L; Yoon, Miyoung; Clewell, Harvey J
2015-06-01
Parabens have been reported as potential endocrine disrupters and are widely used in consumer products including cosmetics, foods and pharmaceuticals. We report on the development of a PBPK model for methyl-, propyl-, and butylparaben. The model was parameterized through a combination of QSAR for tissue solubility and quantitative in vitro to in vivo extrapolation (IVIVE) for hydrolysis in portals of entry including intestine and skin as well as in the primary site of metabolism, the liver. Overall, the model provided very good agreement with published time-course data in blood and urine from controlled dosing studies in rat and human, and demonstrates the potential value of quantitative IVIVE in expanding the use of human biomonitoring data in safety assessment. An in vitro based cumulative margin of safety (MOS) was calculated by comparing the effective concentrations from an in vitro assay of estrogenicity to the free paraben concentrations predicted by the model to be associated with the 95th percentile urine concentrations reported in NHANES (2009-2010 collection period). The calculated MOS for adult females was 108, whereas the MOS for males was 444. PMID:25839974
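The MOS calculation described amounts to a ratio of an in vitro effective concentration to the PBPK-predicted free concentration. The inputs below are placeholders, not the paper's values (which yielded MOS = 108 for adult females and 444 for males):

```python
# Margin of safety: how far the in vitro effective concentration sits
# above the model-predicted free concentration at the exposure of
# interest (here, the NHANES 95th-percentile-derived level).
def margin_of_safety(in_vitro_effective_conc, predicted_free_conc):
    # Both concentrations must be in the same units (e.g. micromolar).
    return in_vitro_effective_conc / predicted_free_conc

print(margin_of_safety(10.0, 0.05))  # placeholder inputs -> 200.0
```

A MOS well above 1 indicates that predicted internal exposures sit far below concentrations active in the estrogenicity assay.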
Gillen, K.T.; Wise, J.; Celina, M.; Clough, R.L.
1997-09-01
Because of the need to significantly extend the lifetimes of weapons, and because of potential implications of environmental O-ring failure on degradation of critical internal weapon components, the authors have been working on improved methods of predicting and verifying O-ring lifetimes. In this report, they highlight the successful testing of a new predictive method for deriving more confident lifetime extrapolations. This method involves ultrasensitive oxygen consumption measurements. The material studied is an EPDM formulation use for the environmental O-ring the W88. Conventional oven aging (155 C to 111 C) was done on compression molded sheet material; periodically, samples were removed from the ovens and subjected to various measurements, including ultimate tensile elongation, density and modulus profiles. Compression stress relaxation (CSR) measurements were made at 125 C and 111 C on disc shaped samples (12.7 mm diameter by 6 mm thick) using a Shawbury Wallace Compression Stress Relaxometer MK 2. Oxygen consumption measurements were made versus time, at temperatures ranging from 160 C to 52 C, using chromatographic quantification of the change in oxygen content caused by reaction with the EPDM material in sealed containers.
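The Arrhenius reasoning underlying this kind of accelerated-aging extrapolation can be sketched as follows. The activation energy and the 1000 h failure time are illustrative assumptions, not the measured EPDM values:

```python
import math

# Accelerated aging rests on k(T) = A * exp(-Ea / (R * T)): a degradation
# rate measured at high oven temperature is shifted down to the service
# temperature, and time-to-failure scales inversely with the rate.
R = 8.314  # gas constant, J/(mol*K)

def acceleration_factor(t_aging_c, t_service_c, ea_j_mol):
    t1 = t_aging_c + 273.15
    t2 = t_service_c + 273.15
    # ratio of rates k(T1)/k(T2) = ratio of lifetimes t_service/t_aging
    return math.exp(ea_j_mol / R * (1.0 / t2 - 1.0 / t1))

# Hypothetical: 1000 h to failure in a 155 C oven, service at 52 C,
# assumed Ea = 90 kJ/mol.
af = acceleration_factor(155.0, 52.0, 90e3)
print(f"acceleration factor ~{af:.0f}; extrapolated life ~{1000 * af:.0f} h")
```

The point of the ultrasensitive oxygen-consumption measurements is that they extend rate data down toward service temperature (52 C here), so the extrapolation leans far less on an assumed single Ea.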
NASA Technical Reports Server (NTRS)
Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.
1992-01-01
Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short duration period in October using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions of greater than 500 nmol/m(sup -2)h(sup -1) occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of approximately 60 nmol/m(sup -2)h(sup -1) which were similar to fluxes from the black mangrove areas which dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted especially in the freshwater areas. Spectral data from a scene from the Landsat thematic mapper were used to map habitats in the Everglades. Six vegetation categories were delineated using geographical information system software and S gas emission were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions to the area. The large area extent of the saw grass communities (42 percent) accounted for approximately 24 percent of the total S emissions.
Including higher energy data in the R-matrix extrapolation of 12C(α,γ)16O
NASA Astrophysics Data System (ADS)
Deboer, R.; Uberseder, E.; Azuma, R. E.; Best, A.; Brune, C.; Goerres, J.; Sayre, D.; Smith, K.; Wiescher, M.
2015-10-01
The phenomenological R-matrix technique has proved to be very successful in describing the cross sections of interest to nuclear astrophysics. One of the key reactions is 12C(α,γ)16O, which has frequently been analyzed using R-matrix but usually over a limited energy range. This talk will present an analysis that, for the first time, extends above the proton and α1 separation energies, taking advantage of a large amount of additional data. The analysis uses the newly released public JINA R-matrix code AZURE2. The traditional reaction channels of 12C(α,γ)16O, 12C(α,α0)12C, and 16N(βα)12C are included but are now accompanied by the higher energy reactions. By explicitly including higher energy levels, the uncertainty in the extrapolation of the cross section is significantly reduced. This is accomplished through more stringent constraints on interference combinations and background poles from the additional higher energy data, and by considering new information about subthreshold states from transfer reactions. The result is the most comprehensive R-matrix analysis of the 12C(α,γ)16O reaction to date. This research was supported in part by the ND CRC and funded by the NSF through Grant No. Phys-0758100, and JINA through Grant No. Phys-0822648.
NASA Astrophysics Data System (ADS)
Ducasse, Q.; Jurado, B.; Mathieu, L.; Marini, P.; Morillon, B.; Aiche, M.; Tsekhanovich, I.
2016-08-01
The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the 238U(d,p)239U and 238U(3He,d)239Np reactions. We have performed Hauser-Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of 239Np below the neutron separation energy allowed us to validate the EXEM.
NASA Technical Reports Server (NTRS)
Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.
1993-01-01
Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short duration period in October using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions of over 500 nmol/sq m/h occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of about 60 nmol/sq m/h, which were similar to fluxes from the black mangrove areas which dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted especially in the freshwater areas. Spectral data from a scene from the Landsat TM were used to map habitats in the Everglades. Six vegetation categories were delineated using geographical information system software and S gas emissions were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions to the area. The large area extent of the saw grass communities accounted for about 24 percent of the total S emissions.
Sprecher, D; Beyer, M; Merkt, F
2013-01-01
Recent experiments are reviewed which have led to the determination of the ionization and dissociation energies of molecular hydrogen with a precision of 0.0007 cm^-1 (8 mJ/mol or 20 MHz), using a procedure based on high-resolution spectroscopic measurements of high Rydberg states and the extrapolation of the Rydberg series to the ionization thresholds. Molecular hydrogen, with only two protons and two electrons, is the simplest molecule with which all aspects of a chemical bond, including electron correlation effects, can be studied. Highly precise values of its ionization and dissociation energies provide stringent tests of the precision of molecular quantum mechanics and of quantum-electrodynamics calculations in molecules. The comparison of experimental and theoretical values for these quantities enables one to quantify the contributions to a chemical bond that are neglected when making the Born-Oppenheimer approximation, i.e. adiabatic, nonadiabatic, relativistic, and radiative corrections. Ionization energies of a broad range of molecules can now be determined experimentally with high accuracy (about 0.01 cm^-1). Calculations at similar accuracies are extremely challenging for systems containing more than two electrons. The combination of precision measurements of molecular ionization energies with highly accurate ab initio calculations has the potential to provide, in the future, fully reliable sets of thermochemical quantities for gas-phase reactions. PMID:23967701
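The series-extrapolation idea can be sketched numerically. The level list below is synthetic, generated from the Rydberg formula with an assumed quantum defect, and mass corrections to the Rydberg constant are ignored; only the target value near 124417 cm^-1 corresponds to the actual H2 ionization energy:

```python
import numpy as np

# Rydberg levels follow E_n = IP - R / (n - delta)**2, so fitting a
# measured series (E_n vs n) and taking n -> infinity yields the
# ionization threshold IP.
RYD = 109737.3  # Rydberg constant R_infinity, cm^-1 (mass correction ignored)

def series(n, ip, delta):
    return ip - RYD / (n - delta) ** 2

# Synthetic "measured" levels with quantum defect 0.05 and IP = 124417 cm^-1.
n = np.arange(20, 61, 5, dtype=float)
e_obs = series(n, 124417.0, 0.05)

# For the right delta, e_obs + R/(n - delta)^2 is constant (= IP); scan
# delta and pick the value that flattens the series best.
best = min((np.ptp(e_obs + RYD / (n - d) ** 2), d)
           for d in np.linspace(0.0, 0.2, 2001))
delta = best[1]
ip = float(np.mean(e_obs + RYD / (n - delta) ** 2))
print(f"delta ~ {delta:.3f}, IP ~ {ip:.1f} cm^-1")
```

The real analysis works with multichannel quantum-defect theory rather than this single-channel flattening trick, but the principle — convert high-n level positions into a threshold — is the same.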
Powers, Jennifer S.; Corre, Marife D.; Twine, Tracy E.; Veldkamp, Edzo
2011-01-01
Accurately quantifying changes in soil carbon (C) stocks with land-use change is important for estimating the anthropogenic fluxes of greenhouse gases to the atmosphere and for implementing policies such as REDD (Reducing Emissions from Deforestation and Degradation) that provide financial incentives to reduce carbon dioxide fluxes from deforestation and land degradation. Despite hundreds of field studies and at least a dozen literature reviews, there is still considerable disagreement on the direction and magnitude of changes in soil C stocks with land-use change. We conducted a meta-analysis of studies that quantified changes in soil C stocks with land use in the tropics. Conversion from one land use to another caused significant increases or decreases in soil C stocks for 8 of the 14 transitions examined. For the three land-use transitions with sufficient observations, both the direction and magnitude of the change in soil C pools depended strongly on biophysical factors of mean annual precipitation and dominant soil clay mineralogy. When we compared the distribution of biophysical conditions of the field observations to the area-weighted distribution of those factors in the tropics as a whole or the tropical lands that have undergone conversion, we found that field observations are highly unrepresentative of most tropical landscapes. Because of this geographic bias we strongly caution against extrapolating average values of land-cover change effects on soil C stocks, such as those generated through meta-analysis and literature reviews, to regions that differ in biophysical conditions. PMID:21444813
Powers, Jennifer S; Corre, Marife D; Twine, Tracy E; Veldkamp, Edzo
2011-04-12
Accurately quantifying changes in soil carbon (C) stocks with land-use change is important for estimating the anthropogenic fluxes of greenhouse gases to the atmosphere and for implementing policies such as REDD (Reducing Emissions from Deforestation and Degradation) that provide financial incentives to reduce carbon dioxide fluxes from deforestation and land degradation. Despite hundreds of field studies and at least a dozen literature reviews, there is still considerable disagreement on the direction and magnitude of changes in soil C stocks with land-use change. We conducted a meta-analysis of studies that quantified changes in soil C stocks with land use in the tropics. Conversion from one land use to another caused significant increases or decreases in soil C stocks for 8 of the 14 transitions examined. For the three land-use transitions with sufficient observations, both the direction and magnitude of the change in soil C pools depended strongly on biophysical factors of mean annual precipitation and dominant soil clay mineralogy. When we compared the distribution of biophysical conditions of the field observations to the area-weighted distribution of those factors in the tropics as a whole or the tropical lands that have undergone conversion, we found that field observations are highly unrepresentative of most tropical landscapes. Because of this geographic bias we strongly caution against extrapolating average values of land-cover change effects on soil C stocks, such as those generated through meta-analysis and literature reviews, to regions that differ in biophysical conditions. PMID:21444813
On Sources of the Word Length Effect in Young Readers
ERIC Educational Resources Information Center
Gagl, Benjamin; Hawelka, Stefan; Wimmer, Heinz
2015-01-01
We investigated how letter length, phoneme length, and consonant clusters contribute to the word length effect in 2nd- and 4th-grade children. They read words from three different conditions: In one condition, letter length increased but phoneme length did not, due to multiletter graphemes (Haus-Bauch-Schach). In…
The effect of an increase in chain length on the mechanical properties of polyethylene glycols.
Al-Nasassrah, M A; Podczeck, F; Newton, J M
1998-07-01
The mechanical properties of different molecular weights of polyethylene glycol (PEG) have been determined by formation of compacted tablets and beams, which were subjected to diametral compression and 3-point bending, respectively. From diametral compression, the tensile strength for the different grades of PEG was determined. Flat beams made from powder by compaction were used to determine Young's modulus of elasticity. Beams into which a notch had been introduced after formation allowed the fracture mechanical parameters of critical stress intensity factor, K(IC), and fracture toughness, R, to be determined. Evaluation of these parameters as a function of compact porosity allowed extrapolation to values at zero porosity, providing an estimate of the material properties. The increase in chain length of the PEG was found to have a non-linear effect on tensile strength and Young's modulus. The ductility of the polymer increased proportionally to the increase in chain length, reflected by the linear relationship between K(IC) and the molecular weight. Young's modulus and critical stress intensity factor allowed the estimation of the strain energy release rate, G(IC), which is the driving force in crack propagation. Consequently, the tensile strength at zero porosity was found to be predictable from the values of G(IC) and the molecular weight of the different grades of PEG. PMID:9700020
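The zero-porosity extrapolation mentioned here is commonly done with an exponential porosity model. The Ryshkewitch-Duckworth form and the data points below are assumptions for illustration; the paper does not state which porosity relation it used:

```python
import math
import numpy as np

# Ryshkewitch-Duckworth relation: sigma = sigma0 * exp(-b * P), so
# ln(sigma) is linear in porosity P and the intercept at P = 0 gives
# the zero-porosity (material) strength sigma0.
porosity = np.array([0.05, 0.10, 0.15, 0.20])   # invented compact porosities
strength = np.array([2.9, 2.4, 2.0, 1.65])      # invented strengths, MPa

slope, ln_sigma0 = np.polyfit(porosity, np.log(strength), 1)
sigma0 = math.exp(ln_sigma0)                     # extrapolated to zero porosity
print(f"sigma0 (zero porosity) ~ {sigma0:.2f} MPa, b = {-slope:.2f}")
```

The same linearize-and-extrapolate step applies to Young's modulus and the fracture parameters K(IC) and R evaluated as functions of compact porosity.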
Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang
2014-05-10
Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.
NASA Astrophysics Data System (ADS)
Jiang, Chaowei; Wu, S. T.; Feng, Xueshang; Hu, Qiang
2014-05-01
Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength <~ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.
King, A.W.
1986-01-01
Ecological models of the seasonal exchange of carbon dioxide (CO2) between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO2 concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. The primary disadvantage of this approach is the problem of extrapolating the site-specific models across large regions having considerable biotic, climatic, and edaphic heterogeneity. Two methods of extrapolation were tested. The first approach was a simple extrapolation that assumed relative within-biome homogeneity, and generated CO2 source functions that differed dramatically from published estimates of CO2 exchange. The second extrapolation explicitly incorporated within-biome variability in the abiotic variables that drive seasonal biosphere-atmosphere CO2 exchange.
Scholze, Martin; Silva, Elisabete; Kortenkamp, Andreas
2014-01-01
Dose addition, a commonly used concept in toxicology for the prediction of chemical mixture effects, cannot readily be applied to mixtures of partial agonists with differing maximal effects. Due to its mathematical features, effect levels that exceed the maximal effect of the least efficacious compound present in the mixture cannot be calculated. This poses problems when dealing with mixtures likely to be encountered in realistic assessment situations, where chemicals often show differing maximal effects. To overcome this limitation, we developed a pragmatic solution that extrapolates the toxic units of partial agonists to effect levels beyond their maximal efficacy. We extrapolated different additivity expectations that reflect theoretically possible extremes and validated this approach with a mixture of 21 estrogenic chemicals in the E-Screen. This assay measures the proliferation of human breast cancer epithelial cells. We found that the dose-response curves of the estrogenic agents exhibited widely varying shapes, slopes and maximal effects, which made it necessary to extrapolate mixture responses above 14% proliferation. Our toxic unit extrapolation approach predicted all mixture responses accurately. It extends the applicability of dose addition to combinations of agents with differing saturating effects and removes an important bottleneck that has severely hampered the use of dose addition in the past. PMID:24533151
In practice, it is neither feasible nor ethical to conduct toxicity tests with all species that may be impacted by chemical exposures. Therefore, cross-species extrapolation is fundamental to human health and ecological risk assessment. The extensive chemical universe for which w...
cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals.
Wilson, V.S.; Korte, J.; Hartig, P.; Ankley, G.T.; Gray, L.E., Jr.; Welch, J.E., U.S...
de Roche, Mirjam; Siccardi, Marco; Stoeckle, Marcel; Livio, Françoise; Back, David; Battegay, Manuel; Marzolini, Catia
2012-01-01
Here, we report suboptimal efavirenz exposure in an obese patient treated with the standard 600 mg dose. Tripling the dose allowed attainment of therapeutic efavirenz concentrations. We developed an in vitro-in vivo extrapolation model to quantify dose requirements in obese individuals. Obesity represents a risk factor for antiretroviral therapy underdosing. PMID:22910127
Low-dose extrapolation model selection for evaluating the health effects of environmental pollutants is a key component of the risk assessment process. At a workshop held in Baltimore, MD, on April 23-24, 2007, and sponsored by U.S. Environmental Protection Agency (EPA) and Johns...
Random Test Run Length and Effectiveness
NASA Technical Reports Server (NTRS)
Andrews, James H.; Groce, Alex; Weston, Melissa; Xu, Ru-Gang
2008-01-01
A poorly understood but important factor in many applications of random testing is the selection of a maximum length for test runs. Given a limited time for testing, it is seldom clear whether executing a small number of long runs or a large number of short runs maximizes utility. It is generally expected that longer runs are more likely to expose failures -- which is certainly true with respect to runs shorter than the shortest failing trace. However, longer runs produce longer failing traces, requiring more effort from humans in debugging or more resources for automated minimization. In testing with feedback, increasing ranges for parameters may also cause the probability of failure to decrease in longer runs. We show that the choice of test length dramatically impacts the effectiveness of random testing, and that the patterns observed in simple models and predicted by analysis are useful in understanding effects observed.
Flux saturation length of sediment transport.
Pähtz, Thomas; Kok, Jasper F; Parteli, Eric J R; Herrmann, Hans J
2013-11-22
Sediment transport along the surface drives geophysical phenomena as diverse as wind erosion and dune formation. The main length scale controlling the dynamics of sediment erosion and deposition is the saturation length Ls, which characterizes the flux response to a change in transport conditions. Here we derive, for the first time, an expression predicting Ls as a function of the average sediment velocity under different physical environments. Our expression accounts for both the characteristics of sediment entrainment and the saturation of particle and fluid velocities, and has only two physical parameters which can be estimated directly from independent experiments. We show that our expression is consistent with measurements of Ls in both aeolian and subaqueous transport regimes over at least 5 orders of magnitude in the ratio of fluid and particle density, including on Mars. PMID:24313529
Flux Saturation Length of Sediment Transport
NASA Astrophysics Data System (ADS)
Pähtz, Thomas; Kok, Jasper F.; Parteli, Eric J. R.; Herrmann, Hans J.
2013-11-01
Sediment transport along the surface drives geophysical phenomena as diverse as wind erosion and dune formation. The main length scale controlling the dynamics of sediment erosion and deposition is the saturation length Ls, which characterizes the flux response to a change in transport conditions. Here we derive, for the first time, an expression predicting Ls as a function of the average sediment velocity under different physical environments. Our expression accounts for both the characteristics of sediment entrainment and the saturation of particle and fluid velocities, and has only two physical parameters which can be estimated directly from independent experiments. We show that our expression is consistent with measurements of Ls in both aeolian and subaqueous transport regimes over at least 5 orders of magnitude in the ratio of fluid and particle density, including on Mars.
The persistence length of adsorbed dendronized polymers
NASA Astrophysics Data System (ADS)
Grebikova, Lucie; Kozhuharov, Svilen; Maroni, Plinio; Mikhaylov, Andrey; Dietler, Giovanni; Schlüter, A. Dieter; Ullner, Magnus; Borkovec, Michal
2016-07-01
The persistence length of cationic dendronized polymers adsorbed onto oppositely charged substrates was studied by atomic force microscopy (AFM) and quantitative image analysis. One can find that a decrease in the ionic strength leads to an increase of the persistence length, but the nature of the substrate and of the generation of the side dendrons influence the persistence length substantially. The strongest effects as the ionic strength is being changed are observed for the fourth generation polymer adsorbed on mica, which is a hydrophilic and highly charged substrate. However, the observed dependence on the ionic strength is much weaker than the one predicted by the Odijk, Skolnik, and Fixman (OSF) theory for semi-flexible chains. Low-generation polymers show a variation with the ionic strength that resembles the one observed for simple and flexible polyelectrolytes in solution. For high-generation polymers, this dependence is weaker. Similar dependencies are found for silica and gold substrates. The observed behavior is probably caused by different extents of screening of the charged groups, which is modified by the polymer generation, and to a lesser extent, the nature of the substrate. For highly ordered pyrolytic graphite (HOPG), which is a hydrophobic and weakly charged substrate, the electrostatic contribution to the persistence length is much smaller. In the latter case, we suspect that specific interactions between the polymer and the substrate also play an important role.
Length and Dimensional Measurements at NIST
Swyt, Dennis A.
2001-01-01
This paper discusses the past, present, and future of length and dimensional measurements at NIST. It covers the evolution of the SI unit of length through its three definitions and the evolution of NBS-NIST dimensional measurement from early linescales and gage blocks to a future of atom-based dimensional standards. Current capabilities include dimensional measurements over a range of fourteen orders of magnitude. Uncertainties of measurements on different types of material artifacts range down to 7×10⁻⁸ m at 1 m and 8 picometers (pm) at 300 pm. Current work deals with a broad range of areas of dimensional metrology. These include: large-scale coordinate systems; complex form; microform; surface finish; two-dimensional grids; optical, scanning-electron, atomic-force, and scanning-tunneling microscopies; atomic-scale displacement; and atom-based artifacts.
Distance and Cable Length Measurement System
Hernández, Sergio Elias; Acosta, Leopoldo; Toledo, Jonay
2009-01-01
A simple, economic and successful design for distance and cable length detection is presented. The measurement system is based on the continuous repetition of a pulse that endlessly travels along the distance to be detected. There is a pulse repeater at both ends of the distance or cable to be measured. The endless repetition of the pulse generates a frequency that varies almost inversely with the distance to be measured. The resolution and the distance or cable length range can be adjusted by varying the repetition time delay introduced at both ends and the measurement time. With this design a distance can be measured with centimeter resolution using an electronic system with microsecond resolution, simplifying classical time-of-flight designs, which require electronics with picosecond resolution. This design was also applied to position measurement. PMID:22303169
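The inverse relation between repetition frequency and distance described above can be sketched as follows. One period of the endlessly repeating pulse is a round trip plus the fixed delays inserted by the repeaters at both ends; the propagation speed and total repeater delay used here are illustrative assumptions, not the calibration values of the actual system.

```python
def cable_length(freq_hz, v=2e8, delay_s=1e-6):
    """Estimate cable length (m) from the measured repetition frequency.

    The pulse shuttles endlessly between repeaters at the two ends, so one
    period is a round trip plus the total inserted delay:
        T = 1/f = 2*L/v + delay
    Solving for L gives the length. v (signal speed in the cable) and
    delay_s (combined repeater delay) are assumed example values.
    """
    return (1.0 / freq_hz - delay_s) * v / 2.0
```

For example, with v = 2e8 m/s and a 1 µs total delay, a 100 m cable yields a period of 2 µs, i.e. a 500 kHz repetition frequency, and `cable_length(500e3)` recovers 100 m.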
Relevant length scale of barchan dunes.
Hersen, Pascal; Douady, Stéphane; Andreotti, Bruno
2002-12-23
A new experiment can create small-scale barchan dunes under water: some sand is put on a tray moving periodically and asymmetrically in a water tank, and barchans rapidly form. We measure basic morphological and dynamical properties of these dunes and compare them to field data. These favorable results demonstrate experimentally the relevance of the so-called "saturation length" for the control of dune physics. PMID:12484824
A NOTE ON PERPENDICULAR SCATTERING LENGTHS
Tautz, R. C.
2009-10-01
The problem of cosmic ray diffusion in magnetostatic slab turbulence is revisited. It is known that, for large timescales, the perpendicular diffusion coefficient is subdiffusive. Although, for small timescales, the field line random walk limit should apply, it is shown that the perpendicular motion is dominated by the Larmor orbit, and that no constant scattering length can be seen. It is therefore concluded that, in magnetostatic slab turbulence, perpendicular transport is completely suppressed.
Slip length crossover on a graphene surface
Liang, Zhi; Keblinski, Pawel
2015-04-07
Using equilibrium and non-equilibrium molecular dynamics simulations, we study the flow of argon fluid above the critical temperature in a planar nanochannel delimited by graphene walls. We observe that, as a function of pressure, the slip length first decreases due to the decreasing mean free path of gas molecules, reaches the minimum value when the pressure is close to the critical pressure, and then increases with further increase in pressure. We demonstrate that the slip length increase at high pressures is due to the fact that the viscosity of fluid increases much faster with pressure than the friction coefficient between the fluid and the graphene. This behavior is clearly exhibited in the case of graphene due to a very smooth potential landscape originating from a very high atomic density of graphene planes. By contrast, on surfaces with lower atomic density, such as an (100) Au surface, the slip length for high fluid pressures is essentially zero, regardless of the nature of interaction between fluid and the solid wall.
Determining multiple length scales in rocks
NASA Astrophysics Data System (ADS)
Song, Yi-Qiao; Ryu, Seungoh; Sen, Pabitra N.
2000-07-01
Carbonate reservoirs in the Middle East are believed to contain about half of the world's oil. The processes of sedimentation and diagenesis produce in carbonate rocks microporous grains and a wide range of pore sizes, resulting in a complex spatial distribution of pores and pore connectivity. This heterogeneity makes it difficult to determine by conventional techniques the characteristic pore-length scales, which control fluid transport properties. Here we present a bulk-measurement technique that is non-destructive and capable of extracting multiple length scales from carbonate rocks. The technique uses nuclear magnetic resonance to exploit the spatially varying magnetic field inside the pore space itself, a 'fingerprint' of the pore structure. We found three primary length scales (1-100 µm) in the Middle-East carbonate rocks and determined that the pores are well connected and spatially mixed. Such information is critical for reliably estimating the amount of capillary-bound water in the rock, which is important for efficient oil production. This method might also be used to complement other techniques for the study of shaly sand reservoirs and compartmentalization in cells and tissues.
[Cytoskeletal control of cell length regulation].
Kharitonova, M A; Levina, C M; Rovenskii, I A
2002-01-01
It was shown that mouse embryo fibroblasts and human foreskin diploid fibroblasts of the AGO 1523 line cultivated on specially prepared substrates with narrow (15 +/- 3 microns) linear adhesive strips were elongated and oriented along the strips, but the mean lengths of the fibroblasts of each type on the strips differed from those on the standard culture substrates. In contrast to the normal fibroblasts, the lengths of mouse embryonic fibroblasts with the inactivated suppressor gene Rb, responsible for negative control of cell proliferation (MEF Rb-/-), of ras-transformed mouse embryonic fibroblasts (MEF Rb-/-ras), and of normal rat epitheliocytes of the IAR2 line significantly exceeded those of the same cells on the standard culture substrates. The results of experiments with drugs specifically affecting the cytoskeleton (colcemid and cytochalasin D) suggest that the constant mean length of normal fibroblasts is controlled by a dynamic equilibrium between two forces: the centripetal tension of contractile actin-myosin microfilaments and the centrifugal force generated by growing microtubules. This cytoskeletal mechanism is disturbed in MEF Rb-/- and MEF Rb-/-ras, probably because of an impaired actin cytoskeleton, and also in IAR2 epitheliocytes due to the different organization of the actin-myosin system in these cells, as compared to that in the fibroblasts. PMID:11862697
Importance of cervical length in dysmenorrhoea aetiology.
Zebitay, Ali G; Verit, Fatma F; Sakar, M Nafi; Keskin, Seda; Cetin, Orkun; Ulusoy, A Ibrahim
2016-05-01
The objective of this prospective case-control study was to determine whether uterine corpus and cervical length measurements have a role in dysmenorrhoea aetiology in virgins. Patients with severe primary dysmenorrhoea with visual analog scale scores of ≥7 composed the dysmenorrhoea group (n = 51), while the control group (n = 51) was of women with painless menstrual cycles or with mild pain. Longitudinal and transverse axes of the uterine cervix and uterine corpus were measured. Correlation between severity of dysmenorrhoea and uterine cervix and corpus axes was calculated. Longitudinal and transverse axes of uterine cervix as well as uterine cervix volume were significantly higher in the dysmenorrhoea group compared to the controls. There was a significant positive correlation between severity of dysmenorrhoea and the length of cervical longitudinal and transverse axes and uterine cervical volume. Our findings reveal longer cervical length and greater cervical volume in young virgin patients with dysmenorrhoea and severe pain compared to those with no or less pain. PMID:27012227
Measuring scattering lengths of gaseous samples
NASA Astrophysics Data System (ADS)
Huber, M. G.; Black, T. C.; Haun, R.; Pushin, D. A.; Shahi, C. B.; Weitfeldt, F. E.
2016-03-01
Neutron interferometry represents one of the most precise techniques for measuring the coherent scattering lengths (bc) of particular nuclear isotopes. Currently bc for helium-4 is known only to 1% relative uncertainty, a factor of ten higher than precision measurements of other light isotopes. Scattering lengths are measured using a neutron interferometer by comparing the phase shift a neutron acquires as it passes through a gaseous sample relative to that of a neutron passing through vacuum. The density of the gas is determined by continuous monitoring of the sample's temperature and pressure. Challenges for these types of experiments include achieving the necessary long-term phase stability and accurately determining the phase shift caused by the aluminum cell used to hold the gas, a phase shift many times greater than that of the sample. The present status of the effort to measure the n-4He scattering length at the NIST Center for Neutron Research will be given. Financial support provided by the NSERC 'Create' and 'Discovery' programs, CERC, NIST and NSF Grant PHY-1205342.
Determining multiple length scales in rocks
Song; Ryu; Sen
2000-07-13
Carbonate reservoirs in the Middle East are believed to contain about half of the world's oil. The processes of sedimentation and diagenesis produce in carbonate rocks microporous grains and a wide range of pore sizes, resulting in a complex spatial distribution of pores and pore connectivity. This heterogeneity makes it difficult to determine by conventional techniques the characteristic pore-length scales, which control fluid transport properties. Here we present a bulk-measurement technique that is nondestructive and capable of extracting multiple length scales from carbonate rocks. The technique uses nuclear magnetic resonance to exploit the spatially varying magnetic field inside the pore space itself, a 'fingerprint' of the pore structure. We found three primary length scales (1-100 µm) in the Middle-East carbonate rocks and determined that the pores are well connected and spatially mixed. Such information is critical for reliably estimating the amount of capillary-bound water in the rock, which is important for efficient oil production. This method might also be used to complement other techniques for the study of shaly sand reservoirs and compartmentalization in cells and tissues. PMID:10910355
[Measuring leg length and leg length difference with the method of real time sonography].
Holst, A; Thomas, W
1988-06-01
A brief presentation of the clinical and radiological methods for measuring leg length and leg length difference is followed by an outline of a new diagnostic method for measuring both by means of real-time sonography. Tests conducted on corpses, as well as clinical examples, show that sonography is an ideal method for determining the exact lengths of the femur and tibia. The joint spaces of the hip joint, knee joint and upper ankle joint can be visualised by means of a 5 MHz linear scanner. A 1 mm thick metal bar placed on the skin under the scanner is positioned at a right angle to the longitudinal axis of the body so that the bar can be seen in the centre of each joint space by means of real-time sonography. A measuring device gives the distances between the joint spaces in cm, so that the differences correspond to the real lengths of femur and tibia. This standardised measuring procedure is carried out with a specially developed bearing and measuring device. Sonographic measurements on 20 corpses, checked by subsequent dissection, were exactly accurate for total leg length in 75% of cases. Considered separately, the results for the femur (85%) and tibia (90%) were even better. The maximum sonographic measurement error was 1.0 cm for the femur (in one case) and 0.5 cm for the tibia. Sonographic measurement of leg length thus offers a reliable, non-invasive and easily performed new method that can be repeated any number of times. It is ideal for follow-up of leg length differences, whether therapeutically influenced or changing spontaneously. PMID:3071879
NASA Astrophysics Data System (ADS)
Rhinelander, Marcus Q.; Dawson, Stephen M.
2004-04-01
Multiple pulses can often be distinguished in the clicks of sperm whales (Physeter macrocephalus). Norris and Harvey [in Animal Orientation and Navigation, NASA SP-262 (1972), pp. 397-417] proposed that this results from reflections within the head, and thus that interpulse interval (IPI) is an indicator of head length, and by extrapolation, total length. For this idea to hold, IPIs must be stable within individuals, but differ systematically among individuals of different size. IPI stability was examined in photographically identified individuals recorded repeatedly over different dives, days, and years. IPI variation among dives in a single day and days in a single year was statistically significant, although small in magnitude (it would change total length estimates by <3%). As expected, IPIs varied significantly among individuals. Most individuals showed significant increases in IPIs over several years, suggesting growth. Mean total lengths calculated from published IPI regressions were 13.1 to 16.1 m, longer than photogrammetric estimates of the same whales (12.3 to 15.3 m). These discrepancies probably arise from the paucity of large (12-16 m) whales in data used in published regressions. A new regression is offered for this size range.
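As a sketch of how an IPI-based length regression is applied, the function below uses a simple linear form TL = a + b·IPI. The coefficients are purely hypothetical placeholders; the abstract refers to published regressions but does not quote their values.

```python
def total_length_from_ipi(ipi_ms, a=4.8, b=1.45):
    """Estimate sperm whale total length (m) from interpulse interval (ms).

    Linear regression TL = a + b * IPI, following the Norris and Harvey
    idea that IPI scales with head length and hence total length.
    The coefficients a and b here are illustrative placeholders, NOT the
    published regression coefficients discussed in the abstract.
    """
    return a + b * ipi_ms
```

With these placeholder coefficients, an IPI of 6.0 ms maps to a total length of 13.5 m; a regression refit to an appropriate size range (as the abstract proposes) would simply change a and b.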
Ultrasound velocities for axial eye length measurement.
Hoffer, K J
1994-09-01
Since 1974, I have used individual sound velocities for each eye condition encountered for axial length measurement. The calculation results in 1,555 m/sec for the average phakic eye. A slower speed of 1,549 m/sec was found for an extremely long (30 mm) eye and a higher speed of 1,561 m/sec was noted for an extremely short (20 mm) eye. This inversely proportional velocity change can best be adjusted for by measuring the phakic eye at 1,532 m/sec and correcting the result by dividing the square of the measured axial length (AL1,532)² by the measured axial length (AL1,532) minus 0.35 mm. A velocity of 1,534 m/sec was found for all aphakic eyes regardless of their length, and the correction is clinically significant. The velocity of an eye containing a poly(methyl methacrylate) intraocular lens is not different from that of an average phakic eye, but it does magnify the effect of axial length change. I recommend measuring the pseudophakic eye at 1,532 m/sec and adding to the result (AL1,532) 0.04 mm plus 44% of the IOL thickness. The speed for an eye with a silicone IOL was found to be 1,476 m/sec (or AL1,532 + 0.04 - 56% of IOL thickness) and for glass, 1,549 m/sec (or AL1,532 + 0.04 + 75% of IOL thickness). A speed of 1,139 m/sec was found for a phakic eye with silicone oil filling most of the vitreous cavity and 1,052 m/sec for an aphakic eye filled with oil. For varying volumes of oil, each eye should be calculated individually. The speed was 534 m/sec for phakic eyes filled with gas. Eyes containing a silicone IOL or oil or gas will create clinically significant errors (3 to 10 diopters) if the sound velocity is not corrected. PMID:7996413
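The correction rules quoted in the abstract can be collected into a small helper; a minimal sketch, with the 0.35 mm phakic term and the per-material IOL-thickness percentages taken directly from the text above (function and argument names are our own).

```python
def phakic_axial_length(al_1532_mm):
    """Correct a phakic axial length measured at 1,532 m/sec, per the rule
    in the abstract: square of the measured length divided by (measured
    length minus 0.35 mm)."""
    return al_1532_mm ** 2 / (al_1532_mm - 0.35)

def pseudophakic_axial_length(al_1532_mm, iol_thickness_mm, material="pmma"):
    """Correct a pseudophakic measurement at 1,532 m/sec using the
    material-specific IOL-thickness terms quoted in the abstract:
    +44% (PMMA), -56% (silicone), +75% (glass) of IOL thickness,
    each plus a fixed 0.04 mm term."""
    factor = {"pmma": 0.44, "silicone": -0.56, "glass": 0.75}[material]
    return al_1532_mm + 0.04 + factor * iol_thickness_mm
```

For example, a 23.0 mm measurement with a 1.0 mm silicone IOL corrects to 23.0 + 0.04 - 0.56 = 22.48 mm.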
Ward, D. B.; Clement, P.; Bostick, K.
2002-02-26
Geostatistical interpolation of groundwater characterization data to visualize contaminant distributions in three dimensions is often hindered by the sparse distribution of samples relative to the size of the plume and scale of heterogeneities. Typically, placement of expensive monitoring wells is guided by the conceptualized plume rather than geostatistical considerations, focusing on contaminated areas rather than thoroughly gridding the plume boundary. The resulting data sets require careful analysis in order to produce plausible plume shells. A purely geostatistical approach is usually impractical; kriging parameters based on the observed data structure can extrapolate contamination far beyond the demonstrated extent of the plume. When more appropriate kriging parameters are selected, holes often occur in the interpolated distribution because realistic kriging ranges may not bridge large gaps between data points. Such artifacts obscure the probable location of the plume boundary and distort the contaminant distribution, obstructing quantitative modeling of remedial strategies. Two methods of constraining kriging can successfully eliminate these geostatistical artifacts. Laterally, the plume boundary may be controlled using a manually constructed mask that delineates the plan-view extent of the plume. After kriging, the mask is used to set all grid cells outside of the plume to a concentration of zero. Use of non-zero control points is a more refined but laborious approach that also bridges data gaps within the body of a plume and permits use of tighter kriging parameters. These can be obtained by manual linear interpolation between measured samples, or derived from historical data migrated along flow paths while accounting for all attenuative processes. Masking and use of non-zero control points result in a plume shell that reflects the intuition and professional judgment of the hydrologist, and can be interpolated automatically to any desired grid, providing
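The masking step described above, setting all kriged grid cells outside the hand-drawn plume boundary to zero, can be sketched for a simple 2-D grid (the list-of-lists layout and function name are illustrative, not from the paper).

```python
def apply_plume_mask(grid, mask):
    """Zero out interpolated concentrations outside the plume boundary.

    After kriging, cells where the manually constructed mask is False
    (outside the delineated plan-view extent of the plume) are set to
    zero. `grid` and `mask` are equally sized 2-D lists (row-major).
    """
    return [[c if inside else 0.0 for c, inside in zip(row, mrow)]
            for row, mrow in zip(grid, mask)]

masked = apply_plume_mask([[1.0, 2.0], [3.0, 4.0]],
                          [[True, False], [False, True]])
```

The non-zero control-point approach mentioned in the abstract would instead add synthetic data points inside the plume before kriging, rather than editing the interpolated grid afterwards.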
NASA Technical Reports Server (NTRS)
Cohen, Martin; Witteborn, Fred C.; Carbon, Duane F.; Davies, John K.; Wooden, Diane H.; Bregman, Jesse D.
1996-01-01
We present five new absolutely calibrated continuous stellar spectra constructed as far as possible from spectral fragments observed from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer. These stars (alpha Boo, gamma Dra, alpha Cet, gamma Cru, and mu UMa) augment our six published, absolutely calibrated spectra of K and early-M giants. All spectra have a common calibration pedigree. A revised composite for alpha Boo has been constructed from higher quality spectral fragments than our previously published one. The spectrum of gamma Dra was created in direct response to the needs of instruments aboard the Infrared Space Observatory (ISO); this star's location near the north ecliptic pole renders it highly visible throughout the mission. We compare all our low-resolution composite spectra with Kurucz model atmospheres and find good agreement in shape, with the obvious exception of the SiO fundamental, still lacking in current grids of model atmospheres. The CO fundamental seems slightly too deep in these models, but this could reflect our use of generic models with solar metal abundances rather than models specific to the metallicities of the individual stars. Angular diameters derived from these spectra and models are in excellent agreement with the best observed diameters. The ratio of our adopted Sirius and Vega models is vindicated by spectral observations. We compare IRAS fluxes predicted from our cool stellar spectra with those observed and conclude that, at 12 and 25 microns, flux densities measured by IRAS should be revised downwards by about 4.1% and 5.7%, respectively, for consistency with our absolute calibration. We have provided extrapolated continuum versions of these spectra to 300 microns, in direct support of ISO (PHT and LWS instruments). These spectra are consistent with IRAS flux densities at 60 and 100 microns.
Waters, Nigel J
2015-01-01
Aims Understanding drug–drug interactions (DDI) is a critical part of the drug development process as polypharmacy has become commonplace in many therapeutic areas including the cancer patient population. The objectives of this study were to investigate cytochrome P450 (CYP)-mediated DDI profiles available for therapies used in the oncology setting and evaluate how models based on in vitro–in vivo extrapolation performed in predicting CYP-mediated DDI risk. Methods A dataset of 125 oncology therapies was collated using drug label and approval history information, incorporating in vitro and clinical PK data. The predictive accuracy of the basic and net effect mechanistic static models was assessed using this oncology drug dataset, for both victim and perpetrator potential of CYP3A-mediated DDI. Results The incidence of CYP3A-mediated interaction potential was 47%, 22% and 11% for substrates, inhibitors and inducers, respectively. The basic models for precipitants gave conservative predictions with no false negatives, whilst the mechanistic static models provided reasonable quantitative predictions (2.3–3-fold error). Further analysis revealed that incorporating DDI at the level of the intestine was in most cases over-predicting interaction magnitude due to overestimates of the rate and extent of oral absorption of the precipitant. Quantifying victim DDI potential was also demonstrated using fmCYP3A estimates from ketoconazole clinical DDI studies to predict the magnitude of interaction on co-administration with the CYP3A inducer, rifampicin (1.6–3.3 fold error). Conclusions This work illustrates the utility and limitations of current DDI risk assessment approaches applied to a range of contemporary anti-cancer agents, and discusses the implications for therapeutic combination strategies. PMID:25443889
Li, Zhaojun; Yang, Hua; Li, Yupeng; Long, Jian; Liang, Yongchao
2014-01-01
There has been increasing concern in recent years regarding lead (Pb) transfer in the soil-plant system. In this study the transfer of Pb (exogenous salts) was investigated from a wide range of Chinese soils to corn grain (Zhengdan 958). Prediction models were developed with combination of the Pb bioconcentration factor (BCF) of Zhengdan 958, and soil pH, organic matter (OM) content, and cation exchange capacity (CEC) through multiple stepwise regressions. Moreover, these prediction models from Zhengdan 958 were applied to other non-model corn species through cross-species extrapolation approach. The results showed that the soil pH and OM were the major factors that controlled Pb transfer from soil to corn grain. The lower pH and OM could improve the bioaccumulation of Pb in corn grain. No significant differences were found between two prediction models derived from the different exogenous Pb contents. When the prediction models were applied to other non-model corn species, the ratio ranges between the predicted BCF values and the measured BCF values were within an interval of 2-fold and close to the solid line of 1∶1 relationship. Moreover, the prediction model i.e. Log[BCF] = −0.098 pH-0.150 log[OM] −1.894 at the treatment of high Pb can effectively reduce the measured BCF intra-species variability for all non-model corn species. These suggested that this prediction model derived from the high Pb content was more adaptable to be applied to other non-model corn species to predict the Pb bioconcentration in corn grain and assess the ecological risk of Pb in different agricultural soils. PMID:24416440
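The abstract above quotes a concrete regression for the high-Pb treatment, Log[BCF] = −0.098 pH − 0.150 log[OM] − 1.894. A minimal sketch of evaluating that model follows; it assumes base-10 logarithms and the OM units used by the study, and the sample pH/OM values are illustrative only, not from the paper.

```python
import math

def predict_bcf(ph, om):
    """Predicted Pb bioconcentration factor for corn grain, using the
    high-Pb regression quoted in the abstract:
        log10(BCF) = -0.098*pH - 0.150*log10(OM) - 1.894
    (base-10 logs and the study's OM units are assumed here)."""
    log_bcf = -0.098 * ph - 0.150 * math.log10(om) - 1.894
    return 10 ** log_bcf

# The abstract says lower pH and OM improve Pb bioaccumulation, so an
# acidic, low-OM soil should give the larger predicted BCF.
acid_low_om = predict_bcf(ph=5.0, om=10.0)
alkaline_high_om = predict_bcf(ph=8.0, om=40.0)
```

Both predicted BCF values come out well below 1, consistent with grain accumulating only a small fraction of soil Pb, and the acidic low-OM soil gives the larger value.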
Monteiro-Riviere, Nancy A.; Samberg, Meghan E.; Oldenburg, Steven J.; Riviere, Jim E.
2013-01-01
Nanoparticles (NP) absorbed in the body will come in contact with blood proteins and form NP/protein complexes termed protein coronas, which may modulate NP cellular uptake. This study quantitated human epidermal keratinocyte (HEK) uptake of silver (Ag) NP complexed to different human serum proteins. Prior to HEK dosing, AgNP (20 nm and 110 nm citrate BioPure™; 40 nm and 120 nm silica-coated) were preincubated for 2 h at 37 °C without (control) or with physiological levels of albumin (44 mg/ml), IgG (14.5 mg/ml) or transferrin (3 mg/ml) to form protein-complexed NP. HEK were exposed to the protein incubated AgNP for 3 h, rinsed and incubated for 24 h, rinsed in buffer and lysed. Ag was assayed by inductively-coupled plasma optical emission spectrometry. Uptake of Ag in HEK was <4.1% of applied dose with proteins suppressing citrate, but not silica coated Ag uptake. IgG exposure dramatically reduced 110 nm citrate AgNP uptake. In contrast, greatest uptake of 20 nm silica AgNP was seen with IgG, while 110 nm silica AgNP showed minimal protein effects. Electron microscopy confirmed cellular uptake of all NP but showed differences in the appearance and agglomeration state of the NP within HEK vacuoles. This work suggests that NP association with different serum proteins, purportedly forming different protein coronas, significantly modulates Ag uptake into HEK compared to native NP uptake, suggesting caution in extrapolating in vitro uptake data to predict behavior in vivo where the nature of the protein corona may determine patterns of cellular uptake, and thus biodistribution, biological activity and toxicity. PMID:23660336
Folding of Layers of Finite Length
NASA Astrophysics Data System (ADS)
Schmid, D. W.; Podladchikov, Yu. Yu.; Marques, F.
All existing folding theories assume that the layers are infinitely long or, which is mathematically equivalent, that the compression is directly applied to the lateral boundaries. These assumptions are not always justified for natural geological systems. In fact we can observe that on all scales, from veins to subducting slabs, the layers are of finite length and that there are no distinct, rigid walls pushing the layers from the side. Using the method of Muskhelishvili we have derived the complete two-dimensional solution of an elliptic object embedded in a matrix and subject to far field boundary conditions: pure shear, simple shear, and arbitrary combinations thereof. The rheology of the matrix is viscous; the layer may behave either elastically or viscously. Using the values from this background state analysis (stress, pressure, and strain rate), we performed the classical linear stability analysis to examine the mechanism of folding in the described setup. The resulting expressions for maximum growth rates and dominant wavelengths are applicable to general geological systems; in the limit of an infinite aspect ratio of the layer the classical expressions of Biot are obtained, while for all other cases new expressions result. Our main results are: 1. Folding of finite length layers is controlled by the ratio of aspect ratio to competence contrast. 2. The described setup explains why in nature only folds with a relatively small wavelength to thickness ratio can be observed, suggesting small viscosity contrast. 3. The problem of the unknown compressive stress value for the elastic layer is solved. 4. For finite length elastic layers the dominant wavelength selection shows a cubic, instead of square, root dependence. 5. A complete table describing the folding in all the possible limits is presented and the applicability to natural systems discussed. All the presented results were checked numerically and/or with analogue models.
Length Scales in Bayesian Automatic Adaptive Quadrature
NASA Astrophysics Data System (ADS)
Adam, Gh.; Adam, S.
2016-02-01
Two conceptual developments in the Bayesian automatic adaptive quadrature approach to the numerical solution of one-dimensional Riemann integrals [Gh. Adam, S. Adam, Springer LNCS 7125, 1-16 (2012)] are reported. First, it is shown that the numerical quadrature which avoids the overcomputing and minimizes the hidden floating point loss of precision asks for the consideration of three classes of integration domain lengths endowed with specific quadrature sums: microscopic (trapezoidal rule), mesoscopic (Simpson rule), and macroscopic (quadrature sums of high algebraic degrees of precision). Second, sensitive diagnostic tools for the Bayesian inference on macroscopic ranges, coming from the use of Clenshaw-Curtis quadrature, are derived.
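The abstract above assigns the trapezoidal rule to microscopic interval lengths and the Simpson rule to mesoscopic ones. A minimal sketch of why the classification pairs longer intervals with higher-degree rules: on a single smooth panel the Simpson sum is far more accurate than the trapezoidal sum. The integrand and interval below are illustrative choices, not taken from the paper.

```python
import math

def trapezoid(f, a, b):
    # One-panel trapezoidal rule (the microscopic-range quadrature sum).
    return 0.5 * (b - a) * (f(a) + f(b))

def simpson(f, a, b):
    # One-panel Simpson rule (the mesoscopic-range quadrature sum).
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

# Smooth integrand with a known integral over [0, 1]: ∫ e^x dx = e - 1.
f = math.exp
exact = math.e - 1.0
err_trap = abs(trapezoid(f, 0.0, 1.0) - exact)
err_simp = abs(simpson(f, 0.0, 1.0) - exact)
```

On this panel the trapezoidal error is roughly 0.14 while the Simpson error is below 10^-3, which is the accuracy gap the three-class scheme exploits.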
Slip length measurement of gas flow.
Maali, Abdelhamid; Colin, Stéphane; Bhushan, Bharat
2016-09-16
In this paper, we present a review of the most important techniques used to measure the slip length of gas flow on isothermal surfaces. First, we present the famous Millikan experiment and then the rotating cylinder and spinning rotor gauge methods. Then, we describe the gas flow rate experiment, which is the most widely used technique to probe a confined gas and measure the slip. Finally, we present a promising technique using an atomic force microscope introduced recently to study the behavior of nanoscale confined gas. PMID:27505860
Slip length measurement of gas flow
NASA Astrophysics Data System (ADS)
Maali, Abdelhamid; Colin, Stéphane; Bhushan, Bharat
2016-09-01
In this paper, we present a review of the most important techniques used to measure the slip length of gas flow on isothermal surfaces. First, we present the famous Millikan experiment and then the rotating cylinder and spinning rotor gauge methods. Then, we describe the gas flow rate experiment, which is the most widely used technique to probe a confined gas and measure the slip. Finally, we present a promising technique using an atomic force microscope introduced recently to study the behavior of nanoscale confined gas.
Flux saturation length of sediment transport
NASA Astrophysics Data System (ADS)
Pähtz, T.; Kok, J. F.
2013-12-01
Sediment transport along the surface ("bedload", "saltation") drives geophysical phenomena as diverse as wind erosion and dune formation. The main length-scale controlling the dynamics of sediment erosion and deposition is the saturation length L, which characterizes the flux response to a change in transport conditions. L partially determines the dynamics of bedforms, such as dunes, for instance by dictating the wavelength of elementary dunes on a sediment surface and the minimal size of crescent-shaped barchan dunes. Here, we present an analytical model predicting L as a function of the average sediment velocity under different physical environments. Our model accounts for both the characteristics of sediment entrainment and the saturation of particle and fluid velocities, and has only two physical parameters which we estimated directly from independent experiments. We show that our model is consistent with measurements of L in both aeolian and subaqueous transport regimes over at least five orders of magnitude in the ratio of fluid and particle density, including on Mars.
The length distribution of frangible biofilaments
NASA Astrophysics Data System (ADS)
Michaels, Thomas C. T.; Yde, Pernille; Willis, Julian C. W.; Jensen, Mogens H.; Otzen, Daniel; Dobson, Christopher M.; Buell, Alexander K.; Knowles, Tuomas P. J.
2015-10-01
A number of different proteins possess the ability to polymerize into filamentous structures. Certain classes of such assemblies can have key functional roles in the cell, such as providing the structural basis for the cytoskeleton in the case of actin and tubulin, while others are implicated in the development of many pathological conditions, including Alzheimer's and Parkinson's diseases. In general, the fragmentation of such structures changes the total number of filament ends, which act as growth sites, and hence is a key feature of the dynamics of filamentous growth phenomena. In this paper, we present an analytical study of the master equation of breakable filament assembly and derive closed-form expressions for the time evolution of the filament length distribution for both open and closed systems with infinite and finite monomer supply, respectively. We use this theoretical framework to analyse experimental data for length distributions of insulin amyloid fibrils and show that our theory allows insights into the microscopic mechanisms of biofilament assembly to be obtained beyond those available from the conventional analysis of filament mass only.
The probabilistic distribution of metal whisker lengths
Niraula, D.; Karpov, V. G.
2015-11-28
Significant reliability concerns in multiple industries are related to metal whiskers, which are random high aspect ratio filaments growing on metal surfaces and causing shorts in electronic packages. We derive a closed form expression for the probabilistic distribution of metal whisker lengths. Our consideration is based on the electrostatic theory of metal whiskers, according to which whisker growth is interrupted when its tip enters a random local “dead region” of a weak electric field. Here, we use the approximation neglecting the possibility of thermally activated escapes from the “dead regions,” which is later justified. We predict a one-parameter distribution with a peak at a length that depends on the metal surface charge density and surface tension. In the intermediate range, it fits well the log-normal distribution used in the experimental studies, although it decays more rapidly in the range of very long whiskers. In addition, our theory quantitatively explains how the typical whisker concentration is much lower than that of surface grains. Finally, it predicts the stop-and-go phenomenon for the growth of some of the whiskers.
The length distribution of frangible biofilaments.
Michaels, Thomas C T; Yde, Pernille; Willis, Julian C W; Jensen, Mogens H; Otzen, Daniel; Dobson, Christopher M; Buell, Alexander K; Knowles, Tuomas P J
2015-10-28
A number of different proteins possess the ability to polymerize into filamentous structures. Certain classes of such assemblies can have key functional roles in the cell, such as providing the structural basis for the cytoskeleton in the case of actin and tubulin, while others are implicated in the development of many pathological conditions, including Alzheimer's and Parkinson's diseases. In general, the fragmentation of such structures changes the total number of filament ends, which act as growth sites, and hence is a key feature of the dynamics of filamentous growth phenomena. In this paper, we present an analytical study of the master equation of breakable filament assembly and derive closed-form expressions for the time evolution of the filament length distribution for both open and closed systems with infinite and finite monomer supply, respectively. We use this theoretical framework to analyse experimental data for length distributions of insulin amyloid fibrils and show that our theory allows insights into the microscopic mechanisms of biofilament assembly to be obtained beyond those available from the conventional analysis of filament mass only. PMID:26520548
The probabilistic distribution of metal whisker lengths
NASA Astrophysics Data System (ADS)
Niraula, D.; Karpov, V. G.
2015-11-01
Significant reliability concerns in multiple industries are related to metal whiskers, which are random high aspect ratio filaments growing on metal surfaces and causing shorts in electronic packages. We derive a closed form expression for the probabilistic distribution of metal whisker lengths. Our consideration is based on the electrostatic theory of metal whiskers, according to which whisker growth is interrupted when its tip enters a random local "dead region" of a weak electric field. Here, we use the approximation neglecting the possibility of thermally activated escapes from the "dead regions," which is later justified. We predict a one-parameter distribution with a peak at a length that depends on the metal surface charge density and surface tension. In the intermediate range, it fits well the log-normal distribution used in the experimental studies, although it decays more rapidly in the range of very long whiskers. In addition, our theory quantitatively explains how the typical whisker concentration is much lower than that of surface grains. Finally, it predicts the stop-and-go phenomenon for the growth of some of the whiskers.
Cellular Mechanisms of Ciliary Length Control
Keeling, Jacob; Tsiokas, Leonidas; Maskey, Dipak
2016-01-01
Cilia and flagella are evolutionarily conserved, membrane-bound, microtubule-based organelles on the surface of most eukaryotic cells. They play important roles in coordinating a variety of signaling pathways during growth, development, cell mobility, and tissue homeostasis. Defects in ciliary structure or function are associated with multiple human disorders called ciliopathies. These diseases affect diverse tissues, including, but not limited to, the eyes, kidneys, brain, and lungs. Many processes must be coordinated simultaneously in order to initiate ciliogenesis. These include cell cycle, vesicular trafficking, and axonemal extension. Centrioles play a central role in both cell cycle progression and ciliogenesis, making the transition between basal bodies and mitotic spindle organizers integral to both processes. The maturation of centrioles involves a functional shift from cell division toward cilium nucleation, which takes place concurrently with their migration and fusion to the plasma membrane. Several proteinaceous structures of the distal appendages in mother centrioles are required for this docking process. Ciliary assembly and maintenance require a precise balance between two indispensable processes, so-called assembly and disassembly. The interplay between them determines the length of the resulting cilia. These processes require a highly conserved transport system to provide the necessary substances at the tips of the cilia and to recycle ciliary turnover products to the base using a microtubule-based intraflagellar transport (IFT) system. In this review, we discuss the stages of ciliogenesis as well as mechanisms controlling the lengths of assembled cilia. PMID:26840332
The length-scaling properties of topography
NASA Technical Reports Server (NTRS)
Weissel, Jeffrey K.; Pratson, Lincoln F.; Malinverno, Alberto
1994-01-01
The scaling properties of synthetic topographic surfaces and digital elevation models (DEMs) of topography are examined by analyzing their 'structure functions,' i.e., the qth order powers of the absolute elevation differences: δh_q(ℓ) = E(|h(x + ℓ) − h(x)|^q). We find that the relation δh_1(ℓ) ≈ c ℓ^H describes well the scaling behavior of natural topographic surfaces, as represented by DEMs gridded at 3 arc sec. Average values of the scaling exponent H between approximately 0.5 and 0.7 characterize DEMs from Ethiopia, Saudi Arabia, and Somalia over a 3 orders of magnitude range in length scale ℓ (approximately 0.1-150 km). Differences in apparent topographic roughness among the three areas most likely reflect differences in the amplitude factor c. Separate determination of scaling properties in the x and y coordinate directions allows us to assess whether scaling exponents are azimuthally dependent (anisotropic) or whether they are isotropic while the surface itself is anisotropic over a restricted range of length scale. We explore ways to determine whether topographic surfaces are characterized by simple or multiscaling properties.
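The structure-function analysis above amounts to computing mean absolute elevation differences at a set of lags and fitting the exponent H from a log-log slope. A minimal sketch of that procedure follows; the test profile is a simple linear ramp (an illustrative choice with a known exponent of exactly 1), not real DEM data.

```python
import math

def structure_function_q1(h, lags):
    """First-order structure function: δh_1(ℓ) = mean(|h[i+ℓ] - h[i]|)."""
    out = []
    for lag in lags:
        diffs = [abs(h[i + lag] - h[i]) for i in range(len(h) - lag)]
        out.append(sum(diffs) / len(diffs))
    return out

def scaling_exponent(lags, s1):
    """Least-squares slope of log δh_1 versus log ℓ, i.e. the exponent H
    in the relation δh_1(ℓ) ≈ c ℓ^H."""
    xs = [math.log(l) for l in lags]
    ys = [math.log(s) for s in s1]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check on a profile with a known exponent: a linear ramp
# h(x) = x has δh_1(ℓ) = ℓ exactly, so H should come out as 1.
profile = [0.01 * i for i in range(1000)]
lags = [1, 2, 4, 8, 16, 32]
H = scaling_exponent(lags, structure_function_q1(profile, lags))
```

On real 1D DEM transects the same two functions would return the H ≈ 0.5-0.7 values the abstract reports, with the fit intercept giving log c.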
46 CFR 42.20-50 - Effective length of superstructure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... standard height shall be its length. (b) In all cases where an enclosed superstructure of standard height... the length modified by the ratio of b/Bs, where: “b” is the breadth of the superstructure at the middle of its length; “Bs” is the breadth of the vessel at the middle of the length of the...
46 CFR 42.20-50 - Effective length of superstructure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... standard height shall be its length. (b) In all cases where an enclosed superstructure of standard height... the length modified by the ratio of b/Bs, where: “b” is the breadth of the superstructure at the middle of its length; “Bs” is the breadth of the vessel at the middle of the length of the...
46 CFR 42.20-50 - Effective length of superstructure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... standard height shall be its length. (b) In all cases where an enclosed superstructure of standard height... the length modified by the ratio of b/Bs, where: “b” is the breadth of the superstructure at the middle of its length; “Bs” is the breadth of the vessel at the middle of the length of the...
46 CFR 42.20-50 - Effective length of superstructure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... standard height shall be its length. (b) In all cases where an enclosed superstructure of standard height... the length modified by the ratio of b/Bs, where: “b” is the breadth of the superstructure at the middle of its length; “Bs” is the breadth of the vessel at the middle of the length of the...
Kujur, Alice; Bajaj, Deepak; Upadhyaya, Hari D.; Das, Shouvik; Ranjan, Rajeev; Shree, Tanima; Saxena, Maneesha S.; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C. L. L.; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K.; Parida, Swarup K.
2015-01-01
The genome-wide discovery and high-throughput genotyping of SNPs in chickpea natural germplasm lines is indispensable to extrapolate their natural allelic diversity, domestication, and linkage disequilibrium (LD) patterns leading to the genetic enhancement of this vital legume crop. We discovered 44,844 high-quality SNPs by sequencing of 93 diverse cultivated desi, kabuli, and wild chickpea accessions using reference genome- and de novo-based GBS (genotyping-by-sequencing) assays that were physically mapped across eight chromosomes of desi and kabuli. Of these, 22,542 SNPs were structurally annotated in different coding and non-coding sequence components of genes. Genes with 3296 non-synonymous and 269 regulatory SNPs could functionally differentiate accessions based on their contrasting agronomic traits. A high experimental validation success rate (92%) and reproducibility (100%) along with strong sensitivity (93–96%) and specificity (99%) of GBS-based SNPs was observed. This infers the robustness of GBS as a high-throughput assay for rapid large-scale mining and genotyping of genome-wide SNPs in chickpea with sub-optimal use of resources. With 23,798 genome-wide SNPs, a relatively high intra-specific polymorphic potential (49.5%) and broader molecular diversity (13–89%)/functional allelic diversity (18–77%) was apparent among 93 chickpea accessions, suggesting their tremendous applicability in rapid selection of desirable diverse accessions/inter-specific hybrids in chickpea crossbred varietal improvement program. The genome-wide SNPs revealed complex admixed domestication pattern, extensive LD estimates (0.54–0.68) and extended LD decay (400–500 kb) in a structured population inclusive of 93 accessions. These findings reflect the utility of our identified SNPs for subsequent genome-wide association study (GWAS) and selective sweep-based domestication trait dissection analysis to identify potential genomic loci (gene-associated targets) specifically
Factors affecting intrauterine contraceptive device performance. I. Endometrial cavity length.
Hasson, H M; Berger, G S; Edelman, D A
1976-12-15
The relationship of endometrial cavity length to intrauterine contraceptive device (IUD) performance was evaluated in 319 patients wearing three types of devices. The rate of events, defined as pregnancy, expulsion, or medical removal, increased significantly when the length of the IUD was equal to, exceeded, or was shorter by two or more centimeters than the length of the endometrial cavity. Total uterine length was found to be a less accurate prognostic indicator of IUD performance than endometrial cavity length alone. PMID:998687
NASA Astrophysics Data System (ADS)
Amore, Paolo; Boyd, John P.; Fernández, Francisco M.; Rösler, Boris
2016-05-01
We apply second order finite differences to calculate the lowest eigenvalues of the Helmholtz equation, for complicated non-tensor domains in the plane, using different grids which sample exactly the border of the domain. We show that the results obtained applying Richardson and Padé-Richardson extrapolations to a set of finite difference eigenvalues corresponding to different grids allow us to obtain extremely precise values. When possible we have assessed the precision of our extrapolations comparing them with the highly precise results obtained using the method of particular solutions. Our empirical findings suggest an asymptotic nature of the FD series. In all the cases studied, we are able to report numerical results which are more precise than those available in the literature.
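The approach above combines second-order finite-difference eigenvalues computed on different grids via Richardson extrapolation. A minimal sketch on a 1D problem with a known answer follows: the lowest eigenvalue of −u'' = λu on [0, π] with Dirichlet ends, whose FD approximation has the closed form (2/h²)(1 − cos h) and whose exact value is 1. The 1D setting and grid sizes are illustrative stand-ins for the paper's 2D Helmholtz domains.

```python
import math

def fd_lowest_eigenvalue(n):
    """Lowest eigenvalue of -u'' = λu on [0, π] (Dirichlet boundary
    conditions), discretized with second-order central differences on
    n interior points; closed form (2/h²)(1 - cos h), exact value 1."""
    h = math.pi / (n + 1)
    return 2.0 / h**2 * (1.0 - math.cos(h))

coarse = fd_lowest_eigenvalue(20)   # grid step h
fine = fd_lowest_eigenvalue(41)     # grid step h/2

# One Richardson step for a second-order method: cancels the O(h²) term,
# leaving an O(h⁴) error.
richardson = (4.0 * fine - coarse) / 3.0
```

The single extrapolation step cuts the error from about 5×10^-4 (fine grid) to below 10^-5; repeating with further grids and Padé-Richardson tables is what yields the very high precision the abstract describes.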
NASA Astrophysics Data System (ADS)
McNiven, Andrea; Kron, Tomas
2004-08-01
A new technique for intensity modulated radiation therapy (IMRT) delivery is helical tomotherapy (HT). Like most IMRT delivery methods, HT utilizes many small fields as part of the treatment plan, which can be difficult to characterize. A novel technique for small field characterization, based on inter- and extrapolation of ion chamber readings, is presented in the context of HT. As a fan beam is characterized by its thickness and output factor, plane parallel chambers with different active volumes were used to scan the fan beam profiles. The fan beam thickness (FBT) can be determined from the thickness measured with the chamber by extrapolating to an infinitesimally small chamber size. The effective output was derived from the integral under the dose profile divided by the FBT. This was done for five FBTs and demonstrated a sharp fall off in dose when the FBT decreased below 8 mm. Similar techniques can be applied to other IMRT techniques to improve the characterization of various beam parameters.
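The key step above is extrapolating readings taken with chambers of different active sizes down to an infinitesimally small chamber. A minimal sketch of that extrapolation (a least-squares line through size/reading pairs, evaluated at size zero) follows; the chamber sizes and broadening model are hypothetical numbers for illustration, not measurements from the paper.

```python
def extrapolate_to_zero(sizes, readings):
    """Least-squares line through (size, reading) pairs; the intercept
    estimates what an infinitesimally small chamber would read."""
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(readings) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(sizes, readings)) / \
            sum((x - mx) ** 2 for x in sizes)
    return my - slope * mx

# Hypothetical measured fan-beam thicknesses (mm) that broaden linearly
# with chamber active size (mm); the underlying true FBT here is 8.0 mm.
chamber_sizes = [1.0, 2.0, 4.0, 6.0]
measured_fbt = [8.0 + 0.5 * s for s in chamber_sizes]
fbt = extrapolate_to_zero(chamber_sizes, measured_fbt)
```

With perfectly linear broadening the intercept recovers the true thickness exactly; with real chamber data the residuals of the fit would indicate how well the linear-broadening assumption holds.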
NASA Astrophysics Data System (ADS)
Su, S. Y.; Lo, N. J.; Chang, W. I.; Huang, K. Y.
2012-07-01
Spatial extrapolation has become a sine qua non and a major research focus in applied ecology in the latter half of the 20th century. Progressive innovations in data acquisition and processing technologies over the last few decades, especially in the fields of 3S (RS, GIS, and GPS) and statistical modeling methods, have greatly enhanced ecologists' capacity to face this challenge by enabling them to describe patterns in nature over larger spatial scales and at a greater level of detail than ever before. Elaeocarpus japonicus (Japanese Elaeocarpus tree, JET) was selected for applying concurrently developed technologies such as ecological distribution modeling and ecological extrapolation. The GPS-located JET samples were introduced into a GIS for overlaying with five environmental layers (elevation, slope, aspect, terrain position, and a vegetation index derived from two-date SPOT-5 images) for ecological information extraction and model building. We created three sampling designs (SD) according to watersheds (Tong-Feng samples for SD1, Kuan-Dau samples for SD2, and the merge of the two former datasets for SD3), and the three SDs were used individually to test the extrapolation ability of predictive models. The results of the two-way extrapolation indicated that it is hard to extend the predicted distribution patterns across different watersheds. The main reasons for this outcome were differences in microclimate and micro-terrain between the two watersheds. Consequently, the models built with SD3 were the most robust. The vegetation index information in this study did little to improve the models, so we will adopt hyperspectral data to overcome the shortcomings of the SPOT-5 images.
Eich, T; Sieglin, B; Scarabosio, A; Fundamenski, W; Goldston, R J; Herrmann, A
2011-11-18
Experimental measurements of the SOL power decay length (λ_q) estimated from analysis of fully attached divertor heat load profiles from two tokamaks, JET and ASDEX Upgrade, are presented. Data were measured by means of infrared thermography. An empirical scaling reveals the parametric dependency λ_q [mm] = 0.73 B_T^(-0.78) q_cyl^(1.2) P_SOL^(0.1) R_geo^(0), where B_T (T) is the toroidal magnetic field, q_cyl the cylindrical safety factor, P_SOL (MW) the power crossing the separatrix, and R_geo (m) the major radius of the device. A comparison of these measurements to a heuristic particle drift-based model shows satisfactory agreement in both absolute magnitude and scaling. Extrapolation to ITER gives λ_q ≃ 1 mm. PMID:22181888
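Since the major radius enters the quoted regression with exponent 0, the scaling reduces to three inputs. A minimal sketch of evaluating it follows; the ITER-like parameter values (B_T = 5.3 T, q_cyl = 3, P_SOL = 100 MW) are assumed round numbers for illustration, not figures stated in the abstract.

```python
def lambda_q_mm(b_t, q_cyl, p_sol):
    """SOL power decay length from the empirical regression quoted in
    the abstract: λ_q [mm] = 0.73 · B_T^-0.78 · q_cyl^1.2 · P_SOL^0.1
    (R_geo has exponent 0 and drops out). Inputs in T and MW."""
    return 0.73 * b_t ** -0.78 * q_cyl ** 1.2 * p_sol ** 0.1

# Illustrative ITER-like inputs (assumed values, not from the abstract).
lam_iter = lambda_q_mm(b_t=5.3, q_cyl=3.0, p_sol=100.0)
```

With these inputs the formula returns roughly 1 mm, consistent with the λ_q ≃ 1 mm extrapolation to ITER quoted above; note the very weak P_SOL^0.1 dependence means the result is dominated by the field and safety-factor terms.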
Quantum criticality with two length scales
NASA Astrophysics Data System (ADS)
Shao, Hui; Guo, Wenan; Sandvik, Anders W.
2016-04-01
The theory of deconfined quantum critical (DQC) points describes phase transitions at absolute temperature T = 0 outside the standard paradigm, predicting continuous transformations between certain ordered states where conventional theory would require discontinuities. Numerous computer simulations have offered no proof of such transitions, instead finding deviations from expected scaling relations that neither were predicted by the DQC theory nor conform to standard scenarios. Here we show that this enigma can be resolved by introducing a critical scaling form with two divergent length scales. Simulations of a quantum magnet with antiferromagnetic and dimerized ground states confirm the form, proving a continuous transition with deconfined excitations and also explaining anomalous scaling at T > 0. Our findings revise prevailing paradigms for quantum criticality, with potential implications for many strongly correlated materials.
Critical length scale controls adhesive wear mechanisms
Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois
2016-01-01
The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270
Instantaneous Slip Length in Superhydrophobic Microchannels
NASA Astrophysics Data System (ADS)
Hemeda, Ahmed; Tafreshi, Hooman; VCU Team
2015-11-01
Superhydrophobic (SHP) surfaces can be used to reduce the skin-friction drag in a microchannel. This favorable effect, however, can deteriorate over time if the surface geometry is not designed properly. This study presents a mathematical means for studying the time-dependent drag-reduction in a microchannel enhanced with SHP grooves of varying geometries. The performance of an SHP groove is found to be dependent on the interplay between the effects of the apparent contact angle of the air-water interface and the initial volume of the groove. The instantaneous slip length is calculated by solving the Navier-Stokes equations for flow in a microchannel with such SHP grooves, and the results are compared with the studies in the literature. National Science Foundation CMMI 1029924 and CBET 1402655.
Optimal Length of Low Reynolds Number Nanopropellers.
Walker, D; Kübler, M; Morozov, K I; Fischer, P; Leshansky, A M
2015-07-01
Locomotion in fluids at the nanoscale is dominated by viscous drag. One efficient propulsion scheme is to use a weak rotating magnetic field that drives a chiral object. From bacterial flagella to artificial drills, the corkscrew is a universally useful chiral shape for propulsion in viscous environments. Externally powered magnetic micro- and nanomotors have been recently developed that allow for precise fuel-free propulsion in complex media. Here, we combine analytical and numerical theory with experiments on nanostructured screw-propellers to show that the optimal length is surprisingly short-only about one helical turn, which is shorter than most of the structures in use to date. The results have important implications for the design of artificial actuated nano- and micropropellers and can dramatically reduce fabrication times, while ensuring optimal performance. PMID:26030270
Quark ensembles with the infinite correlation length
Zinov’ev, G. M.; Molodtsov, S. V.
2015-01-15
A number of exactly integrable (quark) models of quantum field theory with infinite correlation length have been considered. It has been shown that the standard vacuum quark ensemble, the Dirac sea (in the case of space-time dimension higher than three), is unstable because of the strong degeneracy of a state, which is due to the character of the energy distribution. When the momentum cutoff parameter tends to infinity, the distribution becomes infinitely narrow, leading to large (unlimited) fluctuations. Various vacuum ensembles (Dirac sea, neutral ensemble, color superconductor, and BCS state) have been compared. In the case of color interaction between quarks, the BCS state is clearly favored as the ground state of the quark ensemble.
Critical length scale controls adhesive wear mechanisms.
Aghababaei, Ramin; Warner, Derek H; Molinari, Jean-Francois
2016-01-01
The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand the use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270
Critical length scale controls adhesive wear mechanisms
NASA Astrophysics Data System (ADS)
Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois
2016-06-01
The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand the use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients.
Quantum criticality with two length scales.
Shao, Hui; Guo, Wenan; Sandvik, Anders W
2016-04-01
The theory of deconfined quantum critical (DQC) points describes phase transitions at absolute temperature T = 0 outside the standard paradigm, predicting continuous transformations between certain ordered states where conventional theory would require discontinuities. Numerous computer simulations have offered no proof of such transitions, instead finding deviations from expected scaling relations that neither were predicted by the DQC theory nor conform to standard scenarios. Here we show that this enigma can be resolved by introducing a critical scaling form with two divergent length scales. Simulations of a quantum magnet with antiferromagnetic and dimerized ground states confirm the form, proving a continuous transition with deconfined excitations and also explaining anomalous scaling at T > 0. Our findings revise prevailing paradigms for quantum criticality, with potential implications for many strongly correlated materials. PMID:26989196
Implications of decreasing surgical lengths of stay.
Board, N; Caplan, G
2000-01-01
A recent study at the Prince of Wales Hospital (PoW) compared health outcomes and user satisfaction for conventional clinical pathways with a shortened pathway incorporating day-of-surgery admission (DOSA), early discharge, and post-acute-care domiciliary visits for two high-volume elective surgical procedures (herniorrhaphy and laparoscopic cholecystectomy). This paper quantifies cost differences between the control and intervention groups for nursing salaries and wages, other ward costs, pathology, and imaging. The study verified and measured the lower resource use that accompanies a significant reduction in length of stay (LOS). Costs of pre- and post-operative domiciliary visits were calculated and offset against the savings generated by the re-engineered clinical pathway. Average costs per separation were at least $239 (herniorrhaphy) and $265 (laparoscopic cholecystectomy) lower for those on the DOSA pathway with domiciliary post-acute care. PMID:11010580
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even-weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, while all the rest have weights greater than or equal to 16.
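The "weights that are multiples of four" property described above (a doubly-even code) can be checked by brute-force enumeration on a small example. The sketch below uses the [8,4,4] extended Hamming code (Reed-Muller RM(1,3)), a well-known self-dual doubly-even code, as a stand-in for the length-48 code, whose 2^24 codewords would be impractical to list here:

```python
from itertools import product

# Generator matrix of the [8,4,4] extended Hamming code (Reed-Muller RM(1,3)),
# a small self-dual, doubly-even binary code.
G = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]

def codewords(G):
    """Enumerate all 2^k codewords as GF(2) combinations of the generator rows."""
    n = len(G[0])
    for message in product([0, 1], repeat=len(G)):
        word = [0] * n
        for bit, row in zip(message, G):
            if bit:
                word = [w ^ r for w, r in zip(word, row)]  # XOR = addition in GF(2)
        yield word

weights = [sum(w) for w in codewords(G)]
# Doubly even: every codeword weight is a multiple of four.
assert all(wt % 4 == 0 for wt in weights)
```

For this code the weight distribution comes out as one word of weight 0, fourteen of weight 4, and one of weight 8, all divisible by four, mirroring on a small scale the weight structure claimed for the length-48 code.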
Box codes of lengths 48 and 72
NASA Astrophysics Data System (ADS)
Solomon, G.; Jin, Y.
1993-11-01
A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even-weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, while all the rest have weights greater than or equal to 16.
Ciambella, J; Paolone, A; Vidoli, S
2014-09-01
We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012), and a synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials that are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior to higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting, and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are then rated according to the accuracy of the identification and their robustness under extrapolation. It is shown that all kernels whose decay rate is too fast lead to a poor fit and high errors when the material behavior is extrapolated to broader frequency ranges. PMID:24967977
NASA Astrophysics Data System (ADS)
Rudenko, George; Myshyakov, Ivan; Anfinogentov, Sergey
We show that the azimuthal ambiguity in the transverse field of vector magnetograms can be removed satisfactorily, and the magnetic field extrapolated, independently of the region's position on the solar disk. An exact correspondence between the calculated field and the nonpotential loop structure in a near-limb region is demonstrated. The new technique for removing the azimuthal ambiguity consists of the following parts: -translation of the data in the form of artificial Stokes parameters into the working "quasi-spherical" coordinate system, with subsequent smoothing to reduce the noise component of the transverse field, followed by the inverse transformation back to vector form; -FFT extrapolation of the boundary potential field with a constant direction of the oblique derivative corresponding to the observed line-of-sight component in the "quasi-spherical" coordinate system; -modification of Metropolis's minimum-energy method to spherical geometry, with no need for data-grid uniformity. Based on a version of the optimization method from Rudenko and Myshyakov (2009, Solar Phys. V. 257, 28), we use magnetograms corrected with the modified Metropolis method as boundary conditions for magnetic field extrapolation in the nonlinear force-free approximation.
Arheden, H; Hellstrand, P
1991-01-01
1. Mechanical transients in fibre bundles of skinned smooth muscle of guinea-pig taenia coli at 21-22 degrees C were investigated by recording tension responses to length changes of up to 9%, complete within 0.3 ms. 2. The length-force relationship, recorded continuously during rapid stretch of a Ca(2+)-activated contracted muscle, was linear up to at least 2.5 times the isometric force, corresponding to a stretch of about 1%. The slope of the relationship (stiffness) increased with the velocity of stretch. 3. During rapid release (about 120 muscle lengths s-1) the length-force relationship was linear down to about 50% of the initial isometric force, reached at about 80 microseconds after the beginning of the release. At lower force the length-force relationship was concave upwards. The linear portion extrapolated to zero force at about -0.008 muscle lengths. In large releases the length-force plot approached the force baseline under an acute angle, and negative force was transiently exerted. 4. When the muscle was stretched back to the initial length after a shortening step, force transiently rose above the isometric force, but decayed back within a few milliseconds. Stiffness at the time of restretch was compared with that in the initial shortening step by plotting force vs. length, and was found to be decreased to 63% within 0.3 ms of a step to zero force. Stiffness decreased further with time at zero force, and after 256 ms was about 29% of the isometric value. 5. In rigor, caused by the introduction of ATP-free solution during the plateau of isometric contraction, fibre tension decreased to about 30% of the active tension, whereas stiffness relative to force increased; 82% of the initial stiffness in rigor was detected in a restretch immediately after a shortening step, decreasing to 59% at 256 ms. When the fibre was activated at suboptimal [Ca2+] to cause the same force as in rigor, stiffness was lower than in rigor and decreased more after a release. 6
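The extrapolation in point 3 above, fitting the linear portion of the length-force relation and reading off the length at which force reaches zero, can be sketched as a least-squares fit. The sample points below are made up for illustration, chosen only to give an intercept near -0.008 muscle lengths; they are not the paper's data:

```python
def zero_force_intercept(lengths, forces):
    """Least-squares line force = a*length + c; return the length where force = 0."""
    n = len(lengths)
    mx = sum(lengths) / n
    my = sum(forces) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(lengths, forces))
         / sum((x - mx) ** 2 for x in lengths))
    c = my - a * mx
    return -c / a

# Hypothetical samples from the linear portion of a rapid release:
# length change (in muscle lengths) vs. force relative to isometric force.
dl = [0.000, -0.002, -0.004, -0.006]
f = [1.00, 0.75, 0.50, 0.25]
intercept = zero_force_intercept(dl, f)  # about -0.008 muscle lengths
```

The magnitude of this intercept is what sets the apparent series elasticity of the preparation: the smaller it is, the stiffer the attached cross-bridge population.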
NASA Astrophysics Data System (ADS)
Fang, Lei; Yang, Jian
2014-12-01
The Landsat derived differenced Normalized Burn Ratio (dNBR) is widely used for burn severity assessments. Studies of regional wildfire trends in response to climate change require consistency in dNBR mapping across multiple image dates, which may vary in atmospheric condition. Conversion of continuous dNBR images into categorical burn severity maps often requires extrapolation of dNBR thresholds from present fires for which field severity measurements such as Composite Burn Index (CBI) data are available, to historical fires for which CBI data are typically unavailable. Although differential atmospheric effects between image collection dates could lead to biased estimates of historical burn severity patterns, little is known concerning the influence of atmospheric effects on dNBR performance and threshold extrapolation. In this study, we compared the performance of dNBR calculated from six atmospheric correction methods using an optimality approach. The six correction methods included one partial (Top of atmosphere reflectance, TOA), two absolute, and three relative methods. We assessed how the correction methods affected the CBI-dNBR correlation and burn severity mapping in a Chinese boreal forest fire which occurred in 2010. The dNBR thresholds of the 2010 fire for each of the correction methods were then extrapolated to classify a historical fire from 2000. Classification accuracies of threshold extrapolations were assessed based on Cohen's Kappa analysis with 73 field-based validation plots. Our study found most correction methods improved mean dNBR optimality of the two fires. The relative correction methods generated 32% higher optimality than both TOA and absolute correction methods. All the correction methods yielded high CBI-dNBR correlations (mean R2 = 0.847) but distinctly different dNBR thresholds for severity classification of 2010 fire. Absolute correction methods could substantially increase optimality score, but were insufficient to provide a
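The workflow in the abstract above, computing dNBR from pre- and post-fire imagery and then applying severity thresholds, reduces to a band-ratio difference followed by classification. A minimal per-pixel sketch; the reflectance values and threshold numbers are illustrative placeholders, not the thresholds derived in the study:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir)

def burn_severity(pre_nir, pre_swir, post_nir, post_swir,
                  thresholds=(0.1, 0.27, 0.66)):
    """Classify dNBR = NBR_pre - NBR_post into unburned/low/moderate/high.

    The threshold tuple here is a hypothetical example; in practice the cut
    points are calibrated against field CBI plots, as described above.
    """
    dnbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
    low, moderate, high = thresholds
    if dnbr < low:
        return "unburned"
    if dnbr < moderate:
        return "low"
    if dnbr < high:
        return "moderate"
    return "high"

# A vegetated pixel before the fire versus a charred one after:
label = burn_severity(0.5, 0.2, 0.2, 0.5)
```

The atmospheric-correction question studied in the abstract enters upstream of this sketch: it determines the reflectance values fed to `nbr`, and hence whether thresholds calibrated on one fire transfer to another image date.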
Internode length in pisum: do the internode length genes effect growth in dark-grown plants?
Reid, J B
1983-07-01
Internode length in light-grown peas (Pisum sativum L.) is controlled by the interaction of genes occupying at least five major loci: Le, La, Cry, Na, and Lm. The present work shows that the genes at all of the loci examined (Le, Cry, and Na) also exert an effect on internode length in plants grown in complete darkness. Preliminary results using pure lines were verified using either segregating progenies or near-isogenic lines. The differences were due mainly to a change in the number of cells per internode rather than to an alteration of cell length. Since the genes occupying at least two of these loci, Le and Na, have been shown to be directly involved in gibberellin metabolism, it appears that gibberellins are not only essential for elongation in the dark but are limiting for elongation in the nana (extremely short, na), dwarf (Na le), and tall (Na Le) phenotypes. These results are supported by the large inhibitory effects of AMO 1618 treatments on stem elongation in dwarf and tall lines grown in the dark, and by the fact that applied gibberellic acid could overcome this inhibition and greatly promote elongation in a gibberellin-deficient na line. It is clear that the internode length genes, and in particular the alleles at the Le locus, do not act simply by controlling the sensitivity of the plant to light. PMID:16663081
NASA Astrophysics Data System (ADS)
Hsieh, Shang Yu; Neubauer, Franz; Cloetingh, Sierd; Willingshofer, Ernst; Sokoutis, Dimitrios
2014-05-01
The internal structure of major strike-slip faults is still poorly understood, particularly how the deep structure could be inferred from its surface expression (Molnar and Dayem, 2011 and references therein). Previous analogue experiments suggest that the convergence angle is the most influential factor (Leever et al., 2011). Further analogue modeling may allow a better understanding of how to extrapolate surface structures to the subsurface geometry of strike-slip faults. Various scenarios of analogue experiments were designed to represent strike-slip faults in nature from different geological settings. The key parameters investigated in this study include: (a) the angle of convergence, (b) the thickness of the brittle layer, (c) the influence of a rheologically weak layer within the crust, and (d) the influence of a thick and rheologically weak layer at the base of the crust. The latter aimed to simulate the effect of a hot metamorphic core complex or an alignment of uprising plutons bordered by a transtensional/transpressional strike-slip fault. The experiments aim to explain first-order structures along major transcurrent strike-slip faults such as the Altyn, Kunlun, San Andreas, and Greendale (Darfield earthquake, 2010) faults. The preliminary results show that the convergence angle significantly influences the overall geometry of the transpressive system, with greater convergence angles resulting in wider fault zones and higher elevation. Different positions, densities, and viscosities of weak rheological layers have not only different surface expressions but also affect the fault geometry in the subsurface. For instance, rheologically weak material in the bottom layer results in stretching when the experiment reaches a certain displacement and a buildup of a less segmented, wide positive flower structure. At the surface, a wide fault valley in the middle of the fault zone is the reflection of stretching along the velocity discontinuity at depth. In models with a
Effects of anisosmotic stress on cardiac muscle cell length, diameter, area, and sarcomere length
NASA Technical Reports Server (NTRS)
Tanaka, R.; Barnes, M. A.; Cooper, G. 4th; Zile, M. R.
1996-01-01
The purpose of this study was to examine the effects of anisosmotic stress on adult mammalian cardiac muscle cell (cardiocyte) size. Cardiocyte size and sarcomere length were measured in cardiocytes isolated from 10 normal rats and 10 normal cats. Superfusate osmolarity was decreased from 300 +/- 6 to 130 +/- 5 mosM and increased to 630 +/- 8 mosM. Cardiocyte size and sarcomere length increased progressively when osmolarity was decreased, and there were no significant differences between cat and rat cardiocytes with respect to percent change in cardiocyte area or diameter; however, there were significant differences in cardiocyte length (2.8 +/- 0.3% in cat vs. 6.1 +/- 0.3% in rat, P < 0.05) and sarcomere length (3.3 +/- 0.3% in cat vs. 6.1 +/- 0.3% in rat, P < 0.05). To determine whether these species-dependent differences in length were related to diastolic interaction of the contractile elements or differences in relative passive stiffness, cardiocytes were subjected to the osmolarity gradient 1) during treatment with 7 mM 2,3-butanedione monoxime (BDM), which inhibits cross-bridge interaction, or 2) after pretreatment with 1 mM ethylene glycol-bis(beta-aminoethyl ether)-N, N,N',N'-tetraacetic acid (EGTA), a bivalent Ca2+ chelator. Treatment with EGTA or BDM abolished the differences between cat and rat cardiocytes. Species-dependent differences therefore appeared to be related to the degree of diastolic cross-bridge association and not differences in relative passive stiffness. In conclusion, the osmolarity vs. cell size relation is useful in assessing the cardiocyte response to anisosmotic stress and may in future studies be useful in assessing changes in relative passive cardiocyte stiffness produced by pathological processes.
Meson-Baryon Scattering Lengths from Mixed-Action Lattice QCD
Beane, S; Detmold, W; Luu, T; Orginos, K; Parreno, A; Torok, A; Walker-Loud, A
2009-06-30
The π⁺Σ⁺, π⁺Ξ⁰, K⁺p, K⁺n, K̄⁰Σ⁺, and K̄⁰Ξ⁰ scattering lengths are calculated in mixed-action lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy-baryon chiral perturbation theory with two and three flavors of light quarks is used to perform the chiral extrapolations. We find no convergence for the kaon-baryon processes in the three-flavor chiral expansion. Using the two-flavor chiral expansion, we find a(π⁺Σ⁺) = -0.197 ± 0.017 fm and a(π⁺Ξ⁰) = -0.098 ± 0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.