Science.gov

Sample records for extrapolation length

  1. Infrared length scale and extrapolations for the no-core shell model

    NASA Astrophysics Data System (ADS)

    Wendt, K. A.; Forssén, C.; Papenbrock, T.; Sääf, D.

    2015-06-01

    We precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A -body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3 (A -1 ) -dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He,6He,6Li , and 7Li . We also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.
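
    As a minimal illustration of the kind of IR extrapolation discussed in this record, the sketch below fits the exponential form E(L) = E_inf + A exp(-2 k_inf L), a form commonly used for such extrapolations, to energies computed at several effective IR lengths and reads off the extrapolated energy. The lengths, energies, and starting guesses are synthetic placeholders, not data from the paper.

      # Hedged sketch: fit a commonly used IR extrapolation form to synthetic
      # E(L) values and extract the L -> infinity limit. Not the authors' code.
      import numpy as np
      from scipy.optimize import curve_fit

      def ir_model(L, E_inf, A, k_inf):
          # E(L) = E_inf + A * exp(-2 * k_inf * L), an assumed three-parameter form
          return E_inf + A * np.exp(-2.0 * k_inf * L)

      L = np.array([4.0, 5.0, 6.0, 7.0, 8.0])     # effective IR lengths (fm), synthetic
      E = ir_model(L, -28.0, 6.0, 0.25)           # synthetic "calculated" energies (MeV)

      (E_inf, A, k_inf), _ = curve_fit(ir_model, L, E, p0=(-25.0, 1.0, 0.5))
      print(f"extrapolated energy E_inf = {E_inf:.3f} MeV, k_inf = {k_inf:.3f} fm^-1")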

  2. Infrared length scale and extrapolations for the no-core shell model

    DOE PAGES

    Wendt, K. A.; Forssén, C.; Papenbrock, T.; Sääf, D.

    2015-06-03

    In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.

  3. Infrared length scale and extrapolations for the no-core shell model

    SciTech Connect

    Wendt, K. A.; Forssén, C.; Papenbrock, T.; Sääf, D.

    2015-06-03

    In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.

  4. Determination of Extrapolation Distance With Pressure Signatures Measured at Two to Twenty Span Lengths From Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil S.

    2006-01-01

    A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.

  5. Interspecies Extrapolation

    EPA Science Inventory

    Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...

  6. Biosimilars: Extrapolation for oncology.

    PubMed

    Curigliano, Giuseppe; O'Connor, Darran P; Rosenberg, Julie A; Jacobs, Ira

    2016-08-01

    A biosimilar is a biologic that is highly similar to a licensed biologic (the reference product) in terms of purity, safety and efficacy. If the reference product is licensed to treat multiple therapeutic indications, extrapolation of indications, i.e., approval of a biosimilar for use in an indication held by the reference product but not directly studied in a comparative clinical trial with the biosimilar, may be possible but has to be scientifically justified. Here, we describe the data required to establish biosimilarity and emphasize that indication extrapolation is based on scientific principles and known mechanism of action. PMID:27354233

  7. Ecotoxicological effects extrapolation models

    SciTech Connect

    Suter, G.W. II

    1996-09-01

    One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.

  8. The Extrapolation of Elementary Sequences

    NASA Technical Reports Server (NTRS)

    Laird, Philip; Saul, Ronald

    1992-01-01

    We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.

  9. Extrapolation methods for vector sequences

    NASA Technical Reports Server (NTRS)

    Smith, David A.; Ford, William F.; Sidi, Avram

    1987-01-01

    This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
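
    As a rough illustration of how MPE operates, the sketch below applies a textbook formulation of minimal polynomial extrapolation to a small linear fixed-point iteration. The matrix, vector, and iteration count are arbitrary example values, not material from the paper.

      # Minimal MPE sketch: form first differences of the iterates, solve a
      # least-squares problem for the combination coefficients, and return the
      # weighted combination of iterates as the estimated limit.
      import numpy as np

      def mpe(xs):
          """Extrapolate the limit of a vector sequence xs (list of 1-D arrays)."""
          X = np.array(xs, dtype=float)           # iterates as rows, shape (k+2, n)
          U = np.diff(X, axis=0).T                # first differences u_j as columns
          c, *_ = np.linalg.lstsq(U[:, :-1], -U[:, -1], rcond=None)
          c = np.append(c, 1.0)                   # fix the last coefficient to 1
          gamma = c / c.sum()                     # normalized weights
          return gamma @ X[:-1]                   # extrapolated limit estimate

      # Toy usage: x_{j+1} = A x_j + b, whose limit is the fixed point (I - A)^-1 b.
      A = np.array([[0.5, 0.2], [0.1, 0.6]])
      b = np.array([1.0, 2.0])
      xs = [np.zeros(2)]
      for _ in range(5):
          xs.append(A @ xs[-1] + b)
      print(mpe(xs), np.linalg.solve(np.eye(2) - A, b))   # MPE estimate vs exact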

  10. Infrared extrapolations for atomic nuclei

    SciTech Connect

    Furnstahl, R. J.; Hagen, Gaute; Papenbrock, Thomas F.; Wendt, Kyle A.

    2015-01-01

    Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared (IR) scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon–nucleon interactions at next-to-next-to-leading order and show that the IR component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies that are well above the energy minimum, the ultraviolet corrections can be suppressed while IR extrapolations over tens of MeVs are accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed.

  11. Infrared extrapolations for atomic nuclei

    DOE PAGES

    Furnstahl, R. J.; Hagen, Gaute; Papenbrock, Thomas F.; Wendt, Kyle A.

    2015-01-01

    Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared (IR) scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon–nucleon interactions at next-to-next-to-leading order and show that the IR component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies that are well above the energy minimum, the ultraviolet corrections can be suppressed while IR extrapolations over tens of MeVs are accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed.

  12. Wildlife toxicity extrapolations: Dose metric

    SciTech Connect

    Fairbrother, A.; Berg, M. van den

    1995-12-31

    Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate by six panelists of the "correct" approach for determining wildlife toxicity thresholds by examining which dose metric to use for threshold determination and interspecific extrapolation. Since wild animals are exposed to environmental contaminants primarily through ingestion, should threshold values be expressed as the amount of chemical in the diet (e.g., ppm) or as a body weight-adjusted dose (mg/kg/day)? Which of these two approaches is most relevant for ecological risk assessment decision making? Which is best for interspecific extrapolations? Converting from one metric to the other can compound uncertainty if the actual consumption rate of a species is unknown. How should this be dealt with? Is it of sufficient magnitude to be of concern?

  13. A brief survey of extrapolation quadrature.

    SciTech Connect

    Lyness, J. N.; Mathematics and Computer Science

    2000-07-01

    This is a short precis of a presentation on some of the recent advances in the area of extrapolation quadrature given at David Elliott's 65th birthday conference in Hobart in February 1997. Since the dawn of mathematics, historians and others have found many isolated instances of extrapolation being used in numerical calculation. However, the first serious proponent seems to have been Richardson (1923). His technique, also known as the 'deferred approach to the limit,' can be applied to the numerical evaluation of any quantity L which can be defined as a limit, as h approaches zero, of an approximation L(h) when this L(h) has an expansion of the form L(h) = L + a_1 h + a_2 h^2 + ... + a_r h^r + O(h^{r+1}). In other words, the discretization error L(h) - L has a power series expansion in the parameter (usually a step length) h. Richardson suggested his technique particularly for large calculations. Richardson's technique comprised evaluating several relatively poor approximations based on different moderate values of h, and then extrapolating these values to obtain an approximation to L(0). This was proposed as an alternative to using a single, much smaller, value of h. During the subsequent 25 years, Richardson's approach was consistently ignored or misunderstood in environments where the analysis was available and where, in retrospect, the method would have been powerful. But in the second half of the twentieth century, Richardson's idea has been widely exploited in several numerical areas. Many expansions that can be used for extrapolation have been discovered, some of which are displayed here. In the discipline of numerical quadrature, this body of theory is sometimes referred to as extrapolation quadrature. This theory has several aspects. The first, dealt with in this talk, is the establishment of the expansion. But also of significant importance are questions relating to its use: in particular, selecting which values of h
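
    To make the deferred approach to the limit concrete, the sketch below implements Richardson extrapolation under the assumptions stated above: the error expansion of L(h) contains integer powers of h, and the step is halved between evaluations. The test quantity, a one-sided difference quotient for a derivative, is chosen only for illustration.

      # Richardson tableau for L(h) = L + a_1 h + a_2 h^2 + ..., with step halving.
      import math

      def richardson(L_of_h, h0, levels=5):
          """Extrapolate L(h) -> L(0) from evaluations at h0, h0/2, h0/4, ..."""
          T = [[L_of_h(h0 / 2**i)] for i in range(levels)]
          for j in range(1, levels):
              for i in range(j, levels):
                  # eliminate the h^j term of the error expansion
                  T[i].append(T[i][j - 1] + (T[i][j - 1] - T[i - 1][j - 1]) / (2**j - 1))
          return T[-1][-1]

      # Usage: extrapolate a first-order difference quotient for d/dx sin(x) at x = 1.
      f, x = math.sin, 1.0
      approx = richardson(lambda h: (f(x + h) - f(x)) / h, h0=0.1)
      print(approx, math.cos(x))   # extrapolated estimate vs exact derivative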

  14. Extrapolation limitations of multilayer feedforward neural networks

    NASA Technical Reports Server (NTRS)

    Haley, Pamela J.; Soloway, Donald

    1992-01-01

    The limitations of backpropagation used as a function extrapolator were investigated. Four common functions were used to investigate the network's extrapolation capability. The purpose of the experiment was to determine whether neural networks are capable of extrapolation and, if so, to determine the range for which networks can extrapolate. The authors show that neural networks cannot extrapolate and offer an explanation to support this result.

  15. Wildlife toxicity extrapolations: Measurement endpoints

    SciTech Connect

    Fairbrother, A.; Berg, M. van den

    1995-12-31

    Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate by six panelists of the "correct" approach for determining wildlife toxicity thresholds by examining which are the appropriate measurement endpoints. Should only mortality, growth, or reproductive endpoints be used? Since toxicity threshold values may be used to make management decisions, should values related to each measurement endpoint be presented to allow the risk assessor to choose the measurement endpoint most relevant to the assessment questions being asked, or is a standard approach that uses the lowest value that causes a toxicologic response in any system of the animal a more appropriate, conservative estimate?

  16. Aspects of SU(3) baryon extrapolation

    SciTech Connect

    Young, R. D.

    2009-12-17

    We report on a recent chiral extrapolation, based on an SU(3) framework, of octet baryon masses calculated in 2+1-flavour lattice QCD. Here we further clarify the form of the extrapolation, the estimation of the infinite-volume limit, the extracted low-energy constants and the corrections in the strange-quark mass.

  17. AXES OF EXTRAPOLATION IN RISK ASSESSMENTS

    EPA Science Inventory

    Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...

  18. Builtin vs. auxiliary detection of extrapolation risk.

    SciTech Connect

    Munson, Miles Arthur; Kegelmeyer, W. Philip

    2013-02-01

    A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.

  19. Endangered species toxicity extrapolation using ICE models

    EPA Science Inventory

    The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...

  20. Extrapolation procedures in Mott electron polarimetry

    NASA Technical Reports Server (NTRS)

    Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.

    1992-01-01

    In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
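
    The thickness extrapolation described above can be illustrated schematically: measured asymmetries at several gold-foil thicknesses are fitted with one candidate function, A(t) = A0/(1 + a t), and evaluated at zero thickness. The thickness and asymmetry values below are invented, and the fitting function is only one of the forms whose choice the abstract identifies as a source of systematic error.

      # Zero-thickness extrapolation with an assumed fit form A(t) = A0/(1 + a*t);
      # equivalently, 1/A is linear in t, so a straight-line fit suffices.
      import numpy as np

      thickness = np.array([50.0, 100.0, 200.0, 400.0])    # foil thickness (nm), hypothetical
      asymmetry = np.array([0.210, 0.190, 0.160, 0.122])   # measured asymmetries, hypothetical

      slope, intercept = np.polyfit(thickness, 1.0 / asymmetry, 1)
      A0 = 1.0 / intercept                                 # extrapolated single-atom asymmetry
      print(f"zero-thickness asymmetry estimate: {A0:.3f}")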

  1. Essentially nonoscillatory (ENO) reconstructions via extrapolation

    NASA Technical Reports Server (NTRS)

    Suresh, Ambady; Jorgenson, Philip C. E.

    1995-01-01

    In this paper, the algorithm for determining the stencil of a one-dimensional Essentially Nonoscillatory (ENO) reconstruction scheme on a uniform grid is reinterpreted as being based on extrapolation. This view leads to another extension of ENO reconstruction schemes to two-dimensional unstructured triangular meshes. The key idea here is to select several cells of the stencil in one step based on extrapolation rather than one cell at a time. Numerical experiments confirm that the new scheme yields sharp nonoscillatory reconstructions and that it is about five times faster than previous schemes.

  2. Motion Extrapolation in the Central Fovea

    PubMed Central

    Shi, Zhuanghua; Nijhawan, Romi

    2012-01-01

    Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent “correction-for-extrapolation” hypothesis suggests that the absence of forward shifts is caused by sensory signals representing ‘failed’ predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with highest spatial acuity, the fovea. PMID:22438976

  3. Extrapolated implicit-explicit time stepping.

    SciTech Connect

    Constantinescu, E. M.; Sandu, A.; Mathematics and Computer Science; Virginia Polytechnic Inst. and State Univ.

    2010-01-01

    This paper constructs extrapolated implicit-explicit time stepping methods that allow one to efficiently solve problems with both stiff and nonstiff components. The proposed methods are based on Euler steps and can provide very high order discretizations of ODEs, index-1 DAEs, and PDEs in the method-of-lines framework. Implicit-explicit schemes based on extrapolation are simple to construct, easy to implement, and straightforward to parallelize. This work establishes the existence of perturbed asymptotic expansions of global errors, explains the convergence orders of these methods, and studies their linear stability properties. Numerical results with stiff ODE, DAE, and PDE test problems confirm the theoretical findings and illustrate the potential of these methods to solve multiphysics multiscale problems.
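
    The following toy sketch shows the basic ingredient behind such schemes: a first-order IMEX Euler step (stiff term implicit, nonstiff term explicit) is run with step sizes h and h/2, and the results are combined as 2*y_{h/2} - y_h to cancel the leading error term, raising the order from one to two. The test equation, splitting, and parameter values are illustrative choices, not the paper's methods or test problems.

      # Extrapolated IMEX Euler on y' = LAM*(y - sin t) + cos t (exact solution sin t):
      # the stiff relaxation term is implicit, the cos t forcing is explicit.
      import math

      LAM = -1.0e4                      # stiff coefficient

      def imex_euler(y0, t0, t1, n):
          h, y, t = (t1 - t0) / n, y0, t0
          for _ in range(n):
              y = (y + h * math.cos(t) - h * LAM * math.sin(t + h)) / (1.0 - h * LAM)
              t += h
          return y

      t1, n = 1.0, 50
      y_h  = imex_euler(0.0, 0.0, t1, n)
      y_h2 = imex_euler(0.0, 0.0, t1, 2 * n)
      y_ex = 2.0 * y_h2 - y_h           # one extrapolation level: order 1 -> 2
      print(abs(y_h - math.sin(t1)), abs(y_ex - math.sin(t1)))   # errors before/after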

  4. Chiral extrapolation of SU(3) amplitudes

    SciTech Connect

    Ecker, Gerhard

    2011-05-23

    Approximations of chiral SU(3) amplitudes at NNLO are proposed to facilitate the extrapolation of lattice data to the physical meson masses. Inclusion of NNLO terms is essential for investigating convergence properties of chiral SU(3) and for determining low-energy constants in a controllable fashion. The approximations are tested with recent lattice data for the ratio of decay constants F_K/F_π.

  5. Surface dose measurement using TLD powder extrapolation

    SciTech Connect

    Rapley, P. (E-mail: rapleyp@tbh.net)

    2006-10-01

    Surface/near-surface dose measurements in therapeutic x-ray beams are important in determining the dose to the dermal and epidermal skin layers during radiation treatment. Accurate determination of the surface dose is a difficult but important task for proper treatment of patients. A new method of measuring surface dose in phantom through extrapolation of readings from various thicknesses of thermoluminescent dosimeter (TLD) powder has been developed and investigated. A device was designed, built, and tested that provides TLD powder thickness variation to a minimum thickness of 0.125 mm. Variations of the technique have been evaluated to optimize precision with consideration of procedural ease. Results of this study indicate that dose measurements (relative to Dmax) in regions of steep dose gradient in the beam axis direction are possible with a precision (2 standard deviations [SDs]) as good as ±1.2% using the technique. The dosimeter was developed and evaluated using variations of the experimental method. A clinically practical procedure was determined, resulting in a measured surface dose of 20.4 ± 2% of the Dmax dose for a 10 x 10 cm², 80-cm source-to-surface distance (SSD), Theratron 780 cobalt-60 (60Co) beam. Results obtained with TLD powder extrapolation compare favorably to other methods presented in the literature. The TLD powder extrapolation tool has been used clinically at the Northwestern Ontario Regional Cancer Centre (NWORCC) to measure surface dose effects under a number of conditions. Results from these measurements are reported. The method appears to be a simple and economical tool for surface dose measurement, particularly for facilities with TLD powder measurement capabilities.

  6. Wildlife toxicity extrapolations: NOAEL versus LOAEL

    SciTech Connect

    Fairbrother, A.; Berg, M. van den

    1995-12-31

    Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. One is then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems as there is an extreme paucity of data for most chemicals in all but a handful of species. This section continues the debate by six panelists of the "correct" approach for determining wildlife toxicity thresholds by debating which toxicity value should be used for setting threshold criteria. Should the lowest observed adverse effect level (LOAEL) be used, or is it more appropriate to use the no observed adverse effect level (NOAEL)? What are the shortcomings of using either of these point estimates? Should a "benchmark" approach, similar to that proposed for human health risk assessments, be used instead, where an EC5 or EC10 and associated confidence limits are determined and then divided by a safety factor? How should knowledge of the slope of the dose-response curve be incorporated into determination of toxicity threshold values?

  7. Extrapolating Solar Dynamo Models Throughout the Heliosphere

    NASA Astrophysics Data System (ADS)

    Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.

    2014-12-01

    There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model, which arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model, which relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have modeled the behavior of the Convective and Babcock-Leighton dynamos, respectively, in the convective zone of the sun. These simulations describe the behavior of the models within the sun, while much less is known about the effects these models may have farther from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the sun and extending into the solar system, via potential field source surface extrapolations implemented in Python code that operates on data from CASH and BASH. Real solar data are also used to visualize supergranular flow in the BASH model to learn more about the behavior of the Babcock-Leighton dynamo. From these extrapolations it has been determined that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much farther into the heliosphere before reverting to a basic dipole field, providing 3D visualizations of the models far from the sun.

  8. Extrapolation methods for dynamic partial differential equations

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1978-01-01

    Several extrapolation procedures are presented for increasing the order of accuracy in time for evolutionary partial differential equations. These formulas are based on finite difference schemes in both the spatial and temporal directions. On practical grounds the methods are restricted to schemes that are fourth order in time and either second, fourth or sixth order in space. For hyperbolic problems the second order in space methods are not useful while the fourth order methods offer no advantage over the Kreiss-Oliger method unless very fine meshes are used. Advantages are first achieved using sixth order methods in space coupled with fourth order accuracy in time. Computational results are presented confirming the analytic discussions.

  9. The Role of Motion Extrapolation in Amphibian Prey Capture

    PubMed Central

    2015-01-01

    Sensorimotor delays decouple behaviors from the events that drive them. The brain compensates for these delays with predictive mechanisms, but the efficacy and timescale over which these mechanisms operate remain poorly understood. Here, we assess how prediction is used to compensate for prey movement that occurs during visuomotor processing. We obtained high-speed video records of freely moving, tongue-projecting salamanders catching walking prey, emulating natural foraging conditions. We found that tongue projections were preceded by a rapid head turn lasting ∼130 ms. This motor lag, combined with the ∼100 ms phototransduction delay at photopic light levels, gave a ∼230 ms visuomotor response delay during which prey typically moved approximately one body length. Tongue projections, however, did not significantly lag prey position but were highly accurate instead. Angular errors in tongue projection accuracy were consistent with a linear extrapolation model that predicted prey position at the time of tongue contact using the average prey motion during a ∼175 ms period one visual latency before the head movement. The model explained successful strikes where the tongue hit the fly, and unsuccessful strikes where the fly turned and the tongue hit a phantom location consistent with the fly's earlier trajectory. The model parameters, obtained from the data, agree with the temporal integration and latency of retinal responses proposed to contribute to motion extrapolation. These results show that the salamander predicts future prey position and that prediction significantly improves prey capture success over a broad range of prey speeds and light levels. SIGNIFICANCE STATEMENT Neural processing delays cause actions to lag behind the events that elicit them. To cope with these delays, the brain predicts what will happen in the future. While neural circuits in the retina and beyond have been suggested to participate in such predictions, few behaviors have been
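
    The sketch below gives a toy version of the linear extrapolation model described above: the prey's velocity is averaged over a fixed window ending one visual latency before the strike, and its position is extrapolated forward to the time of tongue contact. The latency, window length, and trajectory are illustrative numbers, not the fitted parameters from the paper.

      # Linear motion extrapolation: predict position at t_strike from the mean
      # velocity in a window sampled one visuomotor latency earlier.
      import numpy as np

      LATENCY = 0.230       # assumed visuomotor delay (s)
      WINDOW  = 0.175       # assumed velocity-averaging window (s)

      def predict_position(times, positions, t_strike):
          t_end = t_strike - LATENCY                              # last usable visual sample
          mask = (times >= t_end - WINDOW) & (times <= t_end)
          v = np.polyfit(times[mask], positions[mask], 1)[0]      # mean velocity in window
          return positions[mask][-1] + v * (t_strike - times[mask][-1])

      # Prey walking at a constant 30 mm/s, sampled at 100 Hz; strike at t = 0.5 s.
      t = np.arange(0.0, 0.5, 0.01)
      x = 30.0 * t
      print(predict_position(t, x, t_strike=0.5), 30.0 * 0.5)     # predicted vs actual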

  10. MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1994-01-01

    The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been

  11. Frequency extrapolation by nonconvex compressive sensing

    SciTech Connect

    Chartrand, Rick; Sidky, Emil Y; Pan, Xiaochaun

    2010-12-03

    Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples is available. We apply the compressive sensing principle to a somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.

  12. Extrapolation of acute toxicity across bee species.

    PubMed

    Thompson, Helen

    2016-10-01

    In applying cross-species extrapolation safety factors from honeybees to other bee species, some basic principles of toxicity have not been included, for example, the importance of body mass in determining a toxic dose. The present study re-analyzed published toxicity data, taking into account the reported mass of the individuals in the identified species. The analysis demonstrated a shift to the left in the distribution of sensitivity of honeybees relative to 20 other bee species when body size is taken into account, with the 95th percentile for contact and oral toxicity reducing from 10.7 (based on μg/individual bee) to 5.0 (based on μg/g bodyweight). Such an approach results in the real drivers of species differences in sensitivity, such as variability in absorption, distribution, metabolism, and excretion, and in target-receptor binding, being more realistically reflected in the revised safety factor. Body mass can also be used to underpin the other parameter of first-tier risk assessment, that is, exposure. However, the key exposure factors that cannot be predicted from bodyweight are the effects of the ecology and behavior of the different species on exposure to a treated crop. Further data are required to understand the biology of species associated with agricultural crops and the potential consequences of effects on individuals at the level of the colony or bee populations. This information will allow the development of appropriate higher-tier refinement of risk assessments and testing strategies rather than extensive additional toxicity testing at Tier 1. Integr Environ Assess Manag 2016;12:622-626. © 2015 SETAC. PMID:26595163
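
    The body-mass adjustment at the heart of this analysis can be sketched in a few lines: per-individual doses (μg/bee) are divided by body mass to give body-weight-normalized doses (μg/g) before species sensitivities are compared. The species labels, masses, and LD50 values below are hypothetical placeholders, not the data re-analyzed in the paper.

      # Convert per-individual doses to body-weight-normalized doses for comparison.
      species = {
          # label: (LD50 in ug/bee, body mass in g) -- hypothetical values
          "species A": (0.10, 0.116),
          "species B": (0.25, 0.250),
          "species C": (0.06, 0.090),
      }

      for name, (ld50_per_bee, mass_g) in species.items():
          ld50_per_g = ld50_per_bee / mass_g        # ug of chemical per g body weight
          print(f"{name}: {ld50_per_bee:.3f} ug/bee  ->  {ld50_per_g:.2f} ug/g")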

  13. Extrapolation of acute toxicity across bee species.

    PubMed

    Thompson, Helen

    2016-10-01

    In applying cross-species extrapolation safety factors from honeybees to other bee species, some basic principles of toxicity have not been included, for example, the importance of body mass in determining a toxic dose. The present study re-analyzed published toxicity data, taking into account the reported mass of the individuals in the identified species. The analysis demonstrated a shift to the left in the distribution of sensitivity of honeybees relative to 20 other bee species when body size is taken into account, with the 95th percentile for contact and oral toxicity reducing from 10.7 (based on μg/individual bee) to 5.0 (based on μg/g bodyweight). Such an approach results in the real drivers of species differences in sensitivity, such as variability in absorption, distribution, metabolism, and excretion, and in target-receptor binding, being more realistically reflected in the revised safety factor. Body mass can also be used to underpin the other parameter of first-tier risk assessment, that is, exposure. However, the key exposure factors that cannot be predicted from bodyweight are the effects of the ecology and behavior of the different species on exposure to a treated crop. Further data are required to understand the biology of species associated with agricultural crops and the potential consequences of effects on individuals at the level of the colony or bee populations. This information will allow the development of appropriate higher-tier refinement of risk assessments and testing strategies rather than extensive additional toxicity testing at Tier 1. Integr Environ Assess Manag 2016;12:622-626. © 2015 SETAC.

  14. CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS

    EPA Science Inventory

    Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...

  15. An algorithm for a generalization of the Richardson extrapolation process

    NASA Technical Reports Server (NTRS)

    Ford, William F.; Sidi, Avram

    1987-01-01

    The paper presents a recursive method, designated the W^(m)-algorithm, for implementing a generalization of the Richardson extrapolation process. Compared to the direct solution of the linear systems of equations defining the extrapolation procedure, this method requires a small number of arithmetic operations and very little storage. The technique is also applied to solve recursively the coefficient problem associated with the rational approximations obtained by applying a d-transformation to power series. In the course of the development, a new recursive algorithm for implementing a very general extrapolation procedure is introduced for solving the same problem. A FORTRAN program for the W^(m)-algorithm is also appended.

  16. 3D Hail Size Distribution Interpolation/Extrapolation Algorithm

    NASA Technical Reports Server (NTRS)

    Lane, John

    2013-01-01

    Radar data can usually detect hail; however, it is difficult for present day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.

  17. Extrapolation technique pitfalls in asymmetry measurements at colliders

    NASA Astrophysics Data System (ADS)

    Colletti, Katrina; Hong, Ziqing; Toback, David; Wilson, Jonathan S.

    2016-09-01

    Asymmetry measurements are common in collider experiments and can sensitively probe particle properties. Typically, data can only be measured in a finite region covered by the detector, so an extrapolation from the visible asymmetry to the inclusive asymmetry is necessary. Often a constant multiplicative factor is advantageous for the extrapolation and this factor can be readily determined using simulation methods. However, there is a potential, avoidable pitfall involved in the determination of this factor when the asymmetry in the simulated data sample is small. We find that to obtain a reliable estimate of the extrapolation factor, the number of simulated events required rises as the inverse square of the simulated asymmetry; this can mean that an unexpectedly large sample size is required when determining the extrapolation factor.
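
    The scaling noted in the final sentence follows from simple counting statistics, as the back-of-the-envelope sketch below shows: the statistical uncertainty on an asymmetry A = (N+ - N-)/N is sqrt((1 - A^2)/N), so holding the relative precision of A (and hence of an extrapolation factor built from simulated asymmetries) fixed requires a sample size that grows roughly as 1/A^2. The target precision used is an arbitrary example value.

      # Events needed for a fixed relative precision on a small simulated asymmetry.
      def events_needed(asymmetry, rel_precision=0.10):
          """N such that sigma_A / A = rel_precision, with sigma_A = sqrt((1-A^2)/N)."""
          return (1.0 - asymmetry**2) / (asymmetry * rel_precision) ** 2

      for a in (0.10, 0.03, 0.01):
          print(f"A = {a:5.2f}: ~{events_needed(a):.2e} simulated events")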

  18. Ecotoxicological effects assessment: A comparison of several extrapolation procedures

    SciTech Connect

    Okkerman, P.C.; v.d. Plassche, E.J.; Slooff, W.; Van Leeuwen, C.J.; Canton, J.H. (Bilthoven)

    1991-04-01

    In the future, extrapolation procedures will become more and more important for the effect assessment of compounds in aquatic systems. To achieve a reliable method, these extrapolation procedures have to be evaluated thoroughly. As a first step, three extrapolation procedures are compared by means of two sets of data, consisting of (semi)chronic and acute toxicity test results for 11 aquatic species and 8 compounds. Because of its statistical basis, the extrapolation procedure of Van Straalen and Denneman is preferred over the procedures of the EPA and Stephan et al. The results of the calculations showed that lower numbers of toxicity data increase the chance of underestimating the risk of a compound. Therefore it is proposed to extend the OECD guidelines for algae, Daphnia, and fish with chronic (aquatic) toxicity tests for more species of different taxonomic groups.

  19. Role of animal studies in low-dose extrapolation

    SciTech Connect

    Fry, R.J.M.

    1981-01-01

    Current data indicate that in the case of low-LET radiation, linear extrapolation from data obtained at high doses appears to overestimate the risk at low doses to a varying degree. In the case of high-LET radiation, extrapolation from data obtained at doses as low as 40 rad (0.4 Gy) is inappropriate and likely to result in an underestimate of the risk.

  20. Wildlife toxicity extrapolations: Allometry versus physiologically-based toxicokinetics

    SciTech Connect

    Fairbrother, A.; Berg, M. van den

    1995-12-31

    Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. The authors are then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems as there is an extreme paucity of data for most chemicals in all but a handful of species. The question arises of how interspecific extrapolations should be made. Should extrapolations be limited to animals within the same class, order, family, or genus? Alternatively, should extrapolations be made along trophic levels or physiologic similarities rather than by taxonomic classification? In other words, is an avian carnivore more like a mammalian carnivore or an avian granivore in its response to a toxic substance? Can general rules be set, or does the type of extrapolation depend upon the class of chemical and its mode of uptake and toxicologic effect?

  1. Testing the hydrologic utility of geologic frameworks for extrapolating hydraulic properties across large scales

    NASA Astrophysics Data System (ADS)

    Mirus, B. B.; Halford, K. J.; Sweetkind, D. S.; Fenelon, J.

    2014-12-01

    The utility of geologic frameworks for extrapolating hydraulic conductivities to length scales that are commensurate with hydraulic data has been assessed at the Nevada National Security Site in highly-faulted volcanic rocks. Observed drawdowns from eight, large-scale, aquifer tests on Pahute Mesa provided the necessary constraints to test assumed relations between hydraulic conductivity and interpretations of the geology. The investigated volume of rock encompassed about 40 cubic miles where drawdowns were detected more than 2 mi from pumping wells and traversed major fault structures. Five sets of hydraulic conductivities at about 500 pilot points were estimated by simultaneously interpreting all aquifer tests with a different geologic framework for each set. Each geologic framework was incorporated as prior information that assumed homogeneous hydraulic conductivities within each geologic unit. Complexity of the geologic frameworks ranged from an undifferentiated mass of rock with a single unit to 14 unique geologic units. Analysis of the model calibrations showed that a maximum of four geologic units could be differentiated where each was hydraulically unique as defined by the mean and standard deviation of log-hydraulic conductivity. Consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation were evaluated qualitatively with maps of transmissivity. Distributions of transmissivity were similar within the investigated extents regardless of geologic framework except for a transmissive streak along a fault in the Fault-Structure framework. Extrapolation was affected by underlying geologic frameworks where the variability of transmissivity increased as the number of units increased.

  2. Implicit extrapolation methods for multilevel finite element computations

    SciTech Connect

    Jung, M.; Ruede, U.

    1994-12-31

    The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore neither requires uniform meshes nor global regularity assumptions. In the paper the authors will analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results will be compared to higher-order finite element solutions.

  3. Rule-based extrapolation: a continuing challenge for exemplar models.

    PubMed

    Denton, Stephen E; Kruschke, John K; Erickson, Michael A

    2008-08-01

    Erickson and Kruschke (1998, 2002) demonstrated that in rule-plus-exception categorization, people generalize category knowledge by extrapolating in a rule-like fashion, even when they are presented with a novel stimulus that is most similar to a known exception. Although exemplar models have been found to be deficient in explaining rule-based extrapolation, Rodrigues and Murre (2007) offered a variation of an exemplar model that was better able to account for such performance. Here, we present the results of a new rule-plus-exception experiment that yields rule-like extrapolation similar to that of previous experiments, and yet the data are not accounted for by Rodrigues and Murre's augmented exemplar model. Further, a hybrid rule-and-exemplar model is shown to better describe the data. Thus, we maintain that rule-plus-exception categorization continues to be a challenge for exemplar-only models. PMID:18792504

  4. Chiral Extrapolation of Lattice Data for Heavy Meson Hyperfine Splittings

    SciTech Connect

    X.-H. Guo; P.C. Tandy; A.W. Thomas

    2006-03-01

    We investigate the chiral extrapolation of the lattice data for the light-heavy meson hyperfine splittings D*-D and B*-B to the physical region for the light quark mass. The chiral loop corrections providing non-analytic behavior in m_π are consistent with chiral perturbation theory for heavy mesons. Since chiral loop corrections tend to decrease the already too low splittings obtained from linear extrapolation, we investigate two models to guide the form of the analytic background behavior: the constituent quark potential model, and the covariant model of QCD based on the ladder-rainbow truncation of the Dyson-Schwinger equations. The extrapolated hyperfine splittings remain clearly below the experimental values even allowing for the model dependence in the description of the analytic background.

  5. Efficient implementation of minimal polynomial and reduced rank extrapolation methods

    NASA Technical Reports Server (NTRS)

    Sidi, Avram

    1990-01-01

    The minimal polynomial extrapolation (MPE) and reduced rank extrapolation (RRE) are two effective techniques that have been used in accelerating the convergence of vector sequences, such as those that are obtained from iterative solution of linear and nonlinear systems of equation. Their definitions involve some linear least squares problems, and this causes difficulties in their numerical implementation. Timewise efficient and numerically stable implementations for MPE and RRE are developed. A computer program written in FORTRAN 77 is also appended and applied to some model problems.

  6. MULTIPLE SOLVENT EXPOSURE IN HUMANS: CROSS-SPECIES EXTRAPOLATIONS

    EPA Science Inventory

    Multiple Solvent Exposures in Humans: Cross-Species Extrapolations (Future Research Plan)

    Vernon A. Benignus, Philip J. Bushnell, and William K. Boyes

    A few solvents can be safely studied in acute experiments in human subjects. Data exist in rats f...

  7. Properties of infrared extrapolations in a harmonic oscillator basis

    NASA Astrophysics Data System (ADS)

    Coon, Sidney A.; Kruse, Michael K. G.

    2016-02-01

    The success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r^2, considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.

  8. Analytic Approximations for the Extrapolation of Lattice Data

    SciTech Connect

    Masjuan, Pere

    2010-12-22

    We present analytic approximations of chiral SU(3) amplitudes for the extrapolation of lattice data to the physical masses and the determination of Next-to-Next-to-Leading-Order low-energy constants. Lattice data for the ratio F_K/F_π is used to test the method.

  9. An objective analysis technique for extrapolating tidal fields

    NASA Technical Reports Server (NTRS)

    Sanchez, B. V.

    1984-01-01

    An interpolation technique which allows accurate extrapolation of tidal height fields in the ocean basins by making use of selected satellite altimetry measurements and/or conventional gauge measurements was developed and tested. A normal mode solution for the Atlantic and Indian Oceans was obtained by means of a finite difference grid. Normal mode amplitude maps are presented.

  10. Extrapolation of supersymmetry-breaking parameters to high energy scales

    SciTech Connect

    Stephen P Martin

    2002-11-07

    The author studies how well one can extrapolate the values of supersymmetry-breaking parameters to very high energy scales using future data from the Large Hadron Collider and an e+e- linear collider. He considers tests of the unification of squark and slepton masses in supergravity-inspired models. In gauge-mediated supersymmetry breaking models, he assesses the ability to measure the mass scales associated with supersymmetry breaking. He also shows that it is possible to get good constraints on a scalar cubic stop-stop-Higgs coupling near the high scale. Different assumptions with varying levels of optimism about the accuracy of input parameter measurements are made, and their impact on the extrapolated results is documented.

  11. Extrapolated gradientlike algorithms for molecular dynamics and celestial mechanics simulations.

    PubMed

    Omelyan, I P

    2006-09-01

    A class of symplectic algorithms is introduced to integrate the equations of motion in many-body systems. The algorithms are derived on the basis of an advanced gradientlike decomposition approach. Its main advantage over the standard gradient scheme is the avoidance of time-consuming evaluations of force gradients by force extrapolation without any loss of precision. As a result, the efficiency of the integration improves significantly. The algorithms obtained are analyzed and optimized using an error-function theory. The best among them are tested in actual molecular dynamics and celestial mechanics simulations for comparison with well-known nongradient and gradient algorithms such as the Störmer-Verlet, Runge-Kutta, Cowell-Numerov, Forest-Ruth, Suzuki-Chin, and others. It is demonstrated that for moderate and high accuracy, the extrapolated algorithms should be considered as the most efficient for the integration of motion in molecular dynamics simulations. PMID:17025782

  13. Population movement studied at microscale: experience and extrapolation.

    PubMed

    Chapman, M

    1987-12-01

    The context for this paper is the nature of generalization deriving from the study of particular cases of human behavior. With specific reference to field research on population movement conducted among individuals, households, small groups, and village communities in 3rd world societies, it challenges the convention that both generalization and extrapolation are based inevitably and exclusively on the number of events subject to examination. An evaluation is made of the methodological aspects of 4 different studies of population mobility at microscale, undertaken between 1965 and 1977 in the Solomon Islands and northwest Thailand. On this basis, integrated field designs that incorporate a range of intersecting instruments are favored for their technical flexibility and logical strength. With case studies of 3rd world villages, market centers, and urban neighborhoods, generalization and extrapolation is based on depth of understanding and power of theoretical connections. PMID:12315702

  14. Specification of mesospheric density, pressure, and temperature by extrapolation

    NASA Technical Reports Server (NTRS)

    Graves, M. E.; Low, Y. S.; Miller, A. H.

    1973-01-01

    A procedure is presented which employs an extrapolation technique to obtain estimates of density, pressure, and temperature up to 90 km from 52 km data. The resulting errors are investigated. The procedure is combined with a special temperature interpolation method around the stratopause to produce such estimates at eight levels between 36 km and 90 km from North American sectional chart data at 5, 2, and 0.4 mb. The charts were processed to obtain mean values and standard deviations at grid points for midseasonal months from 1964 to 1966. The mean values were compared with Groves' model, and internal consistency tests were performed upon the statistics. Through application of the extrapolation procedure, the atmospheric structure of a stratospheric warming event is studied.

  15. A regularization method for extrapolation of solar potential magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.

  16. Extrapolation techniques applied to matrix methods in neutron diffusion problems

    NASA Technical Reports Server (NTRS)

    Mccready, Robert R

    1956-01-01

    A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.

  17. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    NASA Astrophysics Data System (ADS)

    Mirus, Benjamin B.; Halford, Keith; Sweetkind, Don; Fenelon, Joe

    2016-08-01

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  18. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    DOE PAGES

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-02-18

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  19. Testing the suitability of geologic frameworks for extrapolating hydraulic properties across regional scales

    USGS Publications Warehouse

    Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.

    2016-01-01

    The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.

  20. An efficient extrapolation to the (T)/CBS limit

    NASA Astrophysics Data System (ADS)

    Ranasinghe, Duminda S.; Barnes, Ericka C.

    2014-05-01

    We extrapolate to the perturbative triples (T)/complete basis set (CBS) limit using double ζ basis sets without polarization functions (Wesleyan-1-Triples-2ζ or "Wes1T-2Z") and triple ζ basis sets with a single level of polarization functions (Wesleyan-1-Triples-3ζ or "Wes1T-3Z"). These basis sets were optimized for 102 species representing the first two rows of the Periodic Table. The species include the entire set of neutral atoms, positive and negative atomic ions, as well as several homonuclear diatomic molecules, hydrides, rare gas dimers, polar molecules, such as oxides and fluorides, and a few transition states. The extrapolated Wes1T-(2,3)Z triples energies agree with (T)/CBS benchmarks to within ±0.65 mEh, while the rms deviations of comparable model chemistries W1, CBS-APNO, and CBS-QB3 for the same test set are ±0.23 mEh, ±2.37 mEh, and ±5.80 mEh, respectively. The Wes1T-(2,3)Z triples calculation time for the largest hydrocarbon in the G2/97 test set, C6H5Me+, is reduced by a factor of 25 when compared to W1. The cost-effectiveness of the Wes1T-(2,3)Z extrapolation validates the usefulness of the Wes1T-2Z and Wes1T-3Z basis sets which are now available for a more efficient extrapolation of the (T) component of any composite model chemistry.
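
    As a rough illustration of the kind of two-point basis-set extrapolation involved, the sketch below applies a generic inverse-cubic formula to hypothetical correlation energies; the exponent, weights, and example energies are assumptions for illustration and are not the published Wes1T-(2,3)Z prescription.

```python
# A minimal sketch of a generic two-point complete-basis-set (CBS) extrapolation.
# The inverse-cubic form E(n) = E_CBS + A * n**(-3) is a common choice for
# correlation energies; the actual Wes1T-(2,3)Z scheme may use different
# exponents or weights.

def cbs_two_point(e_small, e_large, n_small=2, n_large=3, power=3):
    """Extrapolate E(n) = E_CBS + A * n**(-power) from two basis-set levels."""
    w_small = n_small ** power
    w_large = n_large ** power
    return (w_large * e_large - w_small * e_small) / (w_large - w_small)

# Hypothetical correlation energies (hartree) at the 2-zeta and 3-zeta levels:
print(cbs_two_point(-0.2450, -0.2610))  # -> extrapolated CBS-like estimate
```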

  1. Extrapolation and direct matching mediate anticipation in infancy.

    PubMed

    Green, Dorota; Kochukhova, Olga; Gredebäck, Gustaf

    2014-02-01

    Why are infants able to anticipate occlusion events and other people's actions but not the movement of self-propelled objects? This study investigated infant and adult anticipatory gaze shifts during observation of self-propelled objects and human goal-directed actions. Six-month-old infants anticipated self-propelled balls but not human actions. This demonstrates that different processes mediate the ability to anticipate human actions (direct matching) versus self-propelled objects (extrapolation).

  2. Limitations on wind-tunnel pressure signature extrapolation

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Darden, Christine M.

    1992-01-01

    Analysis of some recent experimental sonic boom data has revived the hypothesis that there is a closeness limit to the near-field separation distance from which measured wind tunnel pressure signatures can be extrapolated to the ground as though generated by a supersonic-cruise aircraft. Geometric acoustic theory is used to derive an estimate of this distance and the sample data is used to provide a preliminary indication of practical separation distance values.

  3. An efficient extrapolation to the (T)/CBS limit

    SciTech Connect

    Ranasinghe, Duminda S.; Barnes, Ericka C.

    2014-05-14

    We extrapolate to the perturbative triples (T)/complete basis set (CBS) limit using double ζ basis sets without polarization functions (Wesleyan-1-Triples-2ζ or “Wes1T-2Z”) and triple ζ basis sets with a single level of polarization functions (Wesleyan-1-Triples-3ζ or “Wes1T-3Z”). These basis sets were optimized for 102 species representing the first two rows of the Periodic Table. The species include the entire set of neutral atoms, positive and negative atomic ions, as well as several homonuclear diatomic molecules, hydrides, rare gas dimers, polar molecules, such as oxides and fluorides, and a few transition states. The extrapolated Wes1T-(2,3)Z triples energies agree with (T)/CBS benchmarks to within ±0.65 mE{sub h}, while the rms deviations of comparable model chemistries W1, CBS-APNO, and CBS-QB3 for the same test set are ±0.23 mE{sub h}, ±2.37 mE{sub h}, and ±5.80 mE{sub h}, respectively. The Wes1T-(2,3)Z triples calculation time for the largest hydrocarbon in the G2/97 test set, C{sub 6}H{sub 5}Me{sup +}, is reduced by a factor of 25 when compared to W1. The cost-effectiveness of the Wes1T-(2,3)Z extrapolation validates the usefulness of the Wes1T-2Z and Wes1T-3Z basis sets which are now available for a more efficient extrapolation of the (T) component of any composite model chemistry.

  4. A simple extrapolation of thermodynamic perturbation theory to infinite order

    SciTech Connect

    Ghobadi, Ahmadreza F.; Elliott, J. Richard

    2015-09-21

    Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A{sub 3}/A{sub 2}, where A{sub i} is the ith order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed Chain - Statistical Associating Fluid Theory (PC-SAFT).
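
    The following is a minimal sketch of the idea of extrapolating a perturbation series to infinite order from the highest available term; it simply assumes the ratio between successive orders stays fixed (a geometric resummation), whereas the published scheme models the density dependence of that ratio. All numbers are hypothetical.

```python
# A minimal sketch: sum the known perturbation contributions, then approximate the
# remaining (unknown) orders by assuming the ratio between successive orders stays
# at its last observed value, so the tail is a geometric series. This illustrates
# the idea of extrapolating to infinite order, not the published recursion.

def extrapolate_to_infinite_order(contributions):
    """contributions: [A1, A2, ..., An], highest available order last."""
    a_known = sum(contributions)
    r = contributions[-1] / contributions[-2]   # ratio of the two highest orders
    if abs(r) >= 1.0:
        raise ValueError("series does not contract; extrapolation invalid")
    tail = contributions[-1] * r / (1.0 - r)    # A_{n+1} + A_{n+2} + ... for fixed r
    return a_known + tail

# Hypothetical perturbation contributions A1..A4 (arbitrary units):
print(extrapolate_to_infinite_order([-5.2, -1.1, -0.30, -0.09]))
```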

  5. Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil

    2004-01-01

    A study to determine a limiting distance to span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half had been completed, so the scope of this report is limited to the design of the models, and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure signature shapes were becoming 'flat-topped'. This trend toward a 'flat-top' pressure signature shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.

  6. CORONAL ALFVEN SPEED DETERMINATION: CONSISTENCY BETWEEN SEISMOLOGY USING AIA/SDO TRANSVERSE LOOP OSCILLATIONS AND MAGNETIC EXTRAPOLATION

    SciTech Connect

    Verwichte, E.; Foullon, C.; White, R. S.; Van Doorsselaere, T.

    2013-04-10

    Two transversely oscillating coronal loops are investigated in detail during a flare on 2011 September 6 using data from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. We compare two independent methods to determine the Alfven speed inside these loops. Through the period of oscillation and loop length, information about the Alfven speed inside each loop is deduced seismologically. This is compared with the Alfven speed profiles deduced from magnetic extrapolation and spectral methods using the AIA bandpasses. We find that for both loops the two methods are consistent. Also, we find that the average Alfven speed based on loop travel time is not necessarily a good measure to compare with the seismological result, which explains earlier reported discrepancies. Instead, the effect of density and magnetic stratification on the wave mode has to be taken into account. We discuss the implications of combining seismological, extrapolation, and spectral methods in deducing the physical properties of coronal loops.

  7. Smooth extrapolation of unknown anatomy via statistical shape models

    NASA Astrophysics Data System (ADS)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.

  8. Acute toxicity value extrapolation with fish and aquatic invertebrates

    USGS Publications Warehouse

    Buckler, Denny R.; Mayer, Foster L.; Ellersieck, Mark R.; Asfaw, Amha

    2005-01-01

    Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled “Ecological Risk Analysis” (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive than fish to environmental contaminants.
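
    A minimal sketch of how such a single-species extrapolation factor might be applied is given below; the LC50 value is hypothetical, and only the factors quoted in the abstract are used.

```python
# A minimal sketch of the single-species extrapolation described above: multiply a
# measured acute toxicity value by the species-specific factor to estimate a value
# protective of more sensitive, untested members of the community. The LC50 below
# is hypothetical; the factors are those quoted in the abstract.

EXTRAPOLATION_FACTORS = {"rainbow_trout": 0.412, "bluegill": 0.331, "scud": 0.041}

def protective_estimate(lc50, species):
    """Scale a measured LC50 (e.g., mg/L) by the species extrapolation factor."""
    return lc50 * EXTRAPOLATION_FACTORS[species]

print(protective_estimate(2.5, "rainbow_trout"))  # 2.5 mg/L trout LC50 -> ~1.03 mg/L
```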

  9. Extrapolation of vertical target motion through a brief visual occlusion.

    PubMed

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects. PMID:19882150

  10. Chiral and continuum extrapolation of partially-quenched hadron masses

    SciTech Connect

    Chris Allton; Wes Armour; Derek Leinweber; Anthony Thomas; Ross Young

    2005-09-29

    Using the finite-range regularization (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyze the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement ({approx}1%) with the experimental value of M{sub {rho}} from the former approach. These results are extended to the case of the nucleon mass.

  11. Optical tomography by the temporally extrapolated absorbance method

    NASA Astrophysics Data System (ADS)

    Oda, Ichiro; Eda, Hideo; Tsunazawa, Yoshio; Takada, Michinosuke; Yamada, Yukio; Nishimura, Goro; Tamura, Mamoru

    1996-01-01

    The concept of the temporally extrapolated absorbance method (TEAM) for optical tomography of turbid media has been verified by fundamental experiments and image reconstruction. The TEAM uses the time-resolved spectroscopic data of the reference and object to provide projection data that are processed by conventional backprojection. Optical tomography images of a phantom consisting of axisymmetric double cylinders were experimentally obtained with the TEAM and time-gating and continuous-wave (CW) methods. The reconstructed TEAM images are compared with those obtained with the time-gating and CW methods and are found to have better spatial resolution.

  12. Nonlinear Force-Free Field Extrapolation of NOAA AR 0696

    NASA Astrophysics Data System (ADS)

    Thalmann, J. K.; Wiegelmann, T.

    2007-12-01

    We investigate the 3D coronal magnetic field structure of NOAA AR 0696 in the period of November 09-11, 2004, before and after an X2.5 flare (occurring around 02:13 UT on November 10, 2004). The coronal magnetic field dominates the structure of the solar corona and consequently plays a key role for the understanding of the initiation of flares. The most accurate presently available method to derive the coronal magnetic field is nonlinear force-free field extrapolation from measurements of the photospheric magnetic field vector. These vector-magnetograms were processed from Stokes I, Q, U, and V measurements of the Big Bear Solar Observatory and extrapolated into the corona with the nonlinear force-free optimization code developed by Wiegelmann (2004). We analyze the corresponding time series of coronal equilibria regarding topology changes of the 3D coronal magnetic field during the flare. Furthermore, quantities such as the temporal evolution of the magnetic energy and helicity are computed.

  13. Extrapolating Single Organic Ion Solvation Thermochemistry from Simulated Water Nanodroplets.

    PubMed

    Coles, Jonathan P; Houriez, Céline; Meot-Ner Mautner, Michael; Masella, Michel

    2016-09-01

    We compute the ion/water interaction energies of methylated ammonium cations and alkylated carboxylate anions solvated in large nanodroplets of 10 000 water molecules using 10 ns molecular dynamics simulations and an all-atom polarizable force-field approach. Together with our earlier results concerning the solvation of these organic ions in nanodroplets whose molecular sizes range from 50 to 1000, these new data allow us to discuss the reliability of extrapolating absolute single-ion bulk solvation energies from small ion/water droplets using common power-law functions of cluster size. We show that reliable estimates of these energies can be extrapolated from a small data set comprising the results of three droplets whose sizes are between 100 and 1000 using a basic power-law function of droplet size. This agrees with an earlier conclusion drawn from a model built within the mean spherical framework and paves the road toward a theoretical protocol to systematically compute the solvation energies of complex organic ions. PMID:27420562
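
    As a sketch of the basic power-law extrapolation described above, the following fits hypothetical droplet energies against N^(-1/3) and reads the bulk value off the intercept; the exponent and data are assumptions for illustration, not the published protocol.

```python
# A minimal sketch of extrapolating a single-ion solvation energy to the bulk from
# droplet data using a basic power law in cluster size, E(N) = E_bulk + a * N**(-1/3).
# The droplet sizes and energies below are hypothetical placeholders; the exponent
# and fitting details of the published protocol may differ.

import numpy as np

sizes = np.array([100.0, 1000.0, 10000.0])        # water molecules per droplet
energies = np.array([-68.0, -73.5, -76.0])        # ion/water interaction energy, kcal/mol

x = sizes ** (-1.0 / 3.0)                         # power-law variable
slope, intercept = np.polyfit(x, energies, 1)     # linear fit E = intercept + slope * x
print("extrapolated bulk solvation energy:", intercept)
```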

  14. Extrapolation of critical Rayleigh values using static nodal integral methods

    SciTech Connect

    Wilson, G.L.; Rydin, R.A.

    1988-01-01

    The Benard problem is the study of the convective motion of a fluid in a rectangular cavity that is uniformly heated from below. Flow bifurcation in the cavity is a function of the Rayleigh number (Ra). The time-dependent nodal integral method (TDNIM) has been reported previously; its development leads to a set of 11 equations per node. The static nodal integral method (SNIM) was derived from the TDNIM by forcing the dependent variable at adjacent time steps (one of the velocity components or temperature) to take on the node integral average value. The paper summarizes the SNIM calculation of Ra for mesh sizes ranging from 4 x 4 to 24 x 24. The numerical calculation of Ra is within plus or minus one-half unit. The relative errors are calculated based on the obtained extrapolated value of Ra*{sub best} = 2584. The paper also summarizes three-point schemes used with increasingly finer mesh combinations. This approach avoids the contamination of the results with a coarse mesh; however, the calculation of n is very sensitive to small changes in the numerical values obtained for Ra*. In this approach, the extrapolated values quickly converge to Ra*{sub e} between 2583 and 2584 with n {approx}2.0 as desired, and give a best value of Ra*{sub best} = 2584.
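
    A minimal sketch of a three-point extrapolation of a mesh-dependent quantity of the kind described above is shown below; the assumed form Ra*(h) = Ra*_e + C h^n, the constant refinement ratio, and the sample values are illustrative assumptions rather than the SNIM data.

```python
# A minimal sketch of a three-point extrapolation of a mesh-dependent quantity,
# assuming Ra*(h) = Ra*_e + C * h**n and a constant mesh refinement ratio r.
# The mesh sizes and Ra* values below are hypothetical; the SNIM study used its
# own mesh combinations and reported Ra*_e between 2583 and 2584.

import math

def richardson_three_point(f_coarse, f_medium, f_fine, r):
    """Return (extrapolated value, observed order n) for refinement ratio r > 1."""
    n = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_extrap = f_fine - (f_medium - f_fine) / (r ** n - 1.0)
    return f_extrap, n

# Hypothetical Ra* values from 6x6, 12x12, and 24x24 meshes (ratio r = 2):
print(richardson_three_point(2612.0, 2591.0, 2585.8, 2.0))
```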

  15. California's Proposition 65: extrapolating animal toxicity to humans.

    PubMed

    Kilgore, W W

    1990-01-01

    In 1986, the voters of California passed a law regarding the concept of extrapolating animal toxicity data to humans. The California Safe Drinking Water and Toxic Enforcement Act of 1986, known as Proposition 65, does five things: 1. It creates a list of chemicals (including a number of agricultural chemicals) known to cause cancer or reproductive toxicity; 2. It limits discharges of listed chemicals to drinking water sources; 3. It requires prior warning before exposure to listed chemicals by anyone in the course of doing business; 4. It creates a list of chemicals requiring testing for carcinogenicity or reproductive toxicity; and 5. It requires the Governor to consult with qualified experts (a 12-member "Scientific Advisory Panel" was appointed) as necessary to carry out his duties. This paper discusses the details and implications of this proposition. Areas of responsibility have been assigned. The definition of significant risk is being addressed. PMID:2248253

  16. Behavioral effects of carbon monoxide: Meta analyses and extrapolations

    SciTech Connect

    Benignus, V.A.

    1993-03-16

    In the absence of reliable data, the present work was performed to estimate the dose-effect function of carboxyhemoglobin (COHb) on behavior in humans. By meta analysis, a COHb-behavior dose-effect function was estimated for rats and corrected for effects of hypothermia (which accompanies COHb increases in rats but not in humans). Using pulmonary function models and blood-gas equations, equivalent COHb values were calculated for data in the literature on hypoxic hypoxia (HH) and behavior. Another meta analysis was performed to fit a dose-effect function to the equivalent-COHb data and to correct for the behavioral effects of hypocapnia (which usually occurs during HH but not with COHb elevation). The two extrapolations agreed closely and indicated that for healthy, sedentary persons, it would require 18-25% COHb to produce a 10% decrement in behavior. Confidence intervals were computed to characterize the uncertainty. Frequent reports of lower-level effects were discussed.

  17. Parallel 2D and 3D Prestack Depth Migration Using Recursive Kirchhoff Wavefield Extrapolation

    NASA Astrophysics Data System (ADS)

    Geiger, H. D.; Margrave, G. F.; Liu, K.

    2004-05-01

    Recursive Kirchhoff wavefield extrapolation in the space-frequency domain can be thought of as a simple convolutional filter that calculates a single output point at depth z+dz using a weighted summation of all input points within the extrapolator aperture at depth z. The desired velocity values for the extrapolator are the ones that provide the best approximation of the true phase (propagation time) of the seismic wavefield between the input points and the output point. Recursive Kirchhoff extrapolators can be designed to handle lateral variations in velocity in a number of ways: a PSPI-type (phase shift plus interpolation) extrapolator uses only the velocity at the output point, a NSPS-type (nonstationary phase shift) extrapolator uses the velocities at the input points; a SNPS-type (symmetric nonstationary phase shift) extrapolator incorporates two extrapolation steps of dz/2 where the first step uses the velocities at the input points (NSPS-type) and the second step uses the velocity at the output point (PSPI-type); while the Weyl-type extrapolator uses an average of the velocities between each input point and the output point. Here, we introduce the PAVG-type (slowness averaged) extrapolator, which uses velocity values calculated by an average of slowness along straight raypaths between each input point and the output point. Parallel 2D and 3D prestack depth migration algorithms have been coded in both MATLAB and C and tested on a small Linux cluster. A simple synthetic with a lateral step in velocity shows that the PAVG Kirchhoff extrapolator is very close to the exact desired response. Tests using the 2D Marmousi synthetic data set suggest that the extrapolator behaviour is only one of many considerations that must be addressed for accurate depth imaging. Other important considerations include preprocessing, aperture size, taper width, extrapolator stability, and imaging condition.
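
    To illustrate the PAVG-type velocity choice, the sketch below averages slowness along a straight raypath between an input point and the output point on a hypothetical velocity model; it is an illustration of the idea, not the authors' implementation.

```python
# A minimal sketch of the "slowness averaged" (PAVG-type) velocity used in a
# recursive Kirchhoff extrapolator: average the slowness 1/v(x, z) along a straight
# raypath between each input point at depth z and the output point at depth z + dz,
# then invert it to obtain an effective velocity. The grid and sampling are hypothetical.

import numpy as np

def pavg_velocity(vel_func, x_in, z_in, x_out, z_out, nsamp=50):
    """Effective velocity from the path-averaged slowness along a straight ray."""
    t = np.linspace(0.0, 1.0, nsamp)
    xs = x_in + t * (x_out - x_in)
    zs = z_in + t * (z_out - z_in)
    slowness = 1.0 / np.array([vel_func(x, z) for x, z in zip(xs, zs)])
    return 1.0 / slowness.mean()

# Hypothetical laterally varying velocity model with a step at x = 0:
v = lambda x, z: 2000.0 if x < 0.0 else 3000.0
print(pavg_velocity(v, x_in=-500.0, z_in=1000.0, x_out=500.0, z_out=1025.0))
```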

  18. Validation subset selections for extrapolation oriented QSPAR models.

    PubMed

    Szántai-Kis, Csaba; Kövesdi, István; Kéri, György; Orfi, László

    2003-01-01

    One of the most important features of QSPAR models is their predictive ability. The predictive ability of QSPAR models should be checked by external validation. In this work we examined three different types of external validation set selection methods for their usefulness in in-silico screening. The usefulness of the selection methods was studied in such a way that: 1) We generated thousands of QSPR models and stored them in 'model banks'. 2) We selected a final top model from the model banks based on three different validation set selection methods. 3) We predicted large data sets, which we called 'chemical universe sets', and calculated the corresponding SEPs. The models were generated from small fractions of the available water solubility data during a GA Variable Subset Selection procedure. The external validation sets were constructed by random selections, uniformly distributed selections or by perimeter-oriented selections. We found that the best performing models on the perimeter-oriented external validation sets usually gave the best validation results when the remaining part of the available data was overwhelmingly large, i.e., when the model had to make a lot of extrapolations. We also compared the top final models obtained from external validation set selection methods in three independent and different sizes of 'chemical universe sets'.

  19. Impact ejecta dynamics in an atmosphere - Experimental results and extrapolations

    NASA Technical Reports Server (NTRS)

    Schultz, P. H.; Gault, D. E.

    1982-01-01

    It is noted that the impacts of 0.635-cm aluminum projectiles at 6 km/sec into fine pumice dust, at 1 atm, generate a ball of ionized gas behind an expanding curtain of upward moving ejecta. The gas ball forms a toroid which dissolves as it is driven along the interior of the ejecta curtain, by contrast to near-surface explosions in which a fireball envelops early-time crater growth. High frame rate Schlieren photographs show that the atmosphere at the base of the ejecta curtain is initially turbulent, but later forms a vortex. These experiments suggest that although small size ejecta may be decelerated by air drag, they are not simply lofted and suspended but become incorporated in an ejecta cloud that is controlled by air flow which is produced by the response of the atmosphere to the impact. The extrapolation of these results to large body impacts on the earth suggests such contrasts with laboratory experiments as a large quantity of impact-generated vapor, the supersonic advance of the ejecta curtain, the lessened effect of air drag due to the tenuous upper atmosphere, and the role of secondary cratering.

  20. Detail enhancement of blurred infrared images based on frequency extrapolation

    NASA Astrophysics Data System (ADS)

    Xu, Fuyuan; Zeng, Deguo; Zhang, Jun; Zheng, Ziyang; Wei, Fei; Wang, Tiedan

    2016-05-01

    A novel algorithm for enhancing the details of blurred infrared images based on frequency extrapolation is presented in this paper. Unlike other researchers' work, this algorithm mainly focuses on how to predict higher frequency information based on the Laplacian pyramid separation of the blurred image. The algorithm uses the first level of the high frequency component of the pyramid of the blurred image to reverse-generate a higher, non-existing frequency component, and adds it back to the histogram-equalized input blurred image. A simple nonlinear operator is used to analyze the extracted first-level high frequency component of the pyramid. Two critical parameters participate in the calculation, known as the clipping parameter C and the scaling parameter S. A detailed analysis of how these two parameters act during the procedure is illustrated with figures in this paper. The blurred image becomes clearer, and the detail is enhanced by the added higher frequency information. This algorithm has the advantages of computational simplicity and good performance, and it can be deployed in real-time industrial applications. Extensive experiments were performed, and illustrations of the algorithm's performance are given to demonstrate its effectiveness.

  1. Interspecies Gene Name Extrapolation--A New Approach.

    PubMed

    Petric, Roxana Cojocneanu; Braicu, Cornelia; Bassi, Cristian; Pop, Laura; Taranu, Ionelia; Dragos, Nicolae; Dumitrascu, Dan; Negrini, Massimo; Berindan-Neagoe, Ioana

    2015-01-01

    The use of animal models has facilitated numerous scientific developments, especially when employing "omics" technologies to study the effects of various environmental factors on humans. Our study presents a new bioinformatics pipeline suitable for cases in which the microarray data generated from animal models do not contain the necessary human gene name annotation. We conducted single-color gene expression microarray analysis on duodenum and spleen tissue obtained from pigs which had been exposed to zearalenone and Escherichia coli contamination, either alone or combined. By performing a combination of file format modifications and data alignments using various online tools as well as a command line environment, we performed the pig-to-human gene name extrapolation with an average yield of 58.34%, compared to 3.64% when applying simpler methods. In conclusion, while online data analysis portals on their own are of great importance in data management and assessment, our new pipeline provides a more effective approach for a situation which can be frequently encountered by researchers in the "omics" era. PMID:26407293

  2. An empirical relationship for extrapolating sparse experimental lap joint data.

    SciTech Connect

    Segalman, Daniel Joseph; Starr, Michael James

    2010-10-01

    Correctly incorporating the influence of mechanical joints in built-up mechanical systems is a critical element of model development for structural dynamics predictions. Quality experimental data are often difficult to obtain and are rarely sufficient to fully determine the parameters of relevant mathematical models. On the other hand, fine-mesh finite element (FMFE) modeling facilitates innumerable numerical experiments at modest cost. Detailed FMFE analysis of built-up structures with frictional interfaces reproduces trends among problem parameters found experimentally, but there are qualitative differences. Those differences are currently ascribed to the very approximate nature of the friction model available in most finite element codes. Though numerical simulations are insufficient to produce qualitatively correct behavior of joints, some relations, developed here through observations of a multitude of numerical experiments, suggest interesting relationships among joint properties measured under different loading conditions. These relationships can be generalized into forms consistent with data from physical experiments. One such relationship, developed here, expresses the rate of energy dissipation per cycle within the joint under various combinations of extensional and clamping load in terms of dissipation under other load conditions. The use of this relationship, though not exact, is demonstrated for the purpose of extrapolating a representative set of experimental data to span the range of variability observed from real data.

  3. Time-domain incident-field extrapolation technique based on the singularity-expansion method

    SciTech Connect

    Klaasen, J.J.

    1991-05-01

    In this report, a method is presented to extrapolate measurements from Nuclear Electromagnetic Pulse (NEMP) assessments directly in the time domain. This method is based on a time-domain extrapolation function which is obtained from the Singularity Expansion Method representation of the measured incident field of the NEMP simulator. Once the time-domain extrapolation function is determined, the responses recorded during an assessment can be extrapolated simply by convolving them with the time-domain extrapolation function. It is found that to obtain useful extrapolated responses, the incident field measurements need to be made minimum phase; otherwise unbounded results can be obtained. Results obtained with this technique are presented, using data from actual assessments.
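
    A minimal sketch of the convolution step is given below, with a placeholder kernel standing in for the SEM-derived time-domain extrapolation function; deriving that function from simulator measurements is not shown, and all waveforms are hypothetical.

```python
# A minimal sketch of extrapolating an assessed response by direct convolution with a
# time-domain extrapolation function. The kernel here is an arbitrary unit-area decay
# used only as a placeholder; it is not a singularity-expansion (SEM) fit of any
# simulator field.

import numpy as np

dt = 1e-9                                   # 1 ns sampling (hypothetical)
t = np.arange(0, 200) * dt

measured_response = np.exp(-t / 50e-9) * np.sin(2 * np.pi * 30e6 * t)    # placeholder
extrapolation_kernel = np.exp(-t / 20e-9) / np.sum(np.exp(-t / 20e-9))   # placeholder, unit area

extrapolated = np.convolve(measured_response, extrapolation_kernel)[: t.size]
print(extrapolated[:5])
```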

  4. An analysis of shock coalescence including three-dimensional effects with application to sonic boom extrapolation. Ph.D. Thesis - George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1984-01-01

    A method for analyzing shock coalescence which includes three dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with and has been combined with a nonlinear sonic boom extrapolation program which is based on the method of characteristics. The extrapolation program, is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.

  5. Cross-species extrapolation of chemical effects: Challenges and new insights

    EPA Science Inventory

    One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...

  6. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation

    PubMed Central

    Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300

  7. Why do people appear not to extrapolate trajectories during multiple object tracking? A computational investigation.

    PubMed

    Zhong, Sheng-Hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I

    2014-01-01

    Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as "multiple object tracking," observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
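
    The following sketch shows one simple way a model could weight an extrapolated prediction against a noisy observation (inverse-variance weighting); it illustrates the class of models discussed, not the specific models in the paper, and all numbers are hypothetical.

```python
# A minimal sketch of weighting a noisy position observation against a trajectory
# extrapolation by their reliabilities (inverse variances). With the extrapolation
# variance very large, the model effectively ignores direction of motion, which is
# one of the variants compared in the study. All values are hypothetical.

def weighted_position(observed, extrapolated, var_obs, var_extrap):
    """Inverse-variance weighting of observation and extrapolated prediction."""
    w = var_obs / (var_obs + var_extrap)      # weight given to the extrapolation
    return w * extrapolated + (1.0 - w) * observed

prev, velocity = 10.0, 2.0                    # last position and estimated velocity
prediction = prev + velocity                  # simple linear extrapolation, one frame ahead
print(weighted_position(observed=12.8, extrapolated=prediction, var_obs=1.0, var_extrap=4.0))
```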

  8. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    NASA Astrophysics Data System (ADS)

    Fang, Jun; Gao, Xingyu; Song, Haifeng; Wang, Han

    2016-06-01

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) that is based on the Kohn-Sham density functional theory. Going against the intuition that the higher order of extrapolation possesses a better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy firstly increases and then decreases with respect to the order, and an optimal extrapolation order in terms of minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. By example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, thus the implementation of either of them does not lead to essential difference in the extrapolation accuracy.

  9. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics.

    PubMed

    Fang, Jun; Gao, Xingyu; Song, Haifeng; Wang, Han

    2016-06-28

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) that is based on the Kohn-Sham density functional theory. Going against the intuition that the higher order of extrapolation possesses a better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy firstly increases and then decreases with respect to the order, and an optimal extrapolation order in terms of minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. By example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, thus the implementation of either of them does not lead to essential difference in the extrapolation accuracy. PMID:27369493
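
    A minimal sketch of how a fixed extrapolation order is applied to previous-step data is given below, using the standard equal-time-step polynomial extrapolation coefficients; the arrays stand in for wavefunction coefficients, and the code is illustrative rather than an interface to any DFT package.

```python
# A minimal sketch of polynomial wavefunction extrapolation over previous MD steps:
# for order m with a fixed time step, the initial SCF guess at the next step is
#   psi_guess = sum_k (-1)**(k+1) * C(m, k) * psi[n+1-k]   (m = 2 gives 2*psi_n - psi_{n-1}).
# The study above concerns choosing m; this only shows how a given m is applied, and
# the "wavefunctions" here are stand-in coefficient vectors.

import numpy as np
from math import comb

def extrapolate_guess(history, order):
    """history: list of previous coefficient arrays, most recent last."""
    guess = np.zeros_like(history[-1])
    for k in range(1, order + 1):
        guess += ((-1) ** (k + 1)) * comb(order, k) * history[-k]
    return guess

psi_history = [np.array([0.0, 1.0]), np.array([0.1, 1.1]), np.array([0.2, 1.2])]
print(extrapolate_guess(psi_history, order=2))   # -> [0.3, 1.3], a linear-in-time guess
```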

  10. Measuring Thermodynamic Length

    SciTech Connect

    Crooks, Gavin E

    2007-09-07

    Thermodynamic length is a metric distance between equilibrium thermodynamic states. Among other interesting properties, this metric asymptotically bounds the dissipation induced by a finite time transformation of a thermodynamic system. It is also connected to the Jensen-Shannon divergence, Fisher information, and Rao's entropy differential metric. Therefore, thermodynamic length is of central interest in understanding matter out of equilibrium. In this Letter, we will consider how to define thermodynamic length for a small system described by equilibrium statistical mechanics and how to measure thermodynamic length within a computer simulation. Surprisingly, Bennett's classic acceptance ratio method for measuring free energy differences also measures thermodynamic length.
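
    For reference, one common way the quantity is defined is sketched below; the notation is an assumption and may differ from the Letter's.

```latex
% A sketch of one common definition: the thermodynamic length of a path \lambda(t),
% t \in [0,\tau], through control-parameter space with metric tensor g_{ij}
% (e.g., a Fisher-information metric) is
\mathcal{L} = \int_0^{\tau} \sqrt{\dot{\lambda}^i(t)\, g_{ij}\big(\lambda(t)\big)\, \dot{\lambda}^j(t)}\,\mathrm{d}t ,
% while the related divergence
\mathcal{J} = \tau \int_0^{\tau} \dot{\lambda}^i(t)\, g_{ij}\big(\lambda(t)\big)\, \dot{\lambda}^j(t)\,\mathrm{d}t
% satisfies \mathcal{L}^2 \le \mathcal{J} (Cauchy-Schwarz) and bounds the excess
% dissipation of a slow, finite-time transformation.
```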

  11. The evolution of σ_γP with coherence length

    NASA Astrophysics Data System (ADS)

    Caldwell, Allen

    2016-07-01

    Assuming the form σ_γP ∝ l^(λ_eff) at fixed Q^2 for the behavior of the virtual-photon proton scattering cross section, where l is the coherence length of the photon fluctuations, it is seen that the extrapolated values of σ_γP for different Q^2 cross for l ≈ 10^8 fm. It is argued that this behavior is not physical, and that the behavior of the cross sections must change before this coherence length l is reached. This could set the scale for the onset of saturation of the parton densities in the photon, and thereby saturation of parton densities in general.
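
    A short illustrative consequence of the assumed power law (with hypothetical normalizations c_1 and c_2) shows why curves with different effective exponents must eventually cross:

```latex
% Illustrative only: suppose  \sigma_{\gamma P}(l; Q_1^2) = c_1\, l^{\lambda_1}  and
% \sigma_{\gamma P}(l; Q_2^2) = c_2\, l^{\lambda_2}  with  \lambda_1 \neq \lambda_2.
% Setting the two equal gives the crossing coherence length
l_{\mathrm{cross}} = \left(\frac{c_2}{c_1}\right)^{1/(\lambda_1 - \lambda_2)} ,
% so power-law behavior with Q^2-dependent exponents forces a crossing at some finite l,
% which the abstract argues must be preempted by a change in the cross-section behavior.
```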

  12. Controversies in Establishing Biosimilarity: Extrapolation of Indications and Global Labeling Practices.

    PubMed

    Ebbers, Hans C; Chamberlain, Paul

    2016-02-01

    The principles of establishing biosimilarity are to demonstrate structural and functional similarity to a reference product using the most discriminatory analytical methods. There is still considerable controversy on the scientific basis for extrapolation of indications for biosimilars, which has been strengthened by diverging global regulatory decision making. Closely related to the question of extrapolation is the question of how to communicate the evidence base for authorizing biosimilars to healthcare professionals. In this paper we will consider some of the discussions around extrapolation of indications and the implications of decisions of various regulatory agencies in the world regarding the authorization and labeling of biosimilars. PMID:26758077

  13. Concerning the extrapolation of solar nonlinear force-free magnetic fields

    NASA Technical Reports Server (NTRS)

    Gary, G. Allen

    1990-01-01

    This paper contains a review and discussion of the mathematical basis of the extrapolation techniques involved in using photospheric vector magnetograms to obtain the coronal field above the surface. The two basic techniques employing the Cauchy initial value problem and the variational techniques are reviewed in terms of the mathematical and practical applications. A short review is presented of the current research on numerical modeling techniques in the area of extrapolating vector magnetograms; specifically, algorithms to extrapolate nonlinear force-free magnetic fields from the photosphere are considered.

  14. Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements

    NASA Technical Reports Server (NTRS)

    Shepperd, S. W.; Robertson, W. M.

    1973-01-01

    The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.

  15. Neandertal clavicle length

    PubMed Central

    Trinkaus, Erik; Holliday, Trenton W.; Auerbach, Benjamin M.

    2014-01-01

    The Late Pleistocene archaic humans from western Eurasia (the Neandertals) have been described for a century as exhibiting absolutely and relatively long clavicles. This aspect of their body proportions has been used to distinguish them from modern humans, invoked to account for other aspects of their anatomy and genetics, used in assessments of their phylogenetic polarities, and used as evidence for Late Pleistocene population relationships. However, it has been unclear whether the usual scaling of Neandertal clavicular lengths to their associated humeral lengths reflects long clavicles, short humeri, or both. Neandertal clavicle lengths, along with those of early modern humans and latitudinally diverse recent humans, were compared with both humeral lengths and estimated body masses (based on femoral head diameters). The Neandertals do have long clavicles relative to their humeri, even though they fall within the ranges of variation of early and recent humans. However, when scaled to body masses, their humeral lengths are relatively short, and their clavicular lengths are indistinguishable from those of Late Pleistocene and recent modern humans. The few sufficiently complete Early Pleistocene Homo clavicles seem to have relative lengths also well within recent human variation. Therefore, appropriately scaled clavicular length seems to have varied little through the genus Homo, and it should not be used to account for other aspects of Neandertal biology or their phylogenetic status. PMID:24616525

  16. Dose-response relationships and extrapolation in toxicology - Mechanistic and statistical considerations

    EPA Science Inventory

    Controversy on toxicological dose-response relationships and low-dose extrapolation of respective risks is often the consequence of misleading data presentation, lack of differentiation between types of response variables, and diverging mechanistic interpretation. In this chapter...

  17. [Effects of spatial heterogeneity on spatial extrapolation of sampling plot data].

    PubMed

    Liang, Yu; He, Hong-Shi; Hu, Yuan-Man; Bu, Ren-Cang

    2012-01-01

    By using a model combination method, this paper simulated the changes of a response variable (tree species distribution area at the landscape level under climate change) under three scenarios of environmental spatial heterogeneity, analyzed the differences among the simulated results under the scenarios, and discussed the effects of environmental spatial heterogeneity on extrapolating, to larger spatial extents, the tree species responses to climate change observed in sampling plots. For most tree species, spatial heterogeneity had little effect on the extrapolation from plot scale to class scale; for the tree species insensitive to climate warming and for the azonal species, spatial heterogeneity also had little effect on the extrapolation from plot scale to zonal scale. By contrast, for the tree species sensitive to climate warming, spatial heterogeneity did affect the extrapolation from plot scale to zonal scale, and these effects varied among scenarios.

  18. Melting of “non-magic” argon clusters and extrapolation to the bulk limit

    SciTech Connect

    Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke

    2014-01-28

    The melting of argon clusters ArN is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, “Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations,” Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes.

  19. Melting of "non-magic" argon clusters and extrapolation to the bulk limit.

    PubMed

    Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke

    2014-01-28

    The melting of argon clusters ArN is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, "Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations," Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes. PMID:25669541
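    Neither record states the extrapolation law used for the bulk limit; a common assumption, adopted here purely for illustration, is that the cluster melting temperature approaches the bulk value linearly in N^(-1/3), so the bulk estimate is the intercept of a straight-line fit. The temperatures below are invented.

```python
import numpy as np

# Illustrative (made-up) cluster melting temperatures T_m(N) in kelvin;
# the actual study used parallel-tempering Monte Carlo data for N = 55..309.
N  = np.array([55, 100, 147, 200, 309], dtype=float)
Tm = np.array([45.0, 55.0, 60.0, 64.0, 69.0])

# Assume T_m(N) = T_bulk + c * N**(-1/3) and fit a straight line in x = N^(-1/3).
x = N ** (-1.0 / 3.0)
c, T_bulk = np.polyfit(x, Tm, 1)   # slope, intercept

print(f"extrapolated bulk melting temperature: {T_bulk:.1f} K")
```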

  20. Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?

    SciTech Connect

    Feng, Y.; Lin, S.; Huang, S.; Shrestha, S.; Conibeer, G.

    2015-03-28

    Although Tauc plot extrapolation has been widely adopted for extracting bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals have been formulated based on a purely theoretical approach. This result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate estimate of the intrinsic bandgap, while using the extrapolation to extract a bandgap elevated by quantum confinement is shown to be unjustified.
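    As a sketch of what a generalized Tauc extrapolation looks like in practice, one plots (αhν)^(1/r) against photon energy, fits the linear region, and reads the band gap off the energy-axis intercept; the power factor r runs from 1/2 (bulk, direct-allowed) toward 1 (strongly confined). The data, the chosen r, and the function names below are invented for illustration.

```python
import numpy as np

def tauc_gap(hnu, alpha, r=0.75, fit_window=None):
    """Estimate a band gap by linear extrapolation of (alpha*hnu)**(1/r) vs hnu.
    r is the power factor: 1/2 for a direct-allowed bulk transition, up to 1
    for strongly confined (0D) nanocrystals, with intermediate values between."""
    y = (alpha * hnu) ** (1.0 / r)
    if fit_window is not None:
        lo, hi = fit_window
        mask = (hnu >= lo) & (hnu <= hi)
        hnu, y = hnu[mask], y[mask]
    slope, intercept = np.polyfit(hnu, y, 1)
    return -intercept / slope          # x-axis intercept = estimated gap

if __name__ == "__main__":
    # Synthetic absorption data obeying alpha*hnu ~ (hnu - Eg)**r with Eg = 2.0 eV
    Eg_true, r = 2.0, 0.75
    hnu = np.linspace(2.05, 2.6, 40)
    alpha = (hnu - Eg_true) ** r / hnu
    print(f"recovered gap: {tauc_gap(hnu, alpha, r=r):.3f} eV")
```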

  1. Application of a framework for extrapolating chemical effects across species in pathways controlled by estrogen receptor-α

    EPA Science Inventory

    Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...

  2. Spurious long-range entanglement and replica correlation length

    NASA Astrophysics Data System (ADS)

    Zou, Liujun; Haah, Jeongwan

    2016-08-01

    Topological entanglement entropy has been regarded as a smoking-gun signature of topological order in two dimensions, capturing the total quantum dimension of the topological particle content. An extrapolation method on cylinders has been used frequently to measure the topological entanglement entropy. Here, we show that a class of short-range entangled 2D states, when put on an infinite cylinder of circumference L, exhibits the entanglement Rényi entropy of any integer index α ≥ 2 that obeys Sα = aL − γ, where a, γ > 0. Under the extrapolation method, the subleading term γ would be identified as the topological entanglement entropy, which is spurious. A nonzero γ is always present if the 2D state reduces to a certain symmetry-protected topological 1D state, upon disentangling spins that are far from the entanglement cut. The internal symmetry that stabilizes γ > 0 is not necessarily a symmetry of the 2D state, but should be present after the disentangling reduction. If the symmetry is absent, γ decays exponentially in L with a characteristic length, termed the replica correlation length, which can be arbitrarily large compared to the two-point correlation length of the 2D state. We propose a simple numerical procedure to measure the replica correlation length through replica correlation functions. We also calculate the replica correlation functions for representative wave functions of Abelian discrete gauge theories and the double semion theory in 2D, to show that they decay abruptly to zero. This supports a conjecture that the replica correlation length being small implies that the subleading term from the extrapolation method determines the total quantum dimension.
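    For concreteness, the cylinder extrapolation referred to above amounts to fitting Sα(L) to a straight line in the circumference L and reading the negative intercept as the subleading term γ; a minimal sketch with invented entropies (the paper's point being that a γ > 0 obtained this way need not be topological):

```python
import numpy as np

# Invented Renyi entropies S_alpha(L) on cylinders of circumference L
L = np.array([4, 6, 8, 10, 12], dtype=float)
S = np.array([2.1, 3.3, 4.5, 5.7, 6.9])   # consistent with S = a*L - gamma

a, minus_gamma = np.polyfit(L, S, 1)
gamma = -minus_gamma
print(f"slope a = {a:.3f}, subleading term gamma = {gamma:.3f}")
# A positive gamma extracted this way is not necessarily topological.
```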

  3. In situ LTE exposure of the general public: Characterization and extrapolation.

    PubMed

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure from different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals is lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields.

  4. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifact, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with the local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by high-order constrained interpolation profile for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost. PMID:27305677

  5. Myofilament length dependent activation

    SciTech Connect

    de Tombe, Pieter P.; Mateja, Ryan D.; Tachampa, Kittipong; Mou, Younss Ait; Farman, Gerrie P.; Irving, Thomas C.

    2010-05-25

    The Frank-Starling law of the heart describes the interrelationship between end-diastolic volume and cardiac ejection volume, a regulatory system that operates on a beat-to-beat basis. The main cellular mechanism that underlies this phenomenon is an increase in the responsiveness of cardiac myofilaments to activating Ca2+ ions at a longer sarcomere length, commonly referred to as myofilament length-dependent activation. This review focuses on what molecular mechanisms may underlie myofilament length dependency. Specifically, the roles of inter-filament spacing, thick and thin filament based regulation, as well as sarcomeric regulatory proteins are discussed. Although the 'Frank-Starling law of the heart' constitutes a fundamental cardiac property that has been appreciated for well over a century, it is still not known in muscle how the contractile apparatus transduces the information concerning sarcomere length to modulate ventricular pressure development.

  6. Length Paradox in Relativity

    ERIC Educational Resources Information Center

    Martins, Roberto de A.

    1978-01-01

    Describes a thought experiment using a general analysis approach with Lorentz transformations to show that the apparent self-contradictions of special relativity concerning the length-paradox are really non-existent. (GA)

  7. Editorial: Redefining Length

    SciTech Connect

    Sprouse, Gene D.

    2011-07-15

    Technological changes have moved publishing to electronic-first publication where the print version has been relegated to simply another display mode. Distribution in HTML and EPUB formats, for example, changes the reading environment and reduces the need for strict pagination. Therefore, in an effort to streamline the calculation of length, the APS journals will no longer use the printed page as the determining factor for length. Instead the journals will now use word counts (or word equivalents for tables, figures, and equations) to establish length; for details please see http://publish.aps.org/authors/length-guide. The title, byline, abstract, acknowledgment, and references will not be included in these counts allowing authors the freedom to appropriately credit coworkers, funding sources, and the previous literature, bringing all relevant references to the attention of readers. This new method for determining length will be easier for authors to calculate in advance, and lead to fewer length-associated revisions in proof, yet still retain the quality of concise communication that is a virtue of short papers.

  8. Equilibrium CO bond lengths

    NASA Astrophysics Data System (ADS)

    Demaison, Jean; Császár, Attila G.

    2012-09-01

    Based on a sample of 38 molecules, 47 accurate equilibrium CO bond lengths have been collected and analyzed. These ultimate experimental (reEX), semiexperimental (reSE), and Born-Oppenheimer (reBO) equilibrium structures are compared to reBO estimates from two lower-level techniques of electronic structure theory, MP2(FC)/cc-pVQZ and B3LYP/6-311+G(3df,2pd). A linear relationship is found between the best equilibrium bond lengths and their MP2 or B3LYP estimates. These (and similar) linear relationships permit estimation of the CO bond length with an accuracy of 0.002 Å within the full range of 1.10-1.43 Å, corresponding to single, double, and triple CO bonds, for a large number of molecules. The variation of the CO bond length is qualitatively explained using the Atoms in Molecules method. In particular, a clear correlation is found between the CO bond length and the bond critical point density, and it appears that the CO bond is at the same time covalent and ionic. Conditions which permit the computation of an accurate ab initio Born-Oppenheimer equilibrium structure are discussed. In particular, the core-core and core-valence correlation is investigated and it is shown to increase roughly with the bond length.

  9. Area, length and thickness conservation: Dogma or reality?

    NASA Astrophysics Data System (ADS)

    Moretti, Isabelle; Callot, Jean Paul

    2012-08-01

    The basic assumption of quantitative structural geology is the preservation of material during deformation. However the hypothesis of volume conservation alone does not help to predict past or future geometries and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be on average valid. However, neither the length nor the thicknesses are preserved. We propose that in real cases, the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.

  10. Extrapolation of Calibration Curve of Hot-wire Spirometer Using a Novel Neural Network Based Approach.

    PubMed

    Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad

    2012-10-01

    The hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to a fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of the CTA calibration curve will be of great practical value. In this paper, a novel approach based on the conventional neural network and self-organizing map (SOM) method has been proposed to extrapolate the CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach for the extrapolation of the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, this approach for the extrapolation of the CTA calibration curve below its lower limit produces a standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation on the whole measurement range (0.7-30 m/s) is about 1.5%.

  11. Nonlinear force-free extrapolation of the coronal magnetic field based on the magnetohydrodynamic relaxation method

    SciTech Connect

    Inoue, S.; Magara, T.; Choe, G. S.; Kim, K. S.; Pandey, V. S.; Shiota, D.; Kusano, K.

    2014-01-01

    We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm initially proposed by Dedner et al. to effectively clean the numerical errors associated with ∇ · B . Second, the multigrid type method is implemented in our NLFFF to perform direct analysis of the high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high resolution force-free field introduced by Low and Lou with better accuracy in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara. We found that NLFFF extrapolation may be less effective for reproducing areas higher than a half-domain, where some magnetic loops are found in a state of continuous upward expansion. However, an inverse S-shaped structure consisting of the sheared and twisted loops formed in the lower region can be captured well through our NLFFF extrapolation method. We further discuss how well these sheared and twisted fields are reconstructed by estimating the magnetic topology and twist quantitatively.

  12. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    NASA Astrophysics Data System (ADS)

    Mueller, David S.

    2013-04-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
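    The record names the power velocity distribution law but not its exact form; a common choice, assumed here for illustration, is u(z) = a·z^b with b near 1/6, fitted to the measured part of the profile and integrated analytically over the unmeasured top and bottom layers. All numbers and names below are invented and do not come from the extrap software.

```python
import numpy as np

def fit_power_law(z, u):
    """Fit u = a * z**b to measured velocities (z = height above the bed)."""
    b, ln_a = np.polyfit(np.log(z), np.log(u), 1)
    return np.exp(ln_a), b

def unmeasured_discharge(a, b, z1, z2, width):
    """Discharge through a rectangular strip of the given width between
    heights z1 and z2, integrating the fitted power-law profile analytically."""
    return width * a * (z2 ** (b + 1) - z1 ** (b + 1)) / (b + 1)

if __name__ == "__main__":
    # Synthetic measured profile following a 1/6 power law
    z = np.linspace(0.5, 3.5, 10)          # m above the bed (measured zone)
    u = 1.2 * z ** (1.0 / 6.0)             # m/s
    a, b = fit_power_law(z, u)
    q_bottom = unmeasured_discharge(a, b, 0.0, 0.5, width=20.0)  # unmeasured bottom
    q_top    = unmeasured_discharge(a, b, 3.5, 4.0, width=20.0)  # unmeasured top
    print(f"exponent b = {b:.3f}, bottom Q = {q_bottom:.2f}, top Q = {q_top:.2f} m^3/s")
```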

  13. The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James

    2007-01-01

    The high altitude aircraft method has been used at NASA GRC since the early 1960's to calibrate solar cell short circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to 0 air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensation for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way. Another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectra in the stratosphere does not change too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25 based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
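    The Langley-plot step can be made concrete: ln(ISC) is fitted as a linear function of air mass and extrapolated to zero air mass, followed by an Earth-Sun distance correction to 1 AU (the ozone correction mentioned above is omitted here). All numbers in this sketch are invented.

```python
import numpy as np

# Invented short-circuit currents measured at several air masses during a flight
air_mass = np.array([0.35, 0.30, 0.25, 0.20, 0.15])
isc_mA   = np.array([130.5, 131.5, 132.4, 133.4, 134.4])

# Langley plot: ln(Isc) is approximately linear in air mass;
# the intercept at zero air mass gives ln(Isc at AM0).
slope, intercept = np.polyfit(air_mass, np.log(isc_mA), 1)
isc_am0 = np.exp(intercept)

# Correct for the Earth-Sun distance d (in AU) on the day of the flight:
# measured intensity scales as 1/d**2, so multiply by d**2 to refer it to 1 AU.
d_au = 0.9834                      # illustrative value
isc_am0_1au = isc_am0 * d_au ** 2

print(f"Isc(AM0, 1 AU) = {isc_am0_1au:.1f} mA")
```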

  14. Forced Field Extrapolation of the Magnetic Structure of the Hα fibrils in the Solar Chromosphere

    NASA Astrophysics Data System (ADS)

    Xiaoshuai, Zhu; Huaning, Wang; Zhanle, Du; Han, He

    2016-07-01

    We present a careful assessment of forced field extrapolation using the Solar Dynamics Observatory/Helioseismic and Magnetic Imager magnetogram. We use several metrics to check the convergence property. The extrapolated field lines below 3600 km appear to be aligned with most of the Hα fibrils observed by the New Vacuum Solar Telescope. In the region where magnetic energy is far larger than potential energy, the field lines computed by forced field extrapolation are still consistent with the patterns of Hα fibrils while the nonlinear force-free field results show a large misalignment. The horizontal average of the Lorentz force ratio shows that the forced region where the force-free assumption fails can reach heights of 1400-1800 km. The non-force-free state of the chromosphere is also confirmed based on recent radiation magnetohydrodynamics simulations.

  16. Extrapolation uncertainties in the importance-truncated no-core shell model

    NASA Astrophysics Data System (ADS)

    Kruse, M. K. G.; Jurgenson, E. D.; Navrátil, P.; Barrett, B. R.; Ormand, W. E.

    2013-04-01

    Background: The importance-truncated no-core shell model (IT-NCSM) has recently been shown to extend theoretical nuclear structure calculations of p-shell nuclei to larger model (Nmax) spaces. The importance truncation procedure selects only relatively few of the many basis states present in a “large” Nmax basis space, thus making the calculation tractable and reasonably quick to perform. Initial results indicate that the procedure agrees well with the NCSM, in which a complete basis is constructed for a given Nmax. Purpose: An analysis of uncertainties in IT-NCSM, such as those generated from the extrapolations to the complete Nmax space, has not been fully discussed. We present a method for estimating the uncertainty when extrapolating to the complete Nmax space and demonstrate the method by comparing extrapolated IT-NCSM to full NCSM calculations up to Nmax=14. Furthermore, we study the result of extrapolating IT-NCSM ground-state energies to Nmax=∞ and compare the results to similarly extrapolated NCSM calculations. A procedure is formulated to assign uncertainties for Nmax=∞ extrapolations. Method: We report on 6Li calculations performed with the IT-NCSM and compare them to full NCSM calculations. We employ the Entem and Machleidt chiral two-body next-to-next-to-next-to-leading order (N3LO) interaction (regulated at 500 MeV/c), which has been modified to a phase-shift equivalent potential by the similarity renormalization group (SRG) procedure. We investigate the dependence of the procedure on the technique employed to extrapolate to the complete Nmax space, the harmonic oscillator energy (ℏΩ), and investigate the dependence on the momentum-decoupling scale (λ) used in the SRG. We also investigate the use of one or several reference states from which the truncated basis is constructed. Results: We find that the uncertainties generated from various extrapolating functions used to extrapolate to the complete Nmax space increase as Nmax increases. The
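    The record does not spell out the extrapolating functions; a commonly used ansatz, assumed here only for illustration, is an exponential in Nmax, E(Nmax) = E∞ + a·exp(−b·Nmax), fitted to the last few calculated points, with the fit covariance giving one rough handle on the extrapolation uncertainty. The energies below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def energy_model(nmax, e_inf, a, b):
    """Assumed extrapolation ansatz E(Nmax) = E_inf + a*exp(-b*Nmax)."""
    return e_inf + a * np.exp(-b * nmax)

# Invented ground-state energies (MeV) versus Nmax, for illustration only
nmax = np.array([6, 8, 10, 12, 14], dtype=float)
energies = np.array([-30.2, -31.1, -31.6, -31.9, -32.05])

popt, pcov = curve_fit(energy_model, nmax, energies, p0=(-32.3, 5.0, 0.3))
e_inf = popt[0]
e_inf_err = np.sqrt(pcov[0, 0])   # one rough measure of the extrapolation uncertainty
print(f"E(Nmax -> infinity) = {e_inf:.2f} +/- {e_inf_err:.2f} MeV")
```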

  17. Simple extrapolation method to predict the electronic structure of conjugated polymers from calculations on oligomers

    DOE PAGES

    Larsen, Ross E.

    2016-04-12

    In this study, we introduce two simple tight-binding models, which we call fragment frontier orbital extrapolations (FFOE), to extrapolate important electronic properties to the polymer limit using electronic structure calculations on only a few small oligomers. In particular, we demonstrate by comparison to explicit density functional theory calculations that for long oligomers the energies of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and of the first electronic excited state are accurately described as a function of number of repeat units by a simple effective Hamiltonian parameterized from electronic structure calculations on monomers, dimers and, optionally, tetramers. For the alternating copolymer materials that currently comprise some of the most efficient polymer organic photovoltaic devices one can use these simple but rigorous models to extrapolate computed properties to the polymer limit based on calculations on a small number of low-molecular-weight oligomers.

  18. Mappability and read length

    PubMed Central

    Li, Wentian; Freudenberg, Jan

    2014-01-01

    Power-law distributions are the main functional form for the distribution of repeat size and repeat copy number in the human genome. When the genome is broken into fragments for sequencing, the limited size of fragments and reads may prevent a unique alignment of repeat sequences to the reference sequence. Repeats in the human genome can be as long as 10^4 bases, or 10^5-10^6 bases when allowing for mismatches between repeat units. Sequence reads from these regions are therefore unmappable when the read length is in the range of 10^3 bases. With a read length of 1000 bases, slightly more than 1% of the assembled genome, and slightly less than 1% of the 1 kb reads, are unmappable, excluding the unassembled portion of the human genome (8% in GRCh37/hg19). The slow decay (long tail) of the power-law function implies a diminishing return in converting unmappable regions/reads to become mappable with the increase of the read length, with the understanding that increasing read length will always move toward the direction of 100% mappability. PMID:25426137

  19. Extra- and intracellular volume monitoring by impedance during haemodialysis using Cole-Cole extrapolation.

    PubMed

    Jaffrin, M Y; Maasrani, M; Le Gourrier, A; Boudailliez, B

    1997-05-01

    A method is presented for monitoring the relative variation of extracellular and intracellular fluid volumes using a multifrequency impedance meter and the Cole-Cole extrapolation technique. It is found that this extrapolation is necessary to obtain reliable data for the resistance of the intracellular fluid. The extracellular and intracellular resistances can be approached using frequencies of, respectively, 5 kHz and 1000 kHz, but the use of 100 kHz leads to unacceptable errors. In the conventional treatment the overall relative variation of intracellular resistance is found to be relatively small.

  20. Extrapolation method in the Monte Carlo Shell Model and its applications

    SciTech Connect

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-05-06

    We demonstrate how the energy-variance extrapolation method works using the sequence of the approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni with pf-shell as an example. The extrapolation method is shown to work well even in the case that the MCSM shows slow convergence, such as 72Ge with f5pg9-shell. The structure of 72Se is also studied including the discussion of the shape-coexistence phenomenon.
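    In an energy-variance extrapolation of this kind, the approximate energies are plotted against the energy variance ⟨H²⟩ − ⟨H⟩² of the corresponding wave functions and extrapolated to zero variance, typically with a low-order polynomial; a minimal sketch with invented numbers:

```python
import numpy as np

# Invented (energy variance, energy) pairs from a sequence of improving
# approximate wave functions; units are arbitrary (e.g. MeV^2, MeV).
variance = np.array([2.0, 1.2, 0.7, 0.4, 0.2])
energy   = np.array([-201.5, -202.4, -202.95, -203.28, -203.50])

# Fit the energy as a quadratic in the variance and take the value at zero variance
coeffs = np.polyfit(variance, energy, 2)
e_extrapolated = np.polyval(coeffs, 0.0)
print(f"energy extrapolated to zero variance: {e_extrapolated:.2f}")
```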

  1. Vowel length in Farsi

    NASA Astrophysics Data System (ADS)

    Shademan, Shabnam

    2001-05-01

    This study tests whether Farsi vowels are contrastive with respect to length. Farsi has a six-vowel system with three lax vowels and three tense vowels. Both traditional grammarians and modern linguists believe that Farsi tense vowels are longer than lax vowels, and that there are no vowel pairs that contrast only in length. However, it has been suggested that Farsi exhibits compensatory lengthening, which is triggered by the deletion of glottal consonants in coda position in informal speech (Darzi, 1991). As a result, minimal pairs such as [tar] and [tarh] should contrast only with respect to vowel length. A corpus of 90 words of the form CVC, CVCG, CVGC, and CVCC (where V=a vowel and G=a glottal consonant) was recorded, and durations of vowels in different contexts were measured and compared. Preliminary results show that lax vowel durations fall into three groups with CVCC longer than CVCG/CVGC, and the latter longer than CVC. It remains to be seen whether CVCG/CVGC words show compensatory lengthening when the glottal consonant is deleted.

  2. The K+ K+ scattering length from Lattice QCD

    SciTech Connect

    Silas Beane; Thomas Luu; Konstantinos Orginos; Assumpta Parreno; Martin Savage; Aaron Torok; Andre Walker-Loud

    2007-09-11

    The K+K+ scattering length is calculated in fully-dynamical lattice QCD with domain-wall valence quarks on the MILC asqtad-improved gauge configurations with fourth-rooted staggered sea quarks. Three-flavor mixed-action chiral perturbation theory at next-to-leading order, which includes the leading effects of the finite lattice spacing, is used to extrapolate the results of the lattice calculation to the physical value of mK+/fK+. We find mK+ aK+K+ = -0.352 ± 0.016, where the statistical and systematic errors have been combined in quadrature.

  3. Improvement of the Quality of Reconstructed Holographic Images by Extrapolation of Digital Holograms

    NASA Astrophysics Data System (ADS)

    Dyomin, V. V.; Olshukov, A. S.

    2016-02-01

    The work is devoted to investigation of noise in reconstructed holographic images in the form of a system of fringes parallel to the hologram frame boundaries. Mathematical and physical interpretation is proposed together with an algorithm for reduction of this effect by extrapolation of digital holograms using bicubic splines. The efficiency of the algorithm is estimated and examples of its application are presented.

  4. EVALUATION OF MINIMUM DATA REQUIREMENTS FOR ACUTE TOXICITY VALUE EXTRAPOLATION WITH AQUATIC ORGANISMS

    EPA Science Inventory

    Buckler, Denny R., Foster L. Mayer, Mark R. Ellersieck and Amha Asfaw. 2003. Evaluation of Minimum Data Requirements for Acute Toxicity Value Extrapolation with Aquatic Organisms. EPA/600/R-03/104. U.S. Environmental Protection Agency, National Health and Environmental Effects Re...

  5. A NEW CODE FOR NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF THE GLOBAL CORONA

    SciTech Connect

    Jiang Chaowei; Feng Xueshang; Xiang Changqing

    2012-08-10

    Reliable measurements of the solar magnetic field are still restricted to the photosphere, and our present knowledge of the three-dimensional coronal magnetic field is largely based on extrapolations from photospheric magnetograms using physical models, e.g., the nonlinear force-free field (NLFFF) model that is usually adopted. Most of the currently available NLFFF codes have been developed with a computational volume such as a Cartesian box or a spherical wedge, while a global full-sphere extrapolation is still under development. A high-performance global extrapolation code is in particular urgently needed considering that the Solar Dynamics Observatory can provide a full-disk magnetogram with resolution up to 4096 × 4096. In this work, we present a new parallelized code for global NLFFF extrapolation with the photosphere magnetogram as input. The method is based on the magnetohydrodynamics relaxation approach, the CESE-MHD numerical scheme, and a Yin-Yang spherical grid that is used to overcome the polar problems of the standard spherical grid. The code is validated by two full-sphere force-free solutions from Low and Lou's semi-analytic force-free field model. The code shows high accuracy and fast convergence, and can be ready for future practical application if combined with an adaptive mesh refinement technique.

  6. Route-to-route extrapolation of the toxic potency of MTBE.

    PubMed

    Dourson, M L; Felter, S P

    1997-12-01

    MTBE is a volatile organic compound used as an oxygenating agent in gasoline. Inhalation from fumes while refueling automobiles is the principal route of exposure for humans, and toxicity by this route has been well studied. Oral exposures to MTBE exist as well, primarily due to groundwater contamination from leaking stationary sources, such as underground storage tanks. Assessing the potential public health impacts of oral exposures to MTBE is problematic because drinking water studies do not exist for MTBE, and the few oil-gavage studies from which a risk assessment could be derived are limited. This paper evaluates the suitability of the MTBE database for conducting an inhalation route-to-oral route extrapolation of toxicity. This includes evaluating the similarity of critical effect between these two routes, quantifiable differences in absorption, distribution, metabolism, and excretion, and sufficiency of toxicity data by the inhalation route. We conclude that such an extrapolation is appropriate and have validated the extrapolation by finding comparable toxicity between a subchronic gavage oral bioassay and oral doses we extrapolate from a subchronic inhalation bioassay. Our results are extended to the 2-year inhalation toxicity study by Chun et al. (1992) in which rats were exposed to 0, 400, 3000, or 8000 ppm MTBE for 6 hr/d, 5 d/wk. We have estimated the equivalent oral doses to be 0, 130, 940, or 2700 mg/kg/d. These equivalent doses may be useful in conducting noncancer and cancer risk assessments.

  7. Daily evapotranspiration estimates from extrapolating instantaneous airborne remote sensing ET values

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, six extrapolation methods have been compared for their ability to estimate daily crop evapotranspiration (ETd) from instantaneous latent heat flux estimates derived from digital airborne multispectral remote sensing imagery. Data used in this study were collected during an experiment...

  8. Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)

    1998-01-01

    The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
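    One of the simpler procedures of the kind compared here is a three-parameter exponential fit of total energies against the cardinal number n of the correlation-consistent basis sets; the sketch below uses invented energies, and its exp(−αn) convention is not necessarily the paper's variable-alpha form.

```python
import numpy as np
from scipy.optimize import curve_fit

def cbs_exponential(n, e_cbs, a, alpha):
    """Assumed extrapolation form E(n) = E_CBS + A*exp(-alpha*n) for cc-pVnZ energies."""
    return e_cbs + a * np.exp(-alpha * n)

# Invented correlation energies (hartree) for n = 2 (DZ) through 5 (5Z)
n = np.array([2.0, 3.0, 4.0, 5.0])
e = np.array([-0.6224, -0.6510, -0.6624, -0.6670])

popt, _ = curve_fit(cbs_exponential, n, e, p0=(-0.670, 0.3, 1.0))
print(f"estimated complete-basis-set limit: {popt[0]:.4f} hartree")
# Note: the paper's reliability diagnostic (alpha roughly between 3 and 4.5)
# refers to its own variable-alpha convention, which may differ from this form.
```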

  9. Extrapolating toxic effects on individuals to the population level: the role of dynamic energy budgets.

    PubMed

    Jager, Tjalling; Klok, Chris

    2010-11-12

    The interest of environmental management is in the long-term health of populations and ecosystems. However, toxicity is usually assessed in short-term experiments with individuals. Modelling based on dynamic energy budget (DEB) theory aids the extraction of mechanistic information from the data, which in turn supports educated extrapolation to the population level. To illustrate the use of DEB models in this extrapolation, we analyse a dataset for life cycle toxicity of copper in the earthworm Dendrobaena octaedra. We compare four approaches for the analysis of the toxicity data: no model, a simple DEB model without reserves and maturation (the Kooijman-Metz formulation), a more complex one with static reserves and simplified maturation (as used in the DEBtox software) and a full-scale DEB model (DEB3) with explicit calculation of reserves and maturation. For the population prediction, we compare two simple demographic approaches (discrete time matrix model and continuous time Euler-Lotka equation). In our case, the difference between DEB approaches and population models turned out to be small. However, differences between DEB models increased when extrapolating to more field-relevant conditions. The DEB3 model allows for a completely consistent assessment of toxic effects and therefore greater confidence in extrapolating, but poses greater demands on the available data. PMID:20921051
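    The continuous-time Euler-Lotka step mentioned above solves 1 = Σ e^(−rx) l(x) m(x) for the population growth rate r, so that toxicant effects on survival l(x) and reproduction m(x) propagate to a population-level endpoint; a discretized sketch with invented life-history data:

```python
import numpy as np
from scipy.optimize import brentq

# Invented age classes (weeks), survivorship l(x) and fecundity m(x)
age = np.array([1, 2, 3, 4, 5, 6], dtype=float)
l   = np.array([1.00, 0.90, 0.75, 0.60, 0.40, 0.20])
m   = np.array([0.00, 0.00, 2.00, 3.00, 3.00, 2.00])

def euler_lotka(r):
    """Discretized Euler-Lotka equation: sum over ages of exp(-r*x)*l(x)*m(x) - 1."""
    return np.sum(np.exp(-r * age) * l * m) - 1.0

# The intrinsic rate of increase r is the root; toxicant-induced changes in
# l and m (estimated from the DEB analysis) would shift this value.
r = brentq(euler_lotka, -1.0, 2.0)
print(f"intrinsic rate of increase r = {r:.3f} per week")
```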

  10. EXTRAPOLATION IN HUMAN HEALTH AND ECOLOGICAL RISK ASSESSMENTS: PROCEEDINGS OF A SYMPOSIUM

    EPA Science Inventory

    A symposium was conducted in April 1998 by the U.S. Environmental Protection Agency's National Health and Environmental Effects Research Laboratory (NHEERL) to explore issues of extrapolation in human health and ecological risk assessments. Over the course of three and one half d...

  11. Establishing macroecological trait datasets: digitalization, extrapolation, and validation of diet preferences in terrestrial mammals worldwide

    PubMed Central

    Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian

    2014-01-01

    Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species’ evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill-in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external

  12. Specific surface area determinations on intact drillcores and evaluation of extrapolation methods for rock matrix surfaces.

    PubMed

    André, M; Malmström, M E; Neretnieks, I

    2009-11-01

    Permanent storage of spent nuclear fuel in crystalline bedrock is investigated in several countries. For this storage scenario, the host rock is the third and final barrier for radionuclide migration. Sorption reactions in the crystalline rock matrix have strong retardative effects on the transport of radionuclides. To assess the barrier properties of the host rock it is important to have sorption data representative of the undisturbed host rock conditions. Sorption data is in the majority of reported cases determined using crushed rock. Crushing has been shown to increase a rock sample's sorption capacity by creating additional surfaces. There are several problems with such an extrapolation. In studies where this problem is addressed, simple models relating the specific surface area to the particle size are used to extrapolate experimental data to a value representative of the host rock conditions. In this article, we report and compare surface area data of five size fractions of crushed granite and of 100 mm long drillcores as determined by the Brunauer-Emmett-Teller (BET) method using N(2)-gas. Special sample holders that could hold large specimens were developed for the BET measurements. Surface area data on rock samples as large as the drillcores have not previously been published. An analysis of these data shows that the extrapolated value for intact rock obtained from measurements on crushed material was larger than the determined specific surface area of the drillcores, in some cases by more than 1000%. Our results show that the use of data from crushed material and current models to extrapolate specific surface areas for host rock conditions can lead to overestimation of sorption capacity. The shortcomings of the extrapolation model are discussed and possible explanations for the deviation from experimental data are proposed.

  13. To scale or not to scale: the principles of dose extrapolation

    PubMed Central

    Sharma, Vijay; McNeill, John H

    2009-01-01

    The principles of inter-species dose extrapolation are poorly understood and applied. We provide an overview of the principles underlying dose scaling for size and dose adjustment for size-independent differences. Scaling of a dose is required in three main situations: the anticipation of first-in-human doses for clinical trials, dose extrapolation in veterinary practice and dose extrapolation for experimental purposes. Each of these situations is discussed. Allometric scaling of drug doses is commonly used for practical reasons, but can be more accurate when one takes into account species differences in pharmacokinetic parameters (clearance, volume of distribution). Simple scaling of drug doses can be misleading for some drugs; correction for protein binding, physicochemical properties of the drug or species differences in physiological time can improve scaling. However, differences in drug transport and metabolism, and in the dose–response relationship, can override the effect of size alone. For this reason, a range of modelling approaches have been developed, which combine in silico simulations with data obtained in vitro and/or in vivo. Drugs that are unlikely to be amenable to simple allometric scaling of their clearance or dose include drugs that are highly protein-bound, drugs that undergo extensive metabolism and active transport, drugs that undergo significant biliary excretion (MW > 500, amphiphilic, conjugated), drugs whose targets are subject to inter-species differences in expression, affinity and distribution and drugs that undergo extensive renal secretion. In addition to inter-species dose extrapolation, we provide an overview of dose extrapolation within species, discussing drug dosing in paediatrics and in the elderly. PMID:19508398
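    A minimal sketch of the simple, size-only allometric scaling that the review warns can be misleading: total dose is assumed to scale with body weight to a fixed power (3/4 here), so the per-kilogram dose shrinks as body size grows. The species weights, the dose, and the exponent are illustrative only.

```python
def allometric_dose_per_kg(dose_per_kg_ref, bw_ref_kg, bw_target_kg, exponent=0.75):
    """Scale a per-kg dose from a reference species to a target species assuming
    total dose scales as body weight**exponent (3/4 by default)."""
    total_ref = dose_per_kg_ref * bw_ref_kg
    total_target = total_ref * (bw_target_kg / bw_ref_kg) ** exponent
    return total_target / bw_target_kg

if __name__ == "__main__":
    # e.g. 50 mg/kg in a 0.25 kg rat scaled to a 70 kg human (illustrative numbers)
    print(f"{allometric_dose_per_kg(50.0, 0.25, 70.0):.1f} mg/kg")
```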

  14. Length of stain dosimeter

    NASA Technical Reports Server (NTRS)

    Lueck, Dale E. (Inventor)

    1994-01-01

    Payload customers for the Space Shuttle have recently expressed concerns about the possibility of their payloads at an adjacent pad being contaminated by plume effluents from a shuttle at an active pad as they await launch on an inactive pad. As part of a study to satisfy such concerns a ring of inexpensive dosimeters was deployed around the active pad at the inter-pad distance. However, following a launch, dosimeters cannot be read for several hours after the exposure. As a consequence factors such as different substrates, solvent systems, and possible volatilization of HCl from the badges were studied. This observation led to the length of stain (LOS) dosimeters of this invention. Commercial passive LOS dosimeters are sensitive only to the extent of being capable of sensing 2 ppm to 20 ppm if the exposure is 8 hours. To map and quantitate the HCl generated by Shuttle launches, and in the atmosphere within a radius of 1.5 miles from the active pad, a sensitivity of 2 ppm HCl in the atmospheric gases on an exposure of 5 minutes is required. A passive length of stain dosimeter has been developed having a sensitivity rendering it capable of detecting a gas in a concentration as low as 2 ppm on an exposure of five minutes.

  15. A one-term extrapolation method for estimating equilibrium constants of aqueous reactions at elevated temperatures

    NASA Astrophysics Data System (ADS)

    Gu, Y.; Gammons, C. H.; Bloom, M. S.

    1994-09-01

    A one-term method for extrapolating equilibrium constants for aqueous reactions is proposed which is based on the observation that the change in free energy of a well-balanced isocoulombic reaction is nearly independent of temperature. The current practice in extrapolating log K values for isocoulombic reactions is to omit the ΔCp term but include a ΔS term (i.e., the two-term extrapolation equation of LINDSAY, 1980). However, we observe that the ΔCp and ΔS terms for many isocoulombic reactions are not only small, but are often opposite in sign, and therefore tend to cancel one another. Thus, inclusion of an entropy term often yields estimates which are less accurate than omission of both terms. The one-term extrapolation technique is tested with literature data for a large number of isocoulombic reactions involving ion-ligand exchange, cation hydrolysis, acid-base neutralization, redox, and selected reactions involving solids. In most cases the extrapolated values are in excellent agreement with the experimental measurements, especially at higher temperatures where they are often more accurate than those obtained using the two-term equation of LINDSAY (1980). The results are also comparable to estimates obtained using the modified HKF model of TANGER and HELGESON (1988) and the density model of ANDERSON et al. (1991). It is also found to produce reasonable estimates for isocoulombic reactions at elevated pressure (up to P = 2 kb) and ionic strength (up to I = 1.0). The principal advantage of the one-term method is that accurate estimates of high temperature equilibrium constants may be obtained using only free energy data for the reaction of interest at one reference temperature. The principal disadvantage is that the accuracies of the estimates are somewhat dependent on the model reaction selected to balance the isocoulombic reaction. Satisfactory results are obtained for reactions that have minimal energetic, electrostatic, structural, and volumetric
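    The one-term extrapolation follows directly from the stated assumption that ΔG of the balanced isocoulombic reaction is independent of temperature: since ΔG = −RT ln K, it gives log K(T2) = (T1/T2) log K(T1). A minimal sketch (the example values are invented):

```python
def one_term_log_k(log_k_ref, t_ref_k, t_target_k):
    """One-term isocoulombic extrapolation: assume delta-G of the balanced
    reaction is independent of T, so log K(T2) = (T1/T2) * log K(T1)."""
    return log_k_ref * (t_ref_k / t_target_k)

if __name__ == "__main__":
    # Illustrative only: log K = -3.0 at 25 degrees C extrapolated to 300 degrees C
    print(f"log K(300 C) ~ {one_term_log_k(-3.0, 298.15, 573.15):.2f}")
```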

  16. Improving In Vitro to In Vivo Extrapolation by Incorporating Toxicokinetic Measurements: A Case Study of Lindane-Induced Neurotoxicity

    EPA Science Inventory

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...

  17. Extrapolation methods for obtaining low-lying eigenvalues of a large-dimensional shell model Hamiltonian matrix

    SciTech Connect

    Yoshinaga, N.; Arima, A.

    2010-04-15

    We propose some new, efficient, and practical extrapolation methods to obtain a few low-lying eigenenergies of a large-dimensional Hamiltonian matrix in the nuclear shell model. We obtain those energies at the desired accuracy by extrapolation after diagonalizing small-dimensional submatrices of the sorted Hamiltonian matrix.

  18. New method of extrapolation of the resistance of a model planing boat to full size

    NASA Technical Reports Server (NTRS)

    Sottorf, W

    1942-01-01

    The previously employed method of extrapolating the total resistance to full size with λ^3 (λ = model scale), and thereby foregoing a separate appraisal of the frictional resistance, was permissible for large models and floats of normal size. But faced with the ever-increasing size of aircraft, a reexamination of the problem of extrapolation to full size is called for. A method is described by means of which, on the basis of an analysis of tests on planing surfaces, the variation of the wetted surface over the take-off range is analytically obtained. The friction coefficients are read from Prandtl's curve for turbulent boundary layer with laminar approach. With these two values a correction for friction is obtainable.

  19. The global extrapolation of numerical methods for computing concentration profiles in percutaneous drug absorption.

    PubMed

    Twizell, E H

    1989-01-01

    A family of numerical methods is developed and analyzed for the numerical solution of the parabolic partial differential equation together with the associated initial and boundary conditions, which arise in a mathematical model of the transient stage of percutaneous drug absorption. Two global extrapolation procedures are described, the first in time only, the second in both space and time, for improving the accuracy of the computed concentration profiles. The behaviours of two members of the family of methods, before and after extrapolation, are examined by repeating a number of experiments reported in the literature. Modifications to the algorithms, which are necessary in computing concentration profiles after the ointment is removed at the steady state, are outlined.
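    The global extrapolation in time is of Richardson type: solutions computed with step k and k/2 are combined so that the leading error term cancels. A hedged sketch for a first-order-in-time scheme, using a simple decay equation as a stand-in for the percutaneous-absorption model:

```python
import numpy as np

def euler_solution(rate, t_end, n_steps, c0=1.0):
    """Explicit Euler solution of dc/dt = -rate*c, first-order accurate in time."""
    dt = t_end / n_steps
    c = c0
    for _ in range(n_steps):
        c -= dt * rate * c
    return c

if __name__ == "__main__":
    rate, t_end = 1.5, 1.0
    coarse = euler_solution(rate, t_end, 20)    # time step k
    fine   = euler_solution(rate, t_end, 40)    # time step k/2
    # Global Richardson extrapolation for a first-order method (p = 1):
    extrapolated = 2.0 * fine - coarse
    exact = np.exp(-rate * t_end)
    print(f"coarse error {abs(coarse - exact):.2e}, "
          f"extrapolated error {abs(extrapolated - exact):.2e}")
```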

  20. Electric form factors of the octet baryons from lattice QCD and chiral extrapolation

    NASA Astrophysics Data System (ADS)

    Shanahan, P. E.; Horsley, R.; Nakamura, Y.; Pleiter, D.; Rakow, P. E. L.; Schierholz, G.; Stüben, H.; Thomas, A. W.; Young, R. D.; Zanotti, J. M.; Cssm; Qcdsf/Ukqcd Collaborations

    2014-08-01

    We apply a formalism inspired by heavy-baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q^2 in the range 0.2-1.3 GeV^2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small relative to the uncertainties of the calculation. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μ_p G_E^p/G_M^p. This quantity decreases with Q^2 in a way qualitatively consistent with recent experimental results.

  1. Understanding the biosimilar approval and extrapolation process-A case study of an epoetin biosimilar.

    PubMed

    Agarwal, Amit B; McBride, Ali

    2016-08-01

    The World Health Organization defines a biosimilar as "a biotherapeutic product which is similar in terms of quality, safety and efficacy to an already licensed reference biotherapeutic product." Biosimilars are biologic medical products that are very distinct from small-molecule generics, as their active substance is a biological agent derived from a living organism. Approval processes are highly regulated, with guidance issued by the European Medicines Agency and US Food and Drug Administration. Approval requires a comparability exercise consisting of extensive analytical and preclinical in vitro and in vivo studies, and confirmatory clinical studies. Extrapolation of biosimilars from their original indication to another is a feasible but highly stringent process reliant on rigorous scientific justification. This review focuses on the processes involved in gaining biosimilar approval and extrapolation and details the comparability exercise undertaken in the European Union between originator erythropoietin-stimulating agent, Eprex(®), and biosimilar, Retacrit™. PMID:27317353

  2. New allometric scaling relationships and applications for dose and toxicity extrapolation.

    PubMed

    Cao, Qiming; Yu, Jimmy; Connell, Des

    2014-01-01

    Allometric scaling between metabolic rate, size, body temperature, and other biological traits has found broad applications in ecology, physiology, and particularly in toxicology and pharmacology. Basal metabolic rate (BMR) was observed to scale with body size and temperature. However, it has been increasingly debated whether the mass scaling exponent should be 2/3, 3/4, or neither, and scaling with body temperature has also attracted recent attention. Based on thermodynamic principles, this work reports two new scaling relationships between BMR, size, temperature, and biological time. Good correlations were found with the new scaling relationships, and no universal scaling exponent can be obtained. The new scaling relationships were successfully validated with external toxicological and pharmacological studies. Results also demonstrated that individual extrapolation models can be built to obtain a scaling exponent specific to the group of interest, which can be practically applied for dose and toxicity extrapolations.

  3. A Spatial Extrapolation Approach to Assess the Impact of Climate Change on Water Resource Systems

    NASA Astrophysics Data System (ADS)

    Pina, J.; Tilmant, A.; Anctil, F.

    2015-12-01

    The typical approach to assess climate change impacts on water resources systems is based on a vertical integration/coupling of models: GCMs are run to project future precipitation and temperature, which are then downscaled and used as inputs to hydrologic models whose outputs are processed by water systems models. From a decision-making point of view, this top-down vertical approach presents some challenges. For example, since the range of uncertainty that can be explored with GCMs is limited, researchers are relying on ensembles to enlarge the spread, making the modeling approach even more demanding in terms of computation time and resources. When a particular water system must be analyzed, the question is whether this computationally intensive vertical approach is necessary in the first place, or whether projections available in neighboring systems could be extrapolated to feed the water system model. This would be equivalent to a horizontal approach. The proposed study addresses this question by comparing the performance of a water resource system under future climate conditions using the vertical and horizontal approaches. The methodology is illustrated with the hydropower system of the Gatineau River Basin in Quebec, Canada. Vertically obtained hydrologic projections available in those river basins are extrapolated and used as inputs to a stochastic multireservoir optimization model. Two different extrapolation techniques are tested. The first one simply relies on the ratios between the drainage areas. The second exploits the covariance structure found in historical flow data throughout the region. The analysis of the simulation results reveals that the annual and weekly energy productions of the system derived from the horizontal approach are statistically equivalent to those obtained with the vertical one, regardless of the extrapolation technique used.
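    The simpler of the two extrapolation techniques tested, the drainage-area ratio, transfers flows from a donor basin in proportion to catchment area; a one-line sketch with invented values:

```python
def area_ratio_transfer(q_donor_m3s, area_donor_km2, area_target_km2):
    """Drainage-area-ratio extrapolation: scale donor-basin flows by the ratio
    of catchment areas (assumes hydrologic similarity between the basins)."""
    return q_donor_m3s * (area_target_km2 / area_donor_km2)

# e.g. 150 m^3/s in a 6500 km^2 donor basin transferred to a 4200 km^2 basin
print(f"{area_ratio_transfer(150.0, 6500.0, 4200.0):.1f} m^3/s")
```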

  4. Short-range stabilizing potential for computing energies and lifetimes of temporary anions with extrapolation methods

    NASA Astrophysics Data System (ADS)

    Sommerfeld, Thomas; Ehara, Masahiro

    2015-01-01

    The energy of a temporary anion can be computed by adding a stabilizing potential to the molecular Hamiltonian, increasing the stabilization until the temporary state is turned into a bound state, and then further increasing the stabilization until enough bound-state energies have been collected so that these can be extrapolated back to vanishing stabilization. The lifetime can be obtained from the same data, but only if the extrapolation is done through analytic continuation of the momentum as a function of the square root of a shifted stabilizing parameter. This method is known as analytic continuation of the coupling constant, and it requires, at least in principle, that the bound-state input data are computed with a short-range stabilizing potential. In the context of molecules and ab initio packages, long-range Coulomb stabilizing potentials are, however, far more convenient and have been used in the past with some success, although the error introduced by the long-range nature of the stabilizing potential has remained unknown. Here, we introduce a soft-Voronoi box potential that can serve as a short-range stabilizing potential. The difference between a Coulomb and the new stabilization is analyzed in detail for a one-dimensional model system as well as for the ²Πu resonance of CO₂⁻, and in both cases the extrapolation results are compared to independently computed resonance parameters, from complex scaling for the model and from complex-absorbing-potential calculations for CO₂⁻. It is important to emphasize that for both the model and for CO₂⁻, all three sets of results have, respectively, been obtained with the same electronic structure method and basis set so that the theoretical description of the continuum can be directly compared. The new soft-Voronoi-box-based extrapolation is then used to study the influence of the size of the diffuse and the valence basis sets on the computed resonance parameters.

  5. Mechanical Component Lifetime Estimation Based on Accelerated Life Testing with Singularity Extrapolation

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Chuckpaiwong, I.; Liang, S. Y.; Seth, B. B.

    2002-07-01

    Life testing under nominal operating conditions of mechanical parts with a high mean time between failures (MTBF) often consumes a significant amount of time and resources, rendering such procedures expensive and impractical. As a result, the technology of accelerated life testing (ALT) has been developed for testing at high stress levels (e.g., temperature, voltage, pressure, corrosive media, load, vibration amplitude, etc.) so that the results can be extrapolated, through a physically reasonable statistical model, to obtain estimates of life at lower, normal stress levels or even at limit stress levels. However, the issue of prediction accuracy associated with extrapolating data outside the range of testing, or even to a singularity level (no stress), has not yet been fully addressed. In this research, an acceleration factor is introduced into an inverse power law model to estimate the life distribution in terms of time and stresses. In addition, a generalized Eyring model is set up for singularity extrapolation to handle limit-stress-level conditions. The procedure for calibrating the associated shape factors based on the maximum likelihood principle is also formulated. The methodology, based on a one-main-step, multiple-step-stress test scheme, is experimentally illustrated with a tapered roller bearing under the stress of environmental corrosion as a case study. The experimental results show that the developed accelerated life test model can effectively evaluate the life probability of a bearing based on accelerated testing data when extrapolating to stress levels within or outside the range of testing.
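
    The core of the inverse power law model referenced above is the relation life = A / stress**n, usually fitted in log-log space and then extrapolated to a lower, nominal stress level. A hedged sketch with synthetic data (not the bearing corrosion measurements from this study) is:

        # Inverse power law fit for accelerated life testing (illustrative data).
        import numpy as np

        stress = np.array([3.0, 4.0, 5.0, 6.0])           # accelerated stress levels (arbitrary units)
        life_hours = np.array([900.0, 380.0, 200.0, 115.0])

        # log(life) = log(A) - n*log(stress): fit a straight line in log-log space.
        slope, log_A = np.polyfit(np.log(stress), np.log(life_hours), 1)
        n = -slope

        def predicted_life(s):
            return np.exp(log_A) / s**n

        print(f"fitted exponent n = {n:.2f}")
        print(f"extrapolated life at nominal stress 1.5: {predicted_life(1.5):.0f} h")

    A full ALT analysis would replace the point fit with a life distribution and maximum-likelihood calibration, as the abstract describes.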

  6. Extrapolation from incomplete data to total or lifetime risks at low doses.

    PubMed Central

    Schneiderman, M A

    1981-01-01

    Both epidemiology and laboratory data can contribute to estimates of risks to humans of exposure to low doses of carcinogens. The sum of all these contributions does not permit us to make these estimates with certainty. In chronic disease epidemiology, in looking for possible excessive cancer risks, we sometimes fail to have an adequately long observation time or to observe a population sufficiently aged for cancers to appear in meaningful numbers. In studies of most human exposures, dose data are often lacking, beyond a vague "yes-no" or "lots, not much, hardly any." Thus, without a knowledge of what dose produced an observed result it becomes logically impossible to know what result some other (presumed) dose might yield. Animal data show some promise of being useful in extrapolating to low doses in man. However, several problems exist: (a) man is not a tailless, two-legged mouse, or featherless chicken--that is, we do not know if man is more or less sensitive than the laboratory animal; (b) the mathematical model used for extrapolation leads to large differences in estimates of response; (c) man is genetically heterogeneous and is usually exposed to many more hazards than is the laboratory animal. Thus, existing data, even from well-done studies, are inadequate if we want to make extrapolations in any detail or to apply to specific subgroups in the population. Any risk estimation we do may have to be stated in terms that point out the wide ranges of the estimates. PMID:7333258

  7. Extrapolation of bidirectional texture functions using texture synthesis guided by photometric normals

    NASA Astrophysics Data System (ADS)

    Steinhausen, Heinz C.; Martín, Rodrigo; den Brok, Dennis; Hullin, Matthias B.; Klein, Reinhard

    2015-03-01

    Numerous applications in computer graphics and beyond benefit from accurate models of the visual appearance of real-world materials. Data-driven models like photographically acquired bidirectional texture functions (BTFs) suffer from limited sample sizes enforced by the common assumption of far-field illumination. Several materials, like leather, structured wallpapers or wood, contain structural elements on scales not captured by typical BTF measurements. We propose a method, extending recent research by Steinhausen et al., to extrapolate BTFs for large-scale material samples from a measured and compressed BTF covering a small fraction of the material sample, guided by a set of constraints. We propose combining color constraints with surface descriptors similar to normal maps as part of the constraints guiding the extrapolation process. This helps narrow down the search space of suitable ABRDFs per texel to a large extent. To acquire surface descriptors for nearly flat materials, we build upon the idea of photometrically estimating normals. Inspired by recent work by Pan and Skala, we obtain images of the sample in four different rotations with an off-the-shelf flatbed scanner and derive surface curvature information from these. Furthermore, we simplify the extrapolation process by using a pixel-based texture synthesis scheme, reaching computational efficiency similar to texture optimization.

  8. A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system

    NASA Astrophysics Data System (ADS)

    Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson

    2016-06-01

    Hydropower reservoirs are artificial water systems and cover only a small proportion of the Earth's continental territory. However, they play an important role in aquatic biogeochemistry and may affect the environment negatively. Since the 1990s, as a result of research on organic matter decay in man-made flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in that early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a sample of sites where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
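
    A minimal sketch of a power-law extrapolation in this spirit: fit emissions against flooded area for the measured reservoirs, then evaluate the fitted law for the unmeasured ones. The (area, emission) pairs below are synthetic placeholders, not Brazilian reservoir data.

        import numpy as np

        area_km2 = np.array([50.0, 120.0, 400.0, 1100.0, 2400.0])      # sampled reservoirs
        emission_t_per_yr = np.array([3.1e3, 8.5e3, 2.6e4, 8.0e4, 1.5e5])

        # Fit E = c * A**k in log-log space.
        k, log_c = np.polyfit(np.log(area_km2), np.log(emission_t_per_yr), 1)
        c = np.exp(log_c)

        def emission(area):
            return c * area**k

        # Extrapolate to reservoirs where no measurements were made.
        unmeasured_areas = np.array([75.0, 900.0, 3200.0])
        print(emission(unmeasured_areas))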

  9. Free magnetic energy and relative magnetic helicity diagnostics for the quality of NLFF field extrapolations

    NASA Astrophysics Data System (ADS)

    Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.

    We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.

  10. Testing magnetofrictional extrapolation with the Titov-Démoulin model of solar active regions

    NASA Astrophysics Data System (ADS)

    Valori, G.; Kliem, B.; Török, T.; Titov, V. S.

    2010-09-01

    We examine the nonlinear magnetofrictional extrapolation scheme using the solar active region model by Titov and Démoulin as a test field. This model consists of an arched, line-tied current channel held in force-free equilibrium by the potential field of a bipolar flux distribution in the bottom boundary. A modified version with a parabolic current density profile is employed here. We find that the equilibrium is reconstructed with very high accuracy in a representative range of parameter space, using only the vector field in the bottom boundary as input. Structural features formed in the interface between the flux rope and the surrounding arcade - “hyperbolic flux tube” and “bald patch separatrix surface” - are reliably reproduced, as are the flux rope twist and the energy and helicity of the configuration. This demonstrates that force-free fields containing these basic structural elements of solar active regions can be obtained by extrapolation. The influence of the chosen initial condition on the accuracy of reconstruction is also addressed, confirming that the initial field that best matches the external potential field of the model quite naturally leads to the best reconstruction. Extrapolating the magnetogram of a Titov-Démoulin equilibrium in the unstable range of parameter space yields a sequence of two opposing evolutionary phases, which clearly indicate the unstable nature of the configuration: a partial buildup of the flux rope with rising free energy is followed by destruction of the rope, losing most of the free energy.

  11. Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field

    NASA Astrophysics Data System (ADS)

    Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.

    2011-03-01

    In this paper, two semi-analytical solutions for force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the course of the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and likewise with the improved AVI method, are greater than 90% up to a height of 10 grid points above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, these two methods can give reliable results for heights up to about 15% of the extent of the lower boundary.

  12. Extrapolating a hierarchy of building block systems towards future neural network organisms.

    PubMed

    Jagers op Akkerhuis, G

    2001-01-01

    Is it possible to predict future life forms? In this paper it is argued that the answer to this question may well be positive. As a basis for predictions, a rationale is used that is derived from historical data, e.g. from a hierarchical classification that ranks all building-block systems that have evolved so far. This classification is based on specific emergent properties that allow stepwise transitions from low-level building blocks to higher-level ones. This paper shows how this hierarchy can be used for predicting future life forms. The extrapolations suggest several future neural network organisms. Major aspects of the structures of these organisms are predicted. The results can be considered of fundamental importance for several reasons. Firstly, assuming that the operator hierarchy is a proper basis for predictions, the result yields insight into the structure of future organisms. Secondly, the predictions are not extrapolations of presently observed trends, but are fully integrated with all historical system transitions in evolution. Thirdly, the extrapolations suggest the structures of intelligences that, one day, will possess more powerful brains than human beings. This study ends with a discussion of possibilities for falsifying the present theory, the implications of the present predictions in relation to recent developments in artificial intelligence, and the philosophical implications of the role of humanity in evolution with regard to the creation of future neural network organisms.

  13. Comparison of methods for the detection and extrapolation of trends in groundwater quality.

    PubMed

    Visser, Ate; Dubus, Igor; Broers, Hans Peter; Brouyère, Serge; Korcz, Marek; Orban, Philippe; Goderniaux, Pascal; Batlle-Aguilar, Jordi; Surdyk, Nicolas; Amraoui, Nadia; Job, Hélène; Pinault, Jean Louis; Bierkens, Marc

    2009-11-01

    Land use changes and the intensification of agriculture since the 1950s have resulted in a deterioration of groundwater quality in many European countries. For the protection of groundwater quality, it is necessary to (1) assess the current groundwater quality status, (2) detect changes or trends in groundwater quality, (3) assess the threat of deterioration and (4) predict future changes in groundwater quality. A variety of approaches and tools can be used to detect and extrapolate trends in groundwater quality, ranging from simple linear statistics to distributed 3D groundwater contaminant transport models. In this paper we report on a comparison of four methods for the detection and extrapolation of trends in groundwater quality: (1) statistical methods, (2) groundwater dating, (3) transfer functions, and (4) deterministic modeling. Our work shows that the selection of the method should firstly be made on the basis of the specific goals of the study (only trend detection or also extrapolation), the system under study, and the available resources. For trend detection in groundwater quality in relation to diffuse agricultural contamination, a very important aspect is whether the nature of the monitoring network and groundwater body allows the collection of samples with a distinct age or produces samples with a mixture of young and old groundwater. We conclude that there is no single optimal method to detect trends in groundwater quality across widely differing catchments.

  14. Estimation of the extrapolation error in the calibration of type S thermocouples

    NASA Astrophysics Data System (ADS)

    Giorgio, P.; Garrity, K. M.; Rebagliati, M. Jiménez; García Skabar, J.

    2013-09-01

    Measurement results from the calibration of ten new type S thermocouples performed at NIST have been analyzed to estimate the extrapolation error. The thermocouples were calibrated at the fixed points of Zn, Al, Ag and Au, and calibration curves were calculated using different numbers of fixed points. It was found for these thermocouples that the absolute value of the extrapolation error, evaluated by measurement at the Au freezing-point temperature, is at most 0.10 °C when the fixed points of Zn, Al and Ag are used to calculate the calibration curve, and at most 0.27 °C when only the fixed points of Zn and Al are used. It is also shown that the absolute value of the extrapolation error, evaluated by measurement at the Ag freezing-point temperature, is at most 0.25 °C when the fixed points of Zn and Al are used to calculate the calibration curve. This study is intended to help laboratories that lack a direct means of achieving a high-temperature calibration. It supports, up to 1064 °C, the application of a procedure similar to that used by Burns and Scroger in NIST SP-250-35 for calibrating a new type S thermocouple. The uncertainty amounts to a few tenths of a degree Celsius.
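
    The extrapolation-error check described above can be sketched as: fit a calibration curve through a subset of fixed points, extrapolate it to a higher fixed point, and express the emf discrepancy there as a temperature error. The emf values below are hypothetical; only the ITS-90 fixed-point temperatures are standard values.

        import numpy as np

        T_fp  = {"Zn": 419.527, "Al": 660.323, "Ag": 961.78, "Au": 1064.18}   # deg C (ITS-90)
        emf_measured_mV = {"Zn": 3.447, "Al": 5.856, "Ag": 9.148, "Au": 10.336}  # hypothetical

        # Calibration curve from Zn, Al, Ag only (quadratic in temperature).
        pts = ["Zn", "Al", "Ag"]
        coeffs = np.polyfit([T_fp[p] for p in pts], [emf_measured_mV[p] for p in pts], 2)

        emf_extrapolated_Au = np.polyval(coeffs, T_fp["Au"])
        seebeck_mV_per_C = np.polyval(np.polyder(coeffs), T_fp["Au"])   # local sensitivity

        # Express the emf discrepancy at the Au point as a temperature error.
        error_C = (emf_extrapolated_Au - emf_measured_mV["Au"]) / seebeck_mV_per_C
        print(f"extrapolation error at the Au point: {error_C:.3f} deg C")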

  15. A technique to improve the accuracy of Earth orientation prediction algorithms based on least squares extrapolation

    NASA Astrophysics Data System (ADS)

    Guo, J. Y.; Li, Y. B.; Dai, C. L.; Shum, C. K.

    2013-10-01

    We present a technique to improve the least squares (LS) extrapolation of Earth orientation parameters (EOPs), consisting of fixing the last observed data point on the LS extrapolation curve, which customarily includes a polynomial and a few sinusoids. For polar motion (PM), a more sophisticated two-step approach has been developed, which consists of estimating the amplitude of the more stable of the annual (AW) and Chandler (CW) wobbles using data of longer time span, and then estimating the other parameters using a shorter time span. The technique is studied using hindcast experiments and justified using year-by-year statistics over 8 years. In order to compare with the official predictions of the International Earth Rotation and Reference Systems Service (IERS) performed at the U.S. Naval Observatory (USNO), we have supplemented the short-term predictions by applying the ARIMA method to the residuals computed by subtracting the LS extrapolation curve from the observation data. As at USNO, we have also used the atmospheric excitation function (AEF) to further improve predictions of UT1-UTC. As a result, our short-term predictions are comparable to the USNO predictions, and our long-term predictions are marginally better, although not for every year. In addition, we have tested the use of the AEF and the oceanic excitation function (OEF) in PM prediction. We find that use of forecasts of the AEF alone does not lead to any apparent improvement or worsening, while use of forecasts of AEF + OEF does lead to an apparent improvement.
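
    A minimal sketch of the basic idea, fitting a linear trend plus annual and Chandler sinusoids to a polar-motion-like series and then shifting the extrapolation so it passes through the last observed point. The synthetic series, periods and step sizes are illustrative only, not the IERS data or the authors' exact parameterization.

        import numpy as np

        t = np.arange(0.0, 6.0, 0.05)                        # years of "observations"
        rng = np.random.default_rng(0)
        x_obs = (0.05*t + 0.10*np.sin(2*np.pi*t/1.0)
                 + 0.15*np.sin(2*np.pi*t/1.19) + 0.005*rng.standard_normal(t.size))

        periods = [1.0, 1.19]                                # annual and Chandler periods (years)
        def design(tt):
            cols = [np.ones_like(tt), tt]
            for P in periods:
                cols += [np.cos(2*np.pi*tt/P), np.sin(2*np.pi*tt/P)]
            return np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(design(t), x_obs, rcond=None)

        t_pred = np.arange(t[-1], t[-1] + 1.0, 0.05)         # one year ahead
        x_pred = design(t_pred) @ beta
        x_pred += x_obs[-1] - (design(t[-1:]) @ beta)[0]     # fix the last observed point
        print(x_pred[:5])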

  16. Further improvement of temporal resolution of seismic data by autoregressive (AR) spectral extrapolation

    NASA Astrophysics Data System (ADS)

    Karslı, Hakan

    2006-08-01

    Seismic data still lack sufficient temporal resolution because of the band-limited nature of the available data, even after deconvolution: the low- and high-frequency information is missing and cannot be recovered directly from the data. In this paper, a method originally applied by Honarvar et al. [Honarvar, F., Sheikhzadeh, H., Moles, M., Sinclair, A.N., 2004. Improving the time-resolution and signal-noise ratio of ultrasonic NDE signals. Ultrasonics 41, 755-763], which combines the widely used Wiener deconvolution with AR spectral extrapolation in the frequency domain, is briefly reviewed and applied to seismic data to further improve temporal resolution. The missing frequency information is optimally recovered by forward and backward extrapolation from a high signal-to-noise-ratio (SNR) portion of the deconvolved signal spectrum. The combination of the two methods is first tested on a variety of synthetic examples and then applied to a stacked real trace. The selection of the necessary parameters in the Wiener filtering and in the extrapolation is discussed in detail. An optimum frequency window between the 3 and 10 dB drops is used, chosen by comparing results for these drops, whereas a standard window between the 2.8 and 3.2 dB drops was used in the study of Honarvar et al. The results obtained show that the application of the proposed signal processing technique considerably improves the temporal resolution of seismic data when compared with the original data. Furthermore, the AR-based spectrally extrapolated data can almost be considered a reflectivity sequence of the layered medium. Consequently, the combination of Wiener deconvolution and AR spectral extrapolation can reveal some details of seismic data that cannot be
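
    A hedged sketch of the AR extrapolation step: autoregressive coefficients are fitted by least squares to the reliable band of a (complex) spectrum and then used to predict the missing higher-frequency samples. The spectrum, band limits and AR order here are synthetic choices, not the parameters of the cited study; a production implementation would typically use the Burg method and add backward extrapolation.

        import numpy as np

        def fit_ar_coeffs(x, order):
            """Least-squares fit of x[n] = sum_k a[k] * x[n-1-k]."""
            rows = [x[i - order:i][::-1] for i in range(order, len(x))]
            A = np.array(rows)
            b = x[order:]
            a, *_ = np.linalg.lstsq(A, b, rcond=None)
            return a

        def ar_extrapolate(x, order, n_extra):
            """Extend the sequence x by n_extra forward-predicted samples."""
            a = fit_ar_coeffs(x, order)
            x = list(x)
            for _ in range(n_extra):
                x.append(np.dot(a, x[-order:][::-1]))
            return np.array(x)

        # Reliable band of a spectrum (complex samples), extrapolated to higher bins.
        freqs = np.arange(64)
        band = np.exp(2j*np.pi*0.07*freqs) + 0.5*np.exp(2j*np.pi*0.11*freqs)
        extended = ar_extrapolate(band, order=8, n_extra=32)
        print(extended.shape)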

  17. Motion-based prediction explains the role of tracking in motion extrapolation.

    PubMed

    Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U

    2013-11-01

    During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, where motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated.

  18. SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy

    SciTech Connect

    Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J

    2014-06-01

    Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small-FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all registrations was visually verified. Results: Results were collected based on the averages from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.

  19. Codes with Monotonic Codeword Lengths.

    ERIC Educational Resources Information Center

    Abrahams, Julia

    1994-01-01

    Discusses the minimum average codeword length coding under the constraint that the codewords are monotonically nondecreasing in length. Bounds on the average length of an optimal monotonic code are derived, and sufficient conditions are given such that algorithms for optimal alphabetic codes can be used to find the optimal monotonic code. (six…

  20. Neural Extrapolation of Motion for a Ball Rolling Down an Inclined Plane

    PubMed Central

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion. PMID:24940874

  1. Neural extrapolation of motion for a ball rolling down an inclined plane.

    PubMed

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.

  2. Key to Opening Kidney for In Vitro-In Vivo Extrapolation Entrance in Health and Disease: Part II: Mechanistic Models and In Vitro-In Vivo Extrapolation.

    PubMed

    Scotcher, Daniel; Jones, Christopher; Posada, Maria; Galetin, Aleksandra; Rostami-Hodjegan, Amin

    2016-09-01

    It is envisaged that application of mechanistic models will improve prediction of changes in renal disposition due to drug-drug interactions, genetic polymorphism in enzymes and transporters and/or renal impairment. However, developing and validating mechanistic kidney models is challenging due to the number of processes that may occur (filtration, secretion, reabsorption and metabolism) in this complex organ. Prediction of human renal drug disposition from preclinical species may be hampered by species differences in the expression and activity of drug metabolising enzymes and transporters. A proposed solution is bottom-up prediction of pharmacokinetic parameters based on in vitro-in vivo extrapolation (IVIVE), mediated by recent advances in in vitro experimental techniques and application of relevant scaling factors. This review is a follow-up to Part I of the report from the 2015 AAPS Annual Meeting and Exhibition (Orlando, FL; 25th-29th October 2015), which focuses on IVIVE and mechanistic prediction of renal drug disposition. It describes the various mechanistic kidney models that may be used to investigate renal drug disposition. Particular attention is given to efforts that have attempted to incorporate elements of IVIVE. In addition, the use of mechanistic models in prediction of renal drug-drug interactions and the potential for application in determining suitable adjustment of dose in kidney disease are discussed. The need for suitable clinical pharmacokinetics data for the purposes of delineating mechanistic aspects of kidney models in various scenarios is highlighted. PMID:27506526

  3. Radioactive waste produced by DEMO and commercial fusion reactors extrapolated from ITER and advanced data bases

    SciTech Connect

    Stacey, W.M.; Hertel, N.E.; Hoffman, E.A.

    1994-07-01

    The radioactive wastes that would be produced in demonstration (DEMO) and commercial (CFR) fusion reactors which could be extrapolated from the design data base that will be provided by ITER and its supporting R&D and from a design data base supplemented by advanced physics and advanced materials R&D programs are identified and characterized in terms of a number of possible criteria for near-surface burial. The results indicate that there is a possibility that all fusion wastes could satisfy a "low level" waste criterion for "near-surface" burial.

  4. Extracting critical exponents for sequences of numerical data via series extrapolation techniques

    NASA Astrophysics Data System (ADS)

    Cöster, Kris; Schmidt, Kai Phillip

    2016-08-01

    We describe a generic scheme to extract critical exponents of quantum lattice models from sequences of numerical data, which is, for example, relevant for nonperturbative linked-cluster expansions or nonperturbative variants of continuous unitary transformations. The fundamental idea behind our approach is a reformulation of the numerical data sequences as a series expansion in a pseudoparameter. This allows us to utilize standard series expansion extrapolation techniques to extract critical properties such as critical points and critical exponents. The approach is illustrated for the deconfinement transition of the antiferromagnetic spin-1/2 Heisenberg chain.

  5. Extracting critical exponents for sequences of numerical data via series extrapolation techniques.

    PubMed

    Cöster, Kris; Schmidt, Kai Phillip

    2016-08-01

    We describe a generic scheme to extract critical exponents of quantum lattice models from sequences of numerical data, which is, for example, relevant for nonperturbative linked-cluster expansions or nonperturbative variants of continuous unitary transformations. The fundamental idea behind our approach is a reformulation of the numerical data sequences as a series expansion in a pseudoparameter. This allows us to utilize standard series expansion extrapolation techniques to extract critical properties such as critical points and critical exponents. The approach is illustrated for the deconfinement transition of the antiferromagnetic spin-1/2 Heisenberg chain. PMID:27627240

  6. MEGA16 - Computer program for analysis and extrapolation of stress-rupture data

    NASA Technical Reports Server (NTRS)

    Ensign, C. R.

    1981-01-01

    The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN 4 using IBM plotting subroutines and runs on an IBM 370 time-sharing system.

  7. A study of alternative schemes for extrapolation of secular variation at observatories

    USGS Publications Warehouse

    Alldredge, L.R.

    1976-01-01

    The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that alternative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable to worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work.

  8. Challenges for In vitro to in Vivo Extrapolation of Nanomaterial Dosimetry for Human Risk Assessment

    SciTech Connect

    Smith, Jordan N.

    2013-11-01

    The proliferation in types and uses of nanomaterials in consumer products has led to rapid application of conventional in vitro approaches for hazard identification. Unfortunately, assumptions pertaining to experimental design and interpretation for studies with chemicals are not generally appropriate for nanomaterials. The fate of nanomaterials in cell culture media, cellular dose to nanomaterials, cellular dose to nanomaterial byproducts, and intracellular fate of nanomaterials at the target site of toxicity all must be considered in order to accurately extrapolate in vitro results to reliable predictions of human risk.

  9. Extracting critical exponents for sequences of numerical data via series extrapolation techniques.

    PubMed

    Cöster, Kris; Schmidt, Kai Phillip

    2016-08-01

    We describe a generic scheme to extract critical exponents of quantum lattice models from sequences of numerical data, which is, for example, relevant for nonperturbative linked-cluster expansions or nonperturbative variants of continuous unitary transformations. The fundamental idea behind our approach is a reformulation of the numerical data sequences as a series expansion in a pseudoparameter. This allows us to utilize standard series expansion extrapolation techniques to extract critical properties such as critical points and critical exponents. The approach is illustrated for the deconfinement transition of the antiferromagnetic spin-1/2 Heisenberg chain.

  10. RF-sheath heat flux estimates on Tore Supra and JET ICRF antennae. Extrapolation to ITER

    SciTech Connect

    Colas, L.; Portafaix, C.; Goniche, M.; Jacquet, Ph.

    2009-11-26

    RF-sheath induced heat loads are identified from infrared thermography measurements on Tore Supra ITER-like prototype and JET A2 antennae, and are quantified by fitting thermal calculations. Using a simple scaling law assessed experimentally, the estimated heat fluxes are then extrapolated to the ITER ICRF launcher delivering 20 MW RF power for several plasma scenarios. Parallel heat fluxes up to 6.7 MW/m² are expected very locally on ITER antenna front face. The role of edge density on operation is stressed as a trade-off between easy RF coupling and reasonable heat loads. Sources of uncertainty on the results are identified.

  11. R-matrix and Potential Model Extrapolations for NACRE Update and Extension Project

    NASA Astrophysics Data System (ADS)

    Aikawa, Masayuki; Arai, Koji; Katsuma, Masahiko; Takahashi, Kohji; Arnould, Marcel; Utsunomiya, Hiroaki

    2006-07-01

    NACRE, the `nuclear astrophysics compilation of reaction rates', has been widely utilized in stellar evolution and nucleosynthesis studies. Its update and extension programme started within a Konan-Université Libre de Bruxelles (ULB) collaboration. At the present moment, experimental data in refereed journals have been collected, and their theoretical extrapolations are being performed using the R-matrix or potential models. For the 3H(d,n)4He and 2H(p,γ)3He reactions, we present preliminary results that could well reproduce the experimental data.

  12. Model of a realistic InP surface quantum dot extrapolated from atomic force microscopy results.

    PubMed

    Barettin, Daniele; De Angelis, Roberta; Prosposito, Paolo; Auf der Maur, Matthias; Casalboni, Mauro; Pecchia, Alessandro

    2014-05-16

    We report on numerical simulations of a zincblende InP surface quantum dot (QD) on an In₀.₄₈Ga₀.₅₂ buffer. Our model is strictly based on experimental structures, since we extrapolated a three-dimensional dot directly from atomic force microscopy results. Continuum electromechanical, k·p band-structure, and optical calculations are presented for this realistic structure, together with benchmark calculations for a lens-shaped QD with the same radius and height as the extrapolated dot. Interesting similarities and differences appear when comparing the results obtained with the two structures, leading to the conclusion that the use of a more realistic structure can provide significant improvements in the modeling of QDs. In fact, the remarkable splitting of the electron p-like levels of the extrapolated dot seems to show that a realistic experimental structure can reproduce the right symmetry and a correct splitting, usually given only by atomistic calculations, even within the multiband k·p approach. Moreover, the energy levels and the symmetry of the holes are strongly dependent on the shape of the dot. In particular, as far as we know, their wave-function symmetries do not seem to resemble any results previously obtained with simulations of idealized zincblende structures, such as lenses or truncated pyramids. The magnitude of the oscillator strengths is also strongly dependent on the shape of the dot, showing a lower intensity for the extrapolated dot, especially for the transition between the electron and hole ground states, as a result of a significant reduction of the wave-function overlap. We also compare an experimental photoluminescence spectrum measured on a homogeneous sample containing about 60 dots with a numerical ensemble average derived from single-dot calculations. The broader energy range of the numerical spectrum motivated us to perform further verifications, which have clarified some aspects of the experimental

  13. 3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer

    NASA Technical Reports Server (NTRS)

    Lane, John

    2012-01-01

    Determining the Z-R relationship (where Z is the radar reflectivity factor and R is the rainfall rate) from disdrometer data has been, and remains, a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy given the limitations of typical research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD and, therefore, a 3D Z-R measurement using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop-size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop-size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has
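
    For readers unfamiliar with the Z-R relationship mentioned above, the following sketch computes the reflectivity factor Z (sixth moment of the DSD) and the rain rate R from a series of drop-size distributions and fits Z = a * R**b. The DSD samples and the terminal-velocity formula are illustrative assumptions, not disdrometer data from this work.

        import numpy as np

        D = np.linspace(0.2, 5.0, 25)                 # drop diameters (mm)
        dD = D[1] - D[0]
        v = 3.78 * D**0.67                            # terminal velocity (m/s), empirical form

        rng = np.random.default_rng(1)
        Z_list, R_list = [], []
        for _ in range(50):                           # 50 one-minute DSD samples
            N0, Lam = rng.uniform(2e3, 1.6e4), rng.uniform(1.5, 4.0)
            N = N0 * np.exp(-Lam * D)                 # exponential DSD, m^-3 mm^-1
            Z_list.append(np.sum(N * D**6 * dD))                        # mm^6 m^-3
            R_list.append(6e-4 * np.pi * np.sum(N * D**3 * v * dD))     # mm/h

        b, log_a = np.polyfit(np.log(R_list), np.log(Z_list), 1)
        print(f"Z = {np.exp(log_a):.0f} * R^{b:.2f}")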

  14. Visualization and Nowcasting for Aviation using online verified ensemble weather radar extrapolation.

    NASA Astrophysics Data System (ADS)

    Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan

    2013-04-01

    Nowcasting of precipitation events, especially thunderstorms or winter storms, has a high impact on flight safety and efficiency in air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increased handling capacity, anticipation of avoidance manoeuvres, and increased awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. By tracking radar echoes through correlation, the motion vectors between successive weather radar images are calculated. For every new radar image, a set of ensemble precipitation fields is generated by using different parameter sets, such as pattern-match size, different time steps, filter methods, a history of tracking vectors, and plausibility checks. This method accounts for the uncertainty in rain-field displacement and for different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented into an online verification scheme which calculates optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To obtain information about the quality and reliability of the extrapolation process, additional data-quality information (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation
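
    The core of extrapolation nowcasting by correlation tracking can be sketched as: find the displacement that best matches two successive radar fields, then advect the latest field by that displacement. The fields and search range below are synthetic; operational systems work on sub-domains, build ensembles over parameter sets, and add quality control as described above.

        import numpy as np

        def best_shift(prev, curr, max_shift=5):
            """Exhaustive search for the (dy, dx) shift minimising the mismatch."""
            best, best_err = (0, 0), np.inf
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    err = np.mean((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
                    if err < best_err:
                        best, best_err = (dy, dx), err
            return best

        def nowcast(curr, shift, steps=1):
            """Advect the current field by the estimated motion vector."""
            dy, dx = shift
            return np.roll(curr, (dy * steps, dx * steps), axis=(0, 1))

        rng = np.random.default_rng(2)
        field_t0 = rng.gamma(2.0, 1.0, size=(64, 64))
        field_t1 = np.roll(field_t0, (2, 3), axis=(0, 1))    # true motion: 2 px down, 3 px right

        shift = best_shift(field_t0, field_t1)
        forecast_t2 = nowcast(field_t1, shift)
        print("estimated motion vector:", shift)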

  15. R-matrix and Potential Model Extrapolations for NACRE Update and Extension Project

    SciTech Connect

    Aikawa, Masayuki; Katsuma, Masahiko; Takahashi, Kohji; Arnould, Marcel; Arai, Koji; Utsunomiya, Hiroaki

    2006-07-12

    NACRE, the 'nuclear astrophysics compilation of reaction rates', has been widely utilized in stellar evolution and nucleosynthesis studies. Its update and extension programme started within a Konan-Université Libre de Bruxelles (ULB) collaboration. At the present moment, experimental data in refereed journals have been collected, and their theoretical extrapolations are being performed using the R-matrix or potential models. For the 3H(d,n)4He and 2H(p,γ)3He reactions, we present preliminary results that could well reproduce the experimental data.

  16. Extrapolation of the Dutch 1 MW tunable free electron maser to a 5 MW ECRH source

    SciTech Connect

    Caplan, M.; Nelson, S.; Kamin, G.; Antonsen, T. Levush, B.; Urbanus, W.; Tulupov, A.

    1995-04-01

    A Free Electron Maser (FEM) is now under construction at the FOM Institute (Rijnhuizen), Netherlands, with the goal of producing 1 MW of long-pulse to CW microwave output in the range 130 GHz to 250 GHz with wall-plug efficiencies of 50% (Verhoeven et al., EC-9 Conference). An extrapolated version of this device is proposed which, by scaling up the beam current, would produce microwave power levels of up to 5 MW CW in order to reduce the cost per watt and increase the power per module, thus providing the fusion community with a practical ECRH source.

  17. Molecular Target Homology as a Basis for Species Extrapolation to Assess the Ecological Risk of Veterinary Drugs

    EPA Science Inventory

    Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...

  18. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    USGS Publications Warehouse

    Mueller, David S.

    2013-01-01

    profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers’ software.

  19. What Extrapolation Could Mean for Your Practice: A Legal Overview of Statistical Sampling in Overpayment and False Claims Act Cases.

    PubMed

    Salcido, Robert; Rubin, Emily

    2016-06-01

    Auditors in Medicare overpayment or False Claims Act (FCA) cases often use statistical extrapolation to estimate a health-care provider's total liability from a small sample of audited claims. Courts treat statistical extrapolation differently depending on the context. They generally afford the government substantial discretion in using statistical extrapolation in overpayment cases. By contrast, courts typically more closely scrutinize the use of extrapolation in FCA cases involving multiple damages and civil penalties to ensure that the sample truly reflects the entire universe of claims and that the extrapolation rests on a sound methodological foundation. In recent cases, however, multiple courts have allowed the use of extrapolation in FCA cases. When auditors attempt to use statistical extrapolation, providers should closely inspect the sample and challenge the extrapolation when any reasonable argument exists that the sample does not constitute a reliable or accurate representation of all the provider's claims.
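
    A minimal sketch of the statistical extrapolation at issue in such audits: estimate the total overpayment for a universe of claims from a random sample, together with the lower limit of a two-sided 90% confidence interval of the kind auditors often rely on. The sample values and universe size are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        sample_overpayments = rng.choice([0.0, 0.0, 0.0, 45.0, 120.0, 310.0], size=100)
        universe_size = 25_000                        # total claims submitted

        mean = sample_overpayments.mean()
        sem = sample_overpayments.std(ddof=1) / np.sqrt(sample_overpayments.size)

        point_estimate = universe_size * mean
        lower_90 = universe_size * (mean - 1.645 * sem)   # lower limit, two-sided 90% CI (normal approx.)
        print(f"point estimate: ${point_estimate:,.0f}; 90% CI lower bound: ${lower_90:,.0f}")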

  20. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    NASA Astrophysics Data System (ADS)

    Spackman, Peter R.; Karton, Amir

    2015-05-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g. it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
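
    The two-point formula discussed above can be inverted in closed form: given correlation energies for two cardinal numbers L, solving E(L) = A + B/L^α for A gives the basis-set-limit estimate. The energies and the exponent α below are placeholder values, not results from the W4-11 database.

        def two_point_extrapolation(e_dz, e_tz, alpha=3.0, l_dz=2, l_tz=3):
            """Return the CBS-limit estimate A from E(L) = A + B / L**alpha."""
            b = (e_tz - e_dz) / (l_tz**-alpha - l_dz**-alpha)
            return e_tz - b * l_tz**-alpha

        # Hypothetical CCSD correlation energies (hartree) for a small molecule.
        e_corr_dz, e_corr_tz = -0.2751, -0.3124
        print(two_point_extrapolation(e_corr_dz, e_corr_tz))

    In the system-dependent variant described above, the exponent alpha would itself be determined from lower-cost MP2 calculations rather than held fixed.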

  1. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    SciTech Connect

    Spackman, Peter R.; Karton, Amir

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g. it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.

  2. Accuracy analysis and application of extrapolation of force-free fields in solar active and quiet regions

    NASA Astrophysics Data System (ADS)

    Liu, Suo; Zhang, Hongqi; Su, Jiangtao; Song, Mutao

    2013-07-01

    In this paper, the availability, applicability and deviation of nonlinear force-free (NLFF) fields extrapolated by Approximate Vertical Integration (AVI), Boundary Integral Equation (BIE) and Optimization (Opt.) methods are studied based on the comparison with two semi-analytical fields (Low & Lou 1990). These NLFF extrapolations based on the observational vector magnetograms are used to study the spatial magnetic field in the quiet Sun.

  3. IMPEDANCE OF FINITE LENGTH RESISTOR

    SciTech Connect

    KRINSKY, S.; PODOBEDOV, B.; GLUCKSTERN, R.L.

    2005-05-15

    We determine the impedance of a cylindrical metal tube (resistor) of radius a, length g, and conductivity σ, attached at each end to perfect conductors of semi-infinite length. Our main interest is in the asymptotic behavior of the impedance at high frequency, k >> 1/a. In the equilibrium regime, ka² << g, the impedance per unit length is accurately described by the well-known result for an infinite-length tube with conductivity σ. In the transient regime, ka² >> g, we derive analytic expressions for the impedance and wakefield.

  4. Comparison of precipitation nowcasting by extrapolation and statistical-advection methods

    NASA Astrophysics Data System (ADS)

    Sokol, Zbynek; Kitzmiller, David; Pesice, Petr; Mejsnar, Jan

    2013-04-01

    Two models for nowcasting of 1-h, 2-h and 3-h precipitation in the warm part of the year were evaluated. The first model was based on the extrapolation of observed radar reflectivity (COTREC-IPA) and the second one combined the extrapolation with the application of a statistical model (SAMR). The accuracy of the model forecasts was evaluated on independent data using the standard measures of root-mean-squared error, absolute error, bias and correlation coefficient, as well as by the spatial verification methods Fractions Skill Score and the SAL technique. The results show that SAMR yields slightly better forecasts during the afternoon period. On the other hand, very small or no improvement is realized at night and in the very early morning. COTREC-IPA and SAMR forecast a very similar horizontal structure of precipitation patterns, but the model forecasts differ in values. SAMR, like COTREC-IPA, is not able to develop new storms or significantly intensify already existing storms. This is caused by a large uncertainty regarding future development. On the other hand, the SAMR model can reliably predict decreases in precipitation intensity.

  5. Edge-aware spatial-frequency extrapolation for consecutive block loss.

    PubMed

    Liu, Hao; Wang, Dengcheng; Wang, Bing; Li, Kangda; Tang, Hainie

    2016-01-01

    To improve the spatial error concealment (SEC) for consecutive block loss, an edge-aware spatial-frequency extrapolation (ESFE) algorithm and its edge-guided parametric model are proposed by selectively incorporating the Hough-based edge synthesis into the frequency-based extrapolation architecture. The dominant edges that cross the missing blocks are firstly identified by the Canny detector, and then the robust Hough transformation is utilized to systematically connect these discontinuous edges. During the generation of edge-guided parametric model, the synthesized edges are utilized to divide the missing blocks into the structure-preserving regions, and thus the residual error is reliably reduced. By successively minimizing the weighted residual error and updating the parametric model, the known samples are approximated by a set of basis functions which are distributed in a region containing both known and unknown samples. Compared with other state-of-the-art SEC algorithms, experimental results show that the proposed ESFE algorithm can achieve better reconstruction quality for consecutive block loss while keeping relatively moderate computational complexity.

  6. Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer

    NASA Astrophysics Data System (ADS)

    Le, Guigao; Oulaid, Othmane; Zhang, Junfeng

    2015-03-01

    In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with respective analytical solutions. A more general system with unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down more slowly, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.

  7. Finite-Element Extrapolation of Myocardial Structure Alterations Across the Cardiac Cycle in Rats.

    PubMed

    David Gomez, Arnold; Bull, David A; Hsu, Edward W

    2015-10-01

    Myocardial microstructures are responsible for key aspects of cardiac mechanical function. Natural myocardial deformation across the cardiac cycle induces measurable structural alteration, which varies across disease states. Diffusion tensor magnetic resonance imaging (DT-MRI) has become the tool of choice for myocardial structural analysis. Yet, obtaining the comprehensive structural information of the whole organ, in 3D and time, for subject-specific examination is fundamentally limited by scan time. Therefore, subject-specific finite-element (FE) analysis of a group of rat hearts was implemented for extrapolating a set of initial DT-MRI to the rest of the cardiac cycle. The effect of material symmetry (isotropy, transverse isotropy, and orthotropy), structural input, and warping approach was observed by comparing simulated predictions against in vivo MRI displacement measurements and DT-MRI of an isolated heart preparation at relaxed, inflated, and contracture states. Overall, the results indicate that, while ventricular volume and circumferential strain are largely independent of the simulation strategy, structural alteration predictions are generally improved with the sophistication of the material model, which also enhances torsion and radial strain predictions. Moreover, whereas subject-specific transversely isotropic models produced the most accurate descriptions of fiber structural alterations, the orthotropic models best captured changes in sheet structure. These findings underscore the need for subject-specific input data, including structure, to extrapolate DT-MRI measurements across the cardiac cycle.

  8. Downscaling and extrapolating dynamic seasonal marine forecasts for coastal ocean users

    NASA Astrophysics Data System (ADS)

    Vanhatalo, Jarno; Hobday, Alistair J.; Little, L. Richard; Spillman, Claire M.

    2016-04-01

    Marine weather and climate forecasts are essential in planning strategies and activities on a range of temporal and spatial scales. However, seasonal dynamical forecast models, which provide forecasts on a monthly scale, often have low offshore resolution and limited information for inshore coastal areas. Hence, there is increasing demand for methods capable of fine scale seasonal forecasts covering coastal waters. Here, we have developed a method to combine observational data with dynamical forecasts from POAMA (Predictive Ocean Atmosphere Model for Australia; Australian Bureau of Meteorology) in order to produce seasonal downscaled, corrected forecasts, extrapolated to include inshore regions that POAMA does not cover. We demonstrate the method in forecasting the monthly sea surface temperature anomalies in the Great Australian Bight (GAB) region. The resolution of POAMA in the GAB is approximately 2° × 1° (lon. × lat.) and the resolution of our downscaled forecast is approximately 1° × 0.25°. We use data and model hindcasts for the period 1994-2010 for forecast validation. The predictive performance of our statistical downscaling model improves on the original POAMA forecast. Additionally, this statistical downscaling model extrapolates forecasts to coastal regions not covered by POAMA, and its forecasts are probabilistic, which allows straightforward assessment of uncertainty in downscaling and prediction. A range of marine users will benefit from access to downscaled and nearshore forecasts at seasonal timescales.

  9. Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.

    PubMed

    Le, Guigao; Oulaid, Othmane; Zhang, Junfeng

    2015-03-01

    In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with respective analytical solutions. A more general system with unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down more slowly, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.

  10. Risk extrapolation for chlorinated methanes as promoters vs initiators of multistage carcinogenesis

    SciTech Connect

    Bogen, K.T.

    1990-01-01

    Cell-kinetic multistage (CKM) models account for clonal growth of intermediate, premalignant cell populations and thus distinguish somatic mutations and cell proliferation as separate processes that may influence observed rates of tumor formation. This paper illustrates the application of two versions of a two-stage CKM model for extrapolating cancer risk potentially associated with exposure to carbon tetrachloride, chloroform, and dichloromethane, three suspect human carcinogens commonly present in trace amounts in groundwater supplies used for domestic consumption. For each compound, the models were used to calculate a daily oral "virtually safe dose" (VSD) to humans associated with a cancer risk of 10{sup {minus}6}, extrapolated from bioassay data on increased hepatocellular tumor incidence in B6C3F1 mice. Exposure-induced bioassay tumor responses were assumed first to be due solely to "promotion" in accordance with the majority of available data on in vivo genotoxicity for these compounds. Available data were used to model dose response for induced hepatocellular proliferation in mice for each compound. Physiologically based pharmacokinetic models were used to predict the hepatotoxic effective dose as a function of parent compound administered dose in mice and in humans. Key issues and uncertainties in applying CKM models to risk assessment for cancer promoters are discussed.

  11. Mathematical extrapolation of image spectrum for constraint-set design and set-theoretic superresolution.

    PubMed

    Bhattacharjee, Supratik; Sundareshan, Malur K

    2003-08-01

    Several powerful iterative algorithms are being developed for the restoration and superresolution of diffraction-limited imagery data by use of diverse mathematical techniques. Notwithstanding the mathematical sophistication of the approaches used in their development and the potential for resolution enhancement possible with their implementation, the spectrum extrapolation that is central to superresolution comes in these algorithms only as a by-product and needs to be checked only after the completion of the processing steps to ensure that an expansion of the image bandwidth has indeed occurred. To overcome this limitation, a new approach of mathematically extrapolating the image spectrum and employing it to design constraint sets for implementing set-theoretic estimation procedures is described. Performance evaluation of a specific projection-onto-convex-sets algorithm by using this approach for the restoration and superresolution of degraded images is outlined. The primary goal of the method presented is to expand the power spectrum of the input image beyond the range of the sensor that captured the image.

  12. Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections.

    PubMed

    Combettes, P L

    1997-01-01

    Solving a convex set theoretic image recovery problem amounts to finding a point in the intersection of closed and convex sets in a Hilbert space. The projection onto convex sets (POCS) algorithm, in which an initial estimate is sequentially projected onto the individual sets according to a periodic schedule, has been the most prevalent tool to solve such problems. Nonetheless, POCS has several shortcomings: it converges slowly, it is ill suited for implementation on parallel processors, and it requires the computation of exact projections at each iteration. We propose a general parallel projection method (EMOPSP) that overcomes these shortcomings. At each iteration of EMOPSP, a convex combination of subgradient projections onto some of the sets is formed and the update is obtained via relaxation. The relaxation parameter may vary over an iteration-dependent, extrapolated range that extends beyond the interval [0,2] used in conventional projection methods. EMOPSP not only generalizes existing projection-based schemes, but it also converges very efficiently thanks to its extrapolated relaxations. Theoretical convergence results are presented as well as numerical simulations.

  13. Problems With the Collection and Interpretation of Asian-American Health Data: Omission, Aggregation, and Extrapolation

    PubMed Central

    Holland, Ariel T.; Palaniappan, Latha P.

    2015-01-01

    Asian-American citizens are the fastest growing racial/ethnic group in the United States. Nevertheless, data on Asian American health are scarce, and many health disparities for this population remain unknown. Much of our knowledge of Asian American health has been determined by studies in which investigators have either grouped Asian-American subjects together or examined one subgroup alone (e.g., Asian Indian, Chinese, Filipino, Japanese, Korean, Vietnamese). National health surveys that collect information on Asian-American race/ethnicity frequently omit this population in research reports. When national health data are reported for Asian-American subjects, they are often reported for the aggregated group. This aggregation may mask differences between Asian-American subgroups. When health data are reported by Asian American subgroup, they are generally reported for one subgroup alone. In the Ni-Hon-San study, investigators examined cardiovascular disease in Japanese men living in Japan (Nippon; Ni), Honolulu, Hawaii (Hon), and San Francisco, CA (San). The findings from this study are often incorrectly extrapolated to other Asian-American subgroups. Recommendations to correct the errors associated with omission, aggregation, and extrapolation include: oversampling of Asian Americans, collection and reporting of race/ethnicity data by Asian-American subgroup, and acknowledgement of significant heterogeneity among Asian American subgroups when interpreting data. PMID:22625997

  14. Finite-Element Extrapolation of Myocardial Structure Alterations Across the Cardiac Cycle in Rats

    PubMed Central

    David Gomez, Arnold; Bull, David A.; Hsu, Edward W.

    2015-01-01

    Myocardial microstructures are responsible for key aspects of cardiac mechanical function. Natural myocardial deformation across the cardiac cycle induces measurable structural alteration, which varies across disease states. Diffusion tensor magnetic resonance imaging (DT-MRI) has become the tool of choice for myocardial structural analysis. Yet, obtaining the comprehensive structural information of the whole organ, in 3D and time, for subject-specific examination is fundamentally limited by scan time. Therefore, subject-specific finite-element (FE) analysis of a group of rat hearts was implemented for extrapolating a set of initial DT-MRI to the rest of the cardiac cycle. The effect of material symmetry (isotropy, transverse isotropy, and orthotropy), structural input, and warping approach was observed by comparing simulated predictions against in vivo MRI displacement measurements and DT-MRI of an isolated heart preparation at relaxed, inflated, and contracture states. Overall, the results indicate that, while ventricular volume and circumferential strain are largely independent of the simulation strategy, structural alteration predictions are generally improved with the sophistication of the material model, which also enhances torsion and radial strain predictions. Moreover, whereas subject-specific transversely isotropic models produced the most accurate descriptions of fiber structural alterations, the orthotropic models best captured changes in sheet structure. These findings underscore the need for subject-specific input data, including structure, to extrapolate DT-MRI measurements across the cardiac cycle. PMID:26299478

  15. Optimal linear extrapolation of realizations of a stochastic process with error filtering in correlated measurements

    SciTech Connect

    Kudritskii, V.D.; Atamanyuk, I.P.; Ivashchenko, E.N.

    1995-09-01

    Control problems often require predicting the future state of the controlled plant given its present and past state. The practical relevance of such prediction problems has spurred many studies and led to the development of various methods of solution. These methods can be divided into two large directions: deductive methods, which assume that in addition to the sample the researcher also has some prior information, and inductive methods, where the main heuristic is the choice of an external performance criterion. Each of these directions has its strengths and weaknesses, and is characterized by a specific domain of application. An obvious advantage of the inductive approach is that it requires a minimum of information (in the limit, the problem is solved using a single observed realization, which is not feasible with any other method). However, the heuristic choice of the external criterion substantially influences the accuracy of extrapolation. Deductive methods, in their turn, ensure a guaranteed, prespecified extrapolation accuracy, but their application requires preliminary, fairly time-consuming and costly accumulation of empirical data about the observed phenomenon. The two main directions are mutually complementary, and the use of a particular direction in applications is mainly determined by the volume of data that have been accumulated up to the relevant time.

  16. Polyketide chain length control by chain length factor.

    PubMed

    Tang, Yi; Tsai, Shiou-Chuan; Khosla, Chaitan

    2003-10-22

    Bacterial aromatic polyketides are pharmacologically important natural products. A critical parameter that dictates product structure is the carbon chain length of the polyketide backbone. Systematic manipulation of polyketide chain length represents a major unmet challenge in natural product biosynthesis. Polyketide chain elongation is catalyzed by a heterodimeric ketosynthase. In contrast to homodimeric ketosynthases found in fatty acid synthases, the active site cysteine is absent from one subunit of this heterodimer. The precise role of this catalytically silent subunit has been debated over the past decade. We demonstrate here that this subunit is the primary determinant of polyketide chain length, thereby validating its designation as chain length factor. Using structure-based mutagenesis, we identified key residues in the chain length factor that could be manipulated to convert an octaketide synthase into a decaketide synthase and vice versa. These results should lead to novel strategies for the engineered biosynthesis of hitherto unidentified polyketide scaffolds.

  17. Line Lengths and Starch Scores.

    ERIC Educational Resources Information Center

    Moriarty, Sandra E.

    1986-01-01

    Investigates readability of different line lengths in advertising body copy, hypothesizing a normal curve with lower scores for shorter and longer lines, and scores above the mean for lines in the middle of the distribution. Finds support for lower scores for short lines and some evidence of two optimum line lengths rather than one. (SKC)

  18. Length dependence of thermal conductivity by approach-to-equilibrium molecular dynamics

    NASA Astrophysics Data System (ADS)

    Zaoui, Hayat; Palla, Pier Luca; Cleri, Fabrizio; Lampin, Evelyne

    2016-08-01

    The length dependence of thermal conductivity over more than two orders of magnitude has been systematically studied for a range of materials, interatomic potentials, and temperatures using the atomistic approach-to-equilibrium molecular dynamics (AEMD) method. By comparing the values of conductivity obtained for a given supercell length and maximum phonon mean free path (MFP), we find that such values are strongly correlated, demonstrating that the AEMD calculation with a supercell of finite length actually probes the thermal conductivity corresponding to a maximum phonon MFP. As a consequence, the less pronounced length dependence usually observed for poorer thermal conductors, such as amorphous silica, is physically justified by their shorter average phonon MFP. Finally, we compare different analytical extrapolations of the conductivity to infinite length and demonstrate that the frequently used Matthiessen rule is not applicable in AEMD. An alternative extrapolation more suitable for transient-time, finite-supercell simulations is derived. This approximation scheme can also be used to classify the quality of different interatomic potential models with respect to their capability of predicting the experimental thermal conductivity.
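
    As a point of reference, the Matthiessen-style extrapolation that the abstract argues is not applicable in AEMD is typically implemented as a linear fit of 1/κ against 1/L. A minimal sketch with made-up data (it is not the alternative extrapolation derived in the paper):

        import numpy as np

        # Synthetic conductivity-vs-length data (placeholder values; L in nm, kappa in W/m/K).
        L = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
        kappa = np.array([35.0, 60.0, 95.0, 130.0, 160.0])

        # Matthiessen-style extrapolation: 1/kappa(L) = 1/kappa_inf + c/L, so a linear fit
        # of 1/kappa against 1/L gives 1/kappa_inf as the intercept.
        slope, intercept = np.polyfit(1.0 / L, 1.0 / kappa, 1)
        print("extrapolated bulk conductivity ~", round(1.0 / intercept, 1), "W/m/K")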

  19. Improvement of the Earthquake Early Warning System with Wavefield Extrapolation with Apparent Velocity and Direction

    NASA Astrophysics Data System (ADS)

    Sato, A.; Yomogida, K.

    2014-12-01

    The early warning system operated by the Japan Meteorological Agency (JMA) has been available to the public since October 2007. The present system is still not effective in cases where we cannot assume a nearly circular wavefront expansion from a source. We propose a new approach based on the extrapolation of the early observed wavefield alone, without estimating its epicenter. The idea is similar to the migration method in exploration seismology, but we use not only the information of the wavefield at an early stage (i.e., at time T2 in the figure) but also its normal derivatives (the difference between T1 and T2); that is, we utilize the apparent velocity and direction of early-stage wave propagation to predict the wavefield later (at T3). For the extrapolation of the wavefield, we need a reliable Green's function from the observed point to a target point at which the wave arrives later. Since the complete 3-D wave propagation is extremely complex, particularly in and around Japan with its highly heterogeneous structures, we shall consider a phenomenological 2-D Green's function, that is, a wavefront that propagates on the surface with a certain apparent velocity and direction of the P wave. This apparent velocity and direction may vary significantly depending on, for example, event depth and the area of propagation, so we examined those of P waves propagating in Japan in various situations. For example, the velocity for shallow events in Hokkaido is 7.1 km/s while that in Nagano prefecture is about 5.5 km/s. In addition, the apparent velocity depends on event depth: 7.1 km/s for a depth of 10 km and 8.9 km/s for 100 km in Hokkaido. We also conducted f-k array analyses of five or six adjacent stations where we can accurately estimate the apparent velocity and direction of the P wave. For deep events with relatively simple waveforms, they are easily obtained, but we may need site corrections to enhance correlations of waveforms among stations for shallow ones. In the above extrapolation scheme, we can

  20. CT image construction of a totally deflated lung using deformable model extrapolation

    SciTech Connect

    Sadeghi Naini, Ali; Pierce, Greg; Lee, Ting-Yim; and others

    2011-02-15

    Purpose: A novel technique is proposed to construct CT image of a totally deflated lung from a free-breathing 4D-CT image sequence acquired preoperatively. Such a constructed CT image is very useful in performing tumor ablative procedures such as lung brachytherapy. Tumor ablative procedures are frequently performed while the lung is totally deflated. Deflating the lung during such procedures renders preoperative images ineffective for targeting the tumor. Furthermore, the problem cannot be solved using intraoperative ultrasound (U.S.) images because U.S. images are very sensitive to small residual amount of air remaining in the deflated lung. One possible solution to address these issues is to register high quality preoperative CT images of the deflated lung with their corresponding low quality intraoperative U.S. images. However, given that such preoperative images correspond to an inflated lung, such CT images need to be processed to construct CT images pertaining to the lung's deflated state. Methods: To obtain the CT images of deflated lung, we present a novel image construction technique using extrapolated deformable registration to predict the deformation the lung undergoes during full deflation. The proposed construction technique involves estimating the lung's air volume in each preoperative image automatically in order to track the respiration phase of each 4D-CT image throughout a respiratory cycle; i.e., the technique does not need any external marker to form a respiratory signal in the process of curve fitting and extrapolation. The extrapolated deformation field is then applied on a preoperative reference image in order to construct the totally deflated lung's CT image. The technique was evaluated experimentally using ex vivo porcine lung. Results: The ex vivo lung experiments led to very encouraging results. In comparison with the CT image of the deflated lung we acquired for the purpose of validation, the constructed CT image was very similar. The

  1. Quantification of sarcomere length distribution in whole muscle frozen sections.

    PubMed

    O'Connor, Shawn M; Cheng, Elton J; Young, Kevin W; Ward, Samuel R; Lieber, Richard L

    2016-05-15

    Laser diffraction (LD) is a valuable tool for measuring sarcomere length (Ls), a major determinant of muscle function. However, this method relies on few measurements per sample that are often extrapolated to whole muscle properties. Currently it is not possible to measure Ls throughout an entire muscle and determine how Ls varies at this scale. To address this issue, we developed an actuated LD scanner for sampling large numbers of sarcomeres in thick whole muscle longitudinal sections. Sections of high optical quality and fixation were produced from tibialis anterior and extensor digitorum longus muscles of Sprague-Dawley rats (N=6). Scans produced two-dimensional Ls maps, capturing >85% of the muscle area per section. Individual Ls measures generated by automatic LD and bright-field microscopy showed excellent agreement over a large Ls range (ICC>0.93). Two-dimensional maps also revealed prominent regional Ls variations across muscles. PMID:26994184
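
    Laser diffraction recovers Ls by treating the sarcomere striations as a diffraction grating, so the spacing follows from the grating equation. The wavelength and diffraction angle below are illustrative values, not measurements from this study:

        import math

        def sarcomere_length_nm(wavelength_nm, theta_deg, order=1):
            """Grating equation n*lambda = d*sin(theta), solved for the spacing d (= Ls)."""
            return order * wavelength_nm / math.sin(math.radians(theta_deg))

        # Illustrative numbers only: a 632.8 nm HeNe beam with a first-order peak at
        # 14.6 degrees corresponds to a sarcomere length of about 2.5 um.
        print(sarcomere_length_nm(632.8, 14.6) / 1000.0, "um")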

  2. An ADI extrapolated Crank-Nicolson orthogonal spline collocation method for nonlinear reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Fernandes, Ryan I.; Fairweather, Graeme

    2012-08-01

    An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.
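
    The "extrapolated" part of an extrapolated Crank-Nicolson scheme refers to evaluating the nonlinear reaction term at a value extrapolated from the two previous time levels, so each step needs only a linear solve. The sketch below shows that idea for a plain 1D finite-difference discretization of u_t = D u_xx + f(u); it is not the 2D ADI orthogonal spline collocation method of the paper.

        import numpy as np

        def extrapolated_cn_step(u, u_prev, D, dt, dx, f):
            """One extrapolated Crank-Nicolson step for u_t = D*u_xx + f(u) on a 1D grid
            with homogeneous Dirichlet boundaries (u[0] = u[-1] = 0). The reaction term
            is evaluated at 1.5*u^n - 0.5*u^{n-1}, which keeps the step linear."""
            r = D * dt / (2.0 * dx ** 2)
            m = len(u) - 2                                    # number of interior nodes
            lap = (np.diag(-2.0 * np.ones(m)) +
                   np.diag(np.ones(m - 1), 1) + np.diag(np.ones(m - 1), -1))
            A = np.eye(m) - r * lap                           # implicit half of the step
            B = np.eye(m) + r * lap                           # explicit half of the step
            u_ext = 1.5 * u - 0.5 * u_prev                    # time extrapolation
            rhs = B @ u[1:-1] + dt * f(u_ext[1:-1])
            u_new = np.zeros_like(u)
            u_new[1:-1] = np.linalg.solve(A, rhs)
            return u_new

        # Fisher-KPP reaction f(u) = u(1 - u) as a simple test problem.
        x = np.linspace(0.0, 1.0, 101)
        u_prev = u = np.exp(-100.0 * (x - 0.5) ** 2)
        for _ in range(200):
            u, u_prev = extrapolated_cn_step(u, u_prev, 1e-3, 0.01, x[1] - x[0],
                                             lambda v: v * (1.0 - v)), u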

  3. Age of Eocene/Oligocene boundary based on extrapolation from North American microtektite layer

    SciTech Connect

    Glass, B.P.; Crosbie, J.R.

    1982-04-01

    Microtektites believed to belong to the North American tektite strewn field have been found in upper Eocene sediments in cores from nine Deep Sea Drilling Project sites in the Caribbean Sea, Gulf of Mexico, equatorial Pacific, and eastern equatorial Indian Ocean. The microtektite layer has an age of 34.2 ± 0.6 m.y. based on fission-track dating of the microtektites and K-Ar and fission-track dating of the North American tektites. Extrapolation from the microtektite layer to the overlying Eocene/Oligocene boundary indicates an age of 32.3 ± 0.9 m.y. for the Eocene/Oligocene boundary as defined at each site in the Initial Reports of the Deep Sea Drilling Project. This age is approximately 5 m.y. younger than the age of 37.5 m.y. that is generally assigned to the boundary based on recently published Cenozoic time scales. 3 figures, 5 tables.
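
    The extrapolation itself is simple age-depth arithmetic: the boundary age equals the layer age minus the intervening sediment thickness divided by the sedimentation rate. In the sketch below only the 34.2 m.y. layer age comes from the abstract; the depths and the rate are hypothetical values chosen to reproduce a result near 32.3 m.y.

        def extrapolated_age_my(layer_age_my, depth_layer_m, depth_boundary_m, rate_mm_per_kyr):
            """Linear age-depth extrapolation from a dated horizon up to a shallower boundary."""
            thickness_mm = (depth_layer_m - depth_boundary_m) * 1000.0
            return layer_age_my - thickness_mm / rate_mm_per_kyr / 1000.0   # kyr -> m.y.

        # Hypothetical depths (120 m layer, 101 m boundary) and a 10 mm/kyr rate.
        print(extrapolated_age_my(34.2, 120.0, 101.0, 10.0))   # -> 32.3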

  4. Statistical validation of engineering and scientific models : bounds, calibration, and extrapolation.

    SciTech Connect

    Dowding, Kevin J.; Hills, Richard Guy

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  5. Multi-threaded adaptive extrapolation procedure for Feynman loop integrals in the physical region

    NASA Astrophysics Data System (ADS)

    de Doncker, E.; Yuasa, F.; Assaf, R.

    2013-08-01

    Feynman loop integrals appear in higher order corrections of interaction cross section calculations in perturbative quantum field theory. The integrals are computationally intensive especially in view of singularities which may occur within the integration domain. For the treatment of threshold and infrared singularities we developed techniques using iterated (repeated) adaptive integration and extrapolation. In this paper we describe a shared memory parallelization and its application to one- and two-loop problems, by multi-threading in the outer integrations of the iterated integral. The implementation is layered over OpenMP and retains the adaptive procedure of the sequential method exactly. We give performance results for loop integrals associated with various types of diagrams including one-loop box, pentagon, two-loop self-energy and two-loop vertex diagrams.

  6. HEAT: High accuracy extrapolated ab initio thermochemistry. III. Additional improvements and overview.

    SciTech Connect

    Harding, M. E.; Vazquez, J.; Ruscic, B.; Wilson, A. K.; Gauss, J.; Stanton, J. F.; Chemical Sciences and Engineering Division; Universität Mainz; The Univ. of Texas; Univ. of North Texas

    2008-01-01

    Effects of increased basis-set size as well as a correlated treatment of the diagonal Born-Oppenheimer approximation are studied within the context of the high-accuracy extrapolated ab initio thermochemistry (HEAT) theoretical model chemistry. It is found that the addition of these ostensible improvements does little to increase the overall accuracy of HEAT for the determination of molecular atomization energies. Fortuitous cancellation of high-level effects is shown to give the overall HEAT strategy an accuracy that is, in fact, higher than most of its individual components. In addition, the issue of core-valence electron correlation separation is explored; it is found that approximate additive treatments of the two effects have limitations that are significant in the realm of <1 kJ mol{sup -1} theoretical thermochemistry.

  7. Extrapolation of two-factor learning theory of infrahuman avoidance behavior to psychopathology.

    PubMed

    Levis, D J

    1981-01-01

    This paper involves a theoretical attempt to extend O. H. Mowrer's two-factor theory of infrahuman avoidance behavior to the area of human psychopathology. Central to any such theoretical extrapolation is the need to explain why human fears and avoidance behavior manifest such strong resistance to extinction while the abundance of infrahuman findings suggests that the extinction of such behaviors is rapid. The position is advanced that this noted paradox can be resolved both theoretically and empirically by modifying and extending the Solomon and Wynne conservation of anxiety hypothesis to include complex, serially ordered cues. The model presented also provides the rationale for an extinction approach to psychotherapy, referred to as implosive therapy, which is briefly described. Supporting data for the model as well as alternative explanations are provided and discussed.

  8. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. PMID:23179190

  9. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    PubMed

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  10. Quantifying regional changes in terrestrial carbon storage by extrapolation from local ecosystem models

    SciTech Connect

    King, A W

    1991-12-31

    A general procedure for quantifying regional carbon dynamics by spatial extrapolation of local ecosystem models is presented. The procedure uses Monte Carlo simulation to calculate the expected value of one or more local models, explicitly integrating the spatial heterogeneity of variables that influence ecosystem carbon flux and storage. These variables are described by empirically derived probability distributions that are input to the Monte Carlo process. The procedure provides large-scale regional estimates based explicitly on information and understanding acquired at smaller and more accessible scales. Results are presented from an earlier application to seasonal atmosphere-biosphere CO{sub 2} exchange for circumpolar "subarctic" latitudes (64{degree}N-90{degree}N). Results suggest that, under certain climatic conditions, these high northern ecosystems could collectively release 0.2 Gt of carbon per year to the atmosphere. I interpret these results with respect to questions about global biospheric sinks for atmospheric CO{sub 2}.
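
    A minimal sketch of the Monte Carlo step described above: sample the driver variables from empirical distributions, average the local model over the samples, and scale by the regional area. The local flux function, the distributions, and the area here are invented for illustration and are not those of the study.

        import numpy as np

        rng = np.random.default_rng(0)

        def local_flux(temp_c, soil_moisture):
            """Placeholder local ecosystem model returning carbon flux in kg C m^-2 yr^-1.
            Purely illustrative; not one of the models used in the study."""
            return 0.004 * temp_c * soil_moisture + 0.02

        # Empirically derived driver distributions for the region (made-up parameters).
        n = 100_000
        temperature = rng.normal(loc=-5.0, scale=4.0, size=n)   # deg C
        moisture = rng.beta(2.0, 5.0, size=n)                   # fraction of saturation

        # Regional estimate = expected value of the local model over the driver
        # distributions, scaled by the regional area.
        area_m2 = 1.0e13
        expected_flux = local_flux(temperature, moisture).mean()
        print("regional flux ~", round(expected_flux * area_m2 / 1e12, 2), "Gt C / yr")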

  11. On shrinkage and model extrapolation in the evaluation of clinical center performance

    PubMed Central

    Varewyck, Machteld; Goetghebeur, Els; Eriksson, Marie; Vansteelandt, Stijn

    2014-01-01

    We consider statistical methods for benchmarking clinical centers based on a dichotomous outcome indicator. Borrowing ideas from the causal inference literature, we aim to reveal how the entire study population would have fared under the current care level of each center. To this end, we evaluate direct standardization based on fixed versus random center effects outcome models that incorporate patient-specific baseline covariates to adjust for differential case-mix. We explore fixed effects (FE) regression with Firth correction and normal mixed effects (ME) regression to maintain convergence in the presence of very small centers. Moreover, we study doubly robust FE regression to avoid outcome model extrapolation. Simulation studies show that shrinkage following standard ME modeling can result in substantial power loss relative to the considered alternatives, especially for small centers. Results are consistent with findings in the analysis of 30-day mortality risk following acute stroke across 90 centers in the Swedish Stroke Register. PMID:24812420

  12. Evaluation of a thermistor power measurement system for use on the NPL antenna extrapolation range

    NASA Astrophysics Data System (ADS)

    Gentle, D. G.

    1990-05-01

    A series of tests designed to evaluate the thermistor power meters for use on the antenna extrapolation range is summarized. These power meters are used to measure the insertion loss as the separation between the antennas is increased. Procedures were written in PASCAL to read the DVM (digital voltmeter) connected to the power meters via the IEEE-488 interface bus. Once these procedures were optimized for speed and accuracy, two thermistor power meters were used to measure the attenuation of a WG 16 rotary vane attenuator. This attenuator was then calibrated against the NPL (National Physical Laboratory) Mark 1 waveguide-beyond-cutoff attenuator and the performance of the thermistor power meter system was assessed. The uncertainties quoted are at the 95 percent confidence level unless they relate to the accuracy of DVMs, in which case they are 3 sigma values.

  13. Ordering of metal-ion toxicities in different species--extrapolation to man

    SciTech Connect

    England, M.W.; Turner, J.E.; Hingerty, B.E.; Jacobson, K.B.

    1989-01-01

    Our previous attempts to predict the toxicities of 24 metal ions for a given species, using physicochemical parameters associated with the ions, are summarized. In our current attempt we have chosen indicators of toxicity for biological systems of increasing levels of complexity--starting with individual biological molecules and ascending to mice as representative of higher-order animals. The numerical values for these indicators have been normalized to a scale of 100 for Mg{sup 2+} (essentially nontoxic) and 0 for Cd{sup 2+} (very toxic). To give predicted toxicities to humans, extrapolations across biological species have been made for each of the metal ions considered. The predicted values are then compared with threshold limit values (TLV) from the literature. Both methods for predicting toxicities have their advantages and disadvantages, and both have limited success for metal ions. However, the second approach suggests that the TLV for Cu{sup 2+} should be lower than that currently recommended.
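
    The normalization described above is a linear rescaling of each indicator between the Cd{sup 2+} and Mg{sup 2+} values. A small sketch with hypothetical indicator values (not data from the paper):

        def normalized_score(value, value_mg, value_cd):
            """Linearly rescale a toxicity indicator so that Mg2+ maps to 100 and Cd2+ to 0."""
            return 100.0 * (value - value_cd) / (value_mg - value_cd)

        # Hypothetical indicator values (e.g. a tolerated-dose-like quantity); not data from the paper.
        indicator = {"Mg2+": 8.0, "Cd2+": 0.5, "Cu2+": 1.2, "Zn2+": 3.0}
        scores = {ion: round(normalized_score(v, indicator["Mg2+"], indicator["Cd2+"]), 1)
                  for ion, v in indicator.items()}
        print(scores)   # Mg2+ -> 100.0, Cd2+ -> 0.0, the others fall in between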

  14. Combating matrix effects in LC/ESI/MS: the extrapolative dilution approach.

    PubMed

    Kruve, Anneli; Leito, Ivo; Herodes, Koit

    2009-09-28

    Liquid chromatography electrospray mass spectrometry--LC/ESI/MS--a primary tool for analysis of low volatility compounds in difficult matrices--suffers from the matrix effects in the ESI ionization. It is well known that matrix effects can be reduced by sample dilution. However, the efficiency of simple sample dilution is often limited, in particular by the limit of detection of the method, and can strongly vary from sample to sample. In this study matrix effect is investigated as the function of dilution. It is demonstrated that in some cases dilution can eliminate matrix effect, but often it is just reduced. Based on these findings we propose a new quantitation method based on consecutive dilutions of the sample and extrapolation of the analyte content to the infinite dilution, i.e. to matrix-free solution. The method was validated for LC/ESI/MS analysis of five pesticides (methomyl, thiabendazole, aldicarb, imazalil, methiocarb) in five matrices (tomato, cucumber, apple, rye and garlic) at two concentration levels (0.5 and 5.0 mg kg(-1)). Agreement between the analyzed and spiked concentrations was found for all samples. It was demonstrated that in terms of accuracy of the obtained results the proposed extrapolative dilution approach works distinctly better than simple sample dilution. The main use of this approach is envisaged for (a) method development/validation to determine the extent of matrix effects and the ways of overcoming them and (b) as a second step of analysis in the case of samples having analyte contents near the maximum residue limits (MRL). PMID:19733738
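
    One plausible way to implement the extrapolation to infinite dilution is to regress the dilution-corrected (apparent) concentration against the reciprocal dilution factor and read off the intercept; the choice of regression variable and the numbers below are assumptions for illustration, not the exact procedure or data of the paper.

        import numpy as np

        # Dilution factors and the back-calculated (apparent) concentrations in mg/kg after
        # multiplying each measurement by its dilution factor (synthetic numbers).
        dilution_factor = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
        apparent_conc = np.array([3.1, 3.7, 4.2, 4.6, 4.8])   # suppressed at low dilution

        # Regress against the reciprocal dilution factor and take the intercept:
        # 1/dilution -> 0 corresponds to an (ideal) matrix-free solution.
        slope, intercept = np.polyfit(1.0 / dilution_factor, apparent_conc, 1)
        print("extrapolated matrix-free concentration ~", round(intercept, 2), "mg/kg")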

  15. EVIDENCE FOR SOLAR TETHER-CUTTING MAGNETIC RECONNECTION FROM CORONAL FIELD EXTRAPOLATIONS

    SciTech Connect

    Liu, Chang; Deng, Na; Lee, Jeongwoo; Wang, Haimin; Wiegelmann, Thomas; Moore, Ronald L.

    2013-12-01

    Magnetic reconnection is one of the primary mechanisms for triggering solar eruptive events, but direct observation of this rapid process has been a challenge. In this Letter, using a nonlinear force-free field (NLFFF) extrapolation technique, we present a visualization of field line connectivity changes resulting from tether-cutting reconnection over about 30 minutes during the 2011 February 13 M6.6 flare in NOAA AR 11158. Evidence for the tether-cutting reconnection was first collected through multiwavelength observations and then by analysis of the field lines traced from positions of four conspicuous flare 1700 Å footpoints observed at the event onset. Right before the flare, the four footpoints are located very close to the regions of local maxima of the magnetic twist index. In particular, the field lines from the inner two footpoints form two strongly twisted flux bundles (up to ∼1.2 turns), which shear past each other and reach out close to the outer two footpoints, respectively. Immediately after the flare, the twist index of regions around the footpoints diminishes greatly and the above field lines become low-lying and less twisted (≲0.6 turns), overarched by loops linking the two flare ribbons formed later. About 10% of the flux (∼3 × 10{sup 19} Mx) from the inner footpoints undergoes a footpoint exchange. This portion of flux originates from the edge regions of the inner footpoints that are brightened first. These rapid changes of magnetic field connectivity inferred from the NLFFF extrapolation are consistent with the tether-cutting magnetic reconnection model.

  16. The role of de-excitation electrons in measurements with graphite extrapolation chambers.

    PubMed

    Kramer, H M; Grosswendt, B

    2002-03-01

    A method is described for determining the absorbed dose to graphite for medium-energy x-rays (50-300 kV). The experimental arrangement consists of an extrapolation chamber which is part of a cylindrical graphite phantom of 30 cm diameter and 13 cm depth. The method presented is an extension of the so-called two-component model. In this model the absorbed dose to graphite is derived from the absorbed dose to the air of the cavity formed by the measuring volume. Considering separately the contributions of the absorbed dose to air in the cavity from electrons produced in Compton and photoelectric interactions, this dose can be converted to the absorbed dose to graphite in the limit of zero plate separation. The extension of the two-component model proposed in this paper consists of taking into account the energy transferred to de-excitation electrons, i.e. Auger electrons, which are produced as a consequence of a photoelectric interaction or a Compton scattering process. For the system considered, these electrons have energies in the range between about 200 eV and 3 keV and hence a range in air at atmospheric pressure of 0.2 mm or less. As the amount of energy transferred to the de-excitation electrons is different per unit mass in air and in graphite, there is a region, about 0.2 mm thick, of disturbed electronic equilibrium at the graphite-to-air interface. By means of the extension proposed, the x-ray tube voltage range over which a graphite extrapolation chamber can be used is lowered from 100 kV in the case of the two-component model down to at least 50 kV.
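
    For context, the defining operation of an extrapolation chamber is to record the ionization at several plate separations and extrapolate the slope dQ/dd to zero separation, to which the absorbed dose is proportional. The sketch below uses synthetic readings and omits all correction factors, including the two-component and de-excitation terms discussed in the paper.

        import numpy as np

        # Plate separation (mm) and collected charge per exposure (nC); synthetic data.
        separation = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
        charge = np.array([0.101, 0.205, 0.312, 0.422, 0.535])

        # Fit Q(d) with a low-order polynomial and evaluate dQ/dd in the limit d -> 0;
        # the absorbed dose is proportional to this zero-separation slope (conversion,
        # perturbation and two-component corrections are omitted in this sketch).
        c2, c1, c0 = np.polyfit(separation, charge, 2)
        print("dQ/dd at zero separation ~", round(c1, 4), "nC/mm")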

  17. Risk extrapolation for chlorinated methanes as promoters vs initiators of multistage carcinogenesis.

    PubMed

    Bogen, K T

    1990-10-01

    "Cell-kinetic multistage" (CKM) models account for clonal growth of intermediate, premalignant cell populations and thus distinguish somatic mutations and cell proliferation as separate processes that may influence observed rates of tumor formation. This paper illustrates the application of two versions of a two-stage CKM model (one assuming exponential and the other geometric proliferation of intermediate cells) for extrapolating cancer risk potentially associated with exposure to carbon tetrachloride, chloroform, and dichloromethane, three suspect human carcinogens commonly present in trace amounts in groundwater supplies used for domestic consumption. For each compound, the models were used to calculate a daily oral "virtually safe dose" (VSD) to humans associated with a cancer risk of 10(-6), extrapolated from bioassay data on increased hepatocellular tumor incidence in B6C3F1 mice. Exposure-induced bioassay tumor responses were assumed first to be due solely to "promotion" (enhanced proliferation of premalignant cells, here associated with cytotoxicity), in accordance with the majority of available data on in vivo genotoxicity for these compounds. Available data were used to model dose response for induced hepatocellular proliferation in mice for each compound. Physiologically based pharmacokinetic models were used to predict the hepatotoxic effect (metabolized) dose as a function of parent compound administered dose in mice and in humans. Resulting calculated VSDs are shown to be from three to five orders of magnitude greater than corresponding values obtained assuming each of the compounds is carcinogenic only through induced somatic mutations within the CKM framework. Key issues and uncertainties in applying CKM models to risk assessment for cancer promoters are discussed. PMID:2258018

  18. Increasing sample size in prospective birth cohorts: back-extrapolating prenatal levels of persistent organic pollutants in newly enrolled children.

    PubMed

    Verner, Marc-André; Gaspar, Fraser W; Chevrier, Jonathan; Gunier, Robert B; Sjödin, Andreas; Bradman, Asa; Eskenazi, Brenda

    2015-03-17

    Study sample size in prospective birth cohorts of prenatal exposure to persistent organic pollutants (POPs) is limited by costs and logistics of follow-up. Increasing sample size at the time of health assessment would be beneficial if predictive tools could reliably back-extrapolate prenatal levels in newly enrolled children. We evaluated the performance of three approaches to back-extrapolate prenatal levels of p,p'-dichlorodiphenyltrichloroethane (DDT), p,p'-dichlorodiphenyldichloroethylene (DDE) and four polybrominated diphenyl ether (PBDE) congeners from maternal and/or child levels 9 years after delivery: a pharmacokinetic model and predictive models using deletion/substitution/addition or Super Learner algorithms. Model performance was assessed using the root mean squared error (RMSE), R2, and slope and intercept of the back-extrapolated versus measured levels. Super Learner outperformed the other approaches with RMSEs of 0.10 to 0.31, R2s of 0.58 to 0.97, slopes of 0.42 to 0.93 and intercepts of 0.08 to 0.60. Typically, models performed better for p,p'-DDT/E than PBDE congeners. The pharmacokinetic model performed well when back-extrapolating prenatal levels from maternal levels for compounds with longer half-lives like p,p'-DDE and BDE-153. Results demonstrate the ability to reliably back-extrapolate prenatal POP levels from levels 9 years after delivery, with Super Learner performing best based on our fit criteria. PMID:25698216
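
    The simplest pharmacokinetic back-extrapolation treats the level measured nine years after delivery as the result of first-order elimination from the prenatal level. The sketch below shows only that minimal one-compartment idea with an assumed half-life; it ignores ongoing exposure and growth dilution and is not the calibrated pharmacokinetic or Super Learner models evaluated in the study.

        import math

        def back_extrapolate(level_now, years_elapsed, half_life_years):
            """One-compartment, first-order back-extrapolation: C0 = C(t) * exp(k * t)."""
            k = math.log(2.0) / half_life_years
            return level_now * math.exp(k * years_elapsed)

        # Illustrative only: a p,p'-DDE-like compound with an assumed ~7-year half-life,
        # measured at 50 ng/g lipid nine years after delivery.
        print(round(back_extrapolate(50.0, 9.0, 7.0), 1), "ng/g lipid (prenatal estimate)")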

  19. Definition of Magnetic Exchange Length

    SciTech Connect

    Abo, GS; Hong, YK; Park, J; Lee, J; Lee, W; Choi, BC

    2013-08-01

    The magnetostatic exchange length is an important parameter in magnetics as it measures the relative strength of exchange and self-magnetostatic energies. Its use can be found in areas of magnetics including micromagnetics, soft and hard magnetic materials, and information storage. The exchange length is of primary importance because it governs the width of the transition between magnetic domains. Unfortunately, there is some confusion in the literature between the magnetostatic exchange length and a similar distance concerning magnetization reversal mechanisms in particles known as the characteristic length. This confusion is aggravated by the common usage of two different systems of units, SI and cgs. This paper attempts to clarify the situation and recommends equations in both systems of units.
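
    The abstract does not quote the recommended equations, but the SI form most commonly used for the magnetostatic exchange length is l_ex = sqrt(2A/(μ0 Ms²)). Treat the sketch below as that conventional definition evaluated for permalloy-like parameters, not necessarily the expression recommended in the paper.

        import math

        MU0 = 4.0e-7 * math.pi   # vacuum permeability in T m / A

        def exchange_length_si(a_ex, m_s):
            """Commonly quoted SI form l_ex = sqrt(2*A / (mu0 * Ms**2)); A in J/m, Ms in A/m."""
            return math.sqrt(2.0 * a_ex / (MU0 * m_s ** 2))

        # Permalloy-like parameters: A ~ 1.3e-11 J/m, Ms ~ 8.0e5 A/m  ->  l_ex ~ 5.7 nm.
        print(round(exchange_length_si(1.3e-11, 8.0e5) * 1e9, 1), "nm")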

  20. Space shuttle guidance, navigation and control equation document no. 4: Precision state and filter weighting matrix extrapolation

    NASA Technical Reports Server (NTRS)

    Robertson, W. M.

    1972-01-01

    The Precision State and Filter Weighting Matrix Extrapolation Routine is described, which provides the capability to extrapolate any spacecraft geocentric state vector either backwards or forwards in time through a force field consisting of the earth's primary central-force gravitational attraction and a superimposed perturbing acceleration. The routine also provides the capability of extrapolating the filter-weighting matrix along the precision trajectory. This matrix is a square root form of the error covariance matrix and contains statistical information relative to the accuracies of the state vectors and certain other optionally estimated quantities. The routine is a coded algorithm for the numerical solution of modified forms of the basic differential equations which are satisfied by the geocentric state vector of the spacecraft's center of mass and by the filter-weighting matrix.

  1. Extrapolated experimental critical parameters of unreflected and steel-reflected massive enriched uranium metal spherical and hemispherical assemblies

    SciTech Connect

    Rothe, R.E.

    1997-12-01

    Sixty-nine critical configurations of up to 186 kg of uranium are reported from very early experiments (1960s) performed at the Rocky Flats Critical Mass Laboratory near Denver, Colorado. Enriched (93%) uranium metal spherical and hemispherical configurations were studied. All were thick-walled shells except for two solid hemispheres. Experiments were either essentially unreflected or they included central and/or external regions of mild steel. No liquids were involved. Critical parameters are derived from extrapolations beyond subcritical data. Extrapolations, rather than more precise interpolations between slightly supercritical and slightly subcritical configurations, were necessary because experiments involved manually assembled configurations. Many extrapolations were quite long, but the general lack of curvature in the subcritical region lends credibility to their validity. In addition to delayed critical parameters, a procedure is offered which might permit the determination of prompt critical parameters as well for the same cases. This conjectured procedure is not based on any strong physical arguments.
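
    A standard way to extrapolate to critical from subcritical measurements is the inverse-multiplication (1/M) method: plot 1/M against the parameter being varied and extrapolate to 1/M = 0. The sketch below illustrates that generic technique with synthetic data; it is not necessarily the exact extrapolation procedure used in the Rocky Flats report.

        import numpy as np

        # Neutron multiplication M measured for a series of subcritical masses (synthetic data).
        mass_kg = np.array([120.0, 140.0, 160.0, 175.0, 182.0])
        M = np.array([3.8, 5.4, 9.6, 22.7, 62.5])

        # Inverse-multiplication extrapolation: fit 1/M against mass and find the mass
        # at which 1/M reaches zero, i.e. the extrapolated critical mass.
        slope, intercept = np.polyfit(mass_kg, 1.0 / M, 1)
        print("extrapolated critical mass ~", round(-intercept / slope, 1), "kg")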

  2. Some initial applications of the new BEM extrapolation code for reconstructing the coronal magnetic field above solar active regions

    NASA Astrophysics Data System (ADS)

    Li, Y.; Yan, Y.; Su, J.; Devel, M.; Song, G.

    Magnetic fields play an important role in many physical events occurring in the solar atmosphere. However, reliable magnetic field measurements in the corona still face technical difficulties that are unconquerable today. For many years, photospheric magnetograms have been combined with various field extrapolation methods to reconstruct the magnetic fields in the corona under the force-free field assumption. In this paper we present some initial results obtained by our recently rebuilt BEM extrapolation code for reconstructing the coronal magnetic field above solar active regions. Equipped with 10 iterative solvers of linear systems found in the SPARSKIT package, the new BEM extrapolation code has the merits of efficiency and easy usage. Some 3D visualization codes are also developed, with which the twists and sigmoidal shapes in the reconstructed 3D magnetic fields can be illustrated more properly.

  3. Persistence Length of Stable Microtubules

    NASA Astrophysics Data System (ADS)

    Hawkins, Taviare; Mirigian, Matthew; Yasar, M. Selcuk; Ross, Jennifer

    2011-03-01

    Microtubules are a vital component of the cytoskeleton. As the most rigid of the cytoskeleton filaments, they give shape and support to the cell. They are also essential for intracellular traffic by providing the roadways onto which organelles are transported, and they are required to reorganize during cellular division. To perform its function in the cell, the microtubule must be rigid yet dynamic. We are interested in how the mechanical properties of stable microtubules change over time. Some "stable" microtubules of the cell are recycled after days, such as in the axons of neurons or the cilia and flagella. We measured the persistence length of freely fluctuating taxol-stabilized microtubules over the span of a week and analyzed them via Fourier decomposition. As measured on a daily basis, the persistence length is independent of the contour length. Over the span of the week, however, both the accuracy of the measurement and the persistence length vary. We also studied how fluorescently labeling the microtubule affects the persistence length and observed that a higher labeling ratio corresponded to greater flexibility. National Science Foundation Grant No: 0928540 to JLR.

  4. Hazards in determination and extrapolation of depositional rates of recent sediments

    SciTech Connect

    Isphording, W.C. (Dept. of Geology-Geography); Jackson, R.B.

    1992-01-01

    Calculation of depositional rates for the past 250 years in estuarine sediments at sites in the Gulf of Mexico has been carried out by measuring changes that have taken place on bathymetric charts. Depositional rates during the past 50 to 100 years can similarly be estimated by this method and may often be confirmed by relatively abrupt changes at depth in the content of certain heavy metals in core samples. Analysis of bathymetric charts of Mobile Bay, Alabama, dating back to 1858, disclosed an essentially constant sedimentation rate of 3.9 mm/year. Apalachicola Bay, Florida, similarly, was found to have a rate of 5.4 mm/year. Though, in theory, these rates should provide reliable estimates of the influx of sediment into the estuaries, considerable caution must be used in attempting to extrapolate them to any depth in core samples. The passage of hurricanes in the Gulf of Mexico is a common event and can rapidly, and markedly, alter the bathymetry of an estuary. The passage of Hurricane Elena near Apalachicola Bay in 1985, for example, removed over 84 million tons of sediment from the bay and caused an average deepening of nearly 50 cm. The impact of Hurricane Frederick on Mobile Bay in 1979 was more dramatic. During the approximately 7-hour period when winds from this storm impacted the estuary, nearly 290 million tons of sediment was driven out of the bay and an average deepening of 46 cm was observed. With such weather events common along the Gulf Coast, it is not surprising that when radioactive age dating methods were used to obtain dates of approximately 7,500 years for organic remains in cores from Apalachicola Bay, the depths at which the dated materials were obtained in the cores corresponded to depositional rates of only 0.4 mm/year, or one-tenth of that obtained from historic bathymetric data. Because storm scour effects are a common occurrence in the Gulf, no attempt should be made to extrapolate bathymetric-derived rates to beyond the age of the charts.
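
    The mismatch described above is simple arithmetic: a constant-rate model predicts very different burial depths for 7,500-year-old material depending on which rate is assumed. A quick check using the 5.4 mm/yr and 0.4 mm/yr figures quoted in the abstract:

        def implied_depth_m(years, rate_mm_per_yr):
            """Depth of burial implied by a constant depositional rate."""
            return years * rate_mm_per_yr / 1000.0

        years = 7500
        print(implied_depth_m(years, 5.4))   # ~40.5 m expected from the bathymetric-chart rate
        print(implied_depth_m(years, 0.4))   # ~3 m, the rate actually implied by the dated cores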

  5. Hybrid superconducting a.c. current limiter extrapolation 63 kV-1250 A

    NASA Astrophysics Data System (ADS)

    Tixador, P.; Levêque, J.; Brunet, Y.; Pham, V. D.

    1994-04-01

    Short-circuit currents on large electrical grids continue to increase. In this context, and following the development of a.c. superconducting wires, a.c. superconducting current limiters have emerged. These limiters limit fault currents almost instantaneously, without fault detection or an external trigger command, and may be suitable for high voltages. They are based on the natural transition from the superconducting state to the highly resistive normal state when the critical current of a superconducting coil is exceeded; this coil either limits the current itself or triggers the limitation. Our limiter consists essentially of two copper windings coupled through a saturable magnetic circuit and a non-inductively wound superconducting coil carrying a reduced current compared with the line current. This design allows a simple superconducting cable and reduced cryogenic losses, but the dielectric stresses during faults are high. A small model (150 V/50 A) has experimentally validated the design. An industrial-scale current limiter is designed, and comparisons between this design and other superconducting current limiters are given.

  6. When Does Length Cause the Word Length Effect?

    ERIC Educational Resources Information Center

    Jalbert, Annie; Neath, Ian; Bireta, Tamra J.; Surprenant, Aimee M.

    2011-01-01

    The word length effect, the finding that lists of short words are better recalled than lists of long words, has been termed one of the benchmark findings that any theory of immediate memory must account for. Indeed, the effect led directly to the development of working memory and the phonological loop, and it is viewed as the best remaining…

  7. Continuously variable focal length lens

    DOEpatents

    Adams, Bernhard W; Chollet, Matthieu C

    2013-12-17

    A material preferably in crystal form having a low atomic number such as beryllium (Z=4) provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned cylindrical slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.

  8. Continuous lengths of oxide superconductors

    DOEpatents

    Kroeger, Donald M.; List, III, Frederick A.

    2000-01-01

    A layered oxide superconductor prepared by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon. A continuous length of a second substrate ribbon is overlaid on the first substrate ribbon. Sufficient pressure is applied to form a bound layered superconductor precursor powder between the first substrate ribbon and the second substrate ribbon. The layered superconductor precursor is then heat treated to establish the oxide superconducting phase. The layered oxide superconductor has a smooth interface between the substrate and the oxide superconductor.

  9. Overview of bunch length measurements.

    SciTech Connect

    Lumpkin, A. H.

    1999-02-19

    An overview of particle and photon beam bunch length measurements is presented in the context of free-electron laser (FEL) challenges. Particle-beam peak current is a critical factor in obtaining adequate FEL gain for both oscillators and self-amplified spontaneous emission (SASE) devices. Since measurement of charge is a standard measurement, the bunch length becomes the key issue for ultrashort bunches. Both time-domain and frequency-domain techniques are presented in the context of using electromagnetic radiation over eight orders of magnitude in wavelength. In addition, the measurement of microbunching in a micropulse is addressed.

  10. Hartree-Fock mass formulas and extrapolation to new mass data

    NASA Astrophysics Data System (ADS)

    Goriely, S.; Samyn, M.; Heenen, P.-H.; Pearson, J. M.; Tondeur, F.

    2002-08-01

    The two previously published Hartree-Fock (HF) mass formulas, HFBCS-1 and HFB-1 (HF-Bogoliubov), are shown to be in poor agreement with new Audi-Wapstra mass data. The problem lies first with the prescription adopted for the cutoff of the single-particle spectrum used with the δ-function pairing force, and second with the Wigner term. We find an optimal mass fit if the spectrum is cut off both above EF+15 MeV and below EF-15 MeV, EF being the Fermi energy of the nucleus in question. In addition to the Wigner term of the form VW exp(-λ|N-Z|/A) already included in the two earlier HF mass formulas, we find that a second Wigner term linear in |N-Z| leads to a significant improvement in lighter nuclei. These two features are incorporated into our new Hartree-Fock-Bogoliubov model, which leads to much improved extrapolations. The 18 parameters of the model are fitted to the 2135 measured masses for N,Z>=8 with an rms error of 0.674 MeV. With this parameter set a complete mass table, labeled HFB-2, has been constructed, going from one drip line to the other, up to Z=120. The new pairing-cutoff prescription favored by the new mass data leads to weaker neutron-shell gaps in neutron-rich nuclei.
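
    The Wigner correction described above can be written, following the abstract, as E_W = V_W exp(-λ|N-Z|/A) plus a second term linear in |N-Z|. A toy evaluation is sketched below; the parameter values are placeholders for illustration only, not the fitted HFB-2 values.

      import numpy as np

      def wigner_correction(N, Z, V_W=-2.0, lam=300.0, V_W2=0.9):
          """Wigner energy (MeV): V_W*exp(-lam*|N-Z|/A) + V_W2*|N-Z|.
          All parameter values here are illustrative placeholders."""
          A = N + Z
          I = abs(N - Z)
          return V_W * np.exp(-lam * I / A) + V_W2 * I

      # the exponential term contributes essentially only near N = Z,
      # while the linear term mainly matters for lighter nuclei off the N = Z line
      print(wigner_correction(28, 28), wigner_correction(30, 26))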

  11. Enhancing resolution properties of array antennas via field extrapolation: application to MIMO systems

    NASA Astrophysics Data System (ADS)

    Reggiannini, Ruggero

    2015-12-01

    This paper is concerned with the spatial properties of linear arrays of antennas spaced less than half a wavelength apart. Possible applications are in multiple-input multiple-output (MIMO) wireless links, for the purpose of increasing the spatial multiplexing gain in a scattering environment, as well as in other areas such as sonar and radar. With reference to a receiving array, we show that knowledge of the received field can be extrapolated beyond the actual array size by exploiting the finiteness of the interval of real directions from which the field components impinge on the array. This property makes it possible to improve the angular resolution of the array. A simple signal processing technique is proposed that forms a set of beams capable of covering the entire horizon uniformly with an angular resolution better than that achievable by a classical uniform-weighting, half-wavelength-spaced linear array. The results are also applicable to active arrays. As the above approach leads to arrays operating in the super-directive regime, we discuss all related critical aspects, such as sensitivity to external and internal noise and to array imperfections, as well as bandwidth, so as to identify the basic design criteria ensuring the array's feasibility.

  12. Extrapolative Capability of Two Models That Estimate the Soil Water Retention Curve between Saturation and Oven Dryness

    PubMed Central

    Lu, Sen; Ren, Tusheng; Lu, Yili; Meng, Ping; Sun, Shiyou

    2014-01-01

    Accurate estimation of the soil water retention curve (SWRC) in the dry region is required to describe the relation between soil water content and matric suction from saturation to oven dryness. In this study, the extrapolative capability of two models for predicting the complete SWRC from limited ranges of soil water retention data was evaluated. When the model parameters were obtained from SWRC data in the 0–1500 kPa range, the FX model (Fredlund and Xing, 1994) estimations agreed well with measurements from saturation to oven dryness, with RMSEs less than 0.01. The GG model (Groenevelt and Grant, 2004) produced larger errors in the dry region, with significantly larger RMSEs and MEs than the FX model. Further evaluation indicated that when SWRC measurements in the 0–100 kPa suction range were used for model establishment, the FX model was capable of producing acceptable SWRCs across the entire water content range. For higher accuracy, the FX model requires soil water retention data at least in the 0–300 kPa range to extend the SWRC to oven dryness. Compared with the Khlosi et al. (2006) model, which requires measurements in the 0–500 kPa range to reproduce the complete SWRC, the FX model has the advantage of requiring fewer SWRC measurements. Thus the FX modeling approach has the potential to eliminate the need for measuring soil water retention in the dry range. PMID:25464503
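
    The Fredlund and Xing (1994) retention function mentioned above has the form θ(ψ) = C(ψ)·θs/{ln[e + (ψ/a)^n]}^m, where the correction factor C(ψ) forces the water content to zero at oven dryness (about 10^6 kPa). A minimal sketch of evaluating it follows; the parameter values are arbitrary illustrations, not fitted values from the study.

      import numpy as np

      def fx_swrc(psi, theta_s, a, n, m, psi_r=1500.0):
          """Fredlund-Xing (1994) soil water retention curve.
          psi: matric suction (kPa); theta_s: saturated water content.
          a, n, m: fitting parameters; psi_r: residual suction (kPa).
          C(psi) forces the water content to zero at oven dryness (1e6 kPa)."""
          psi = np.asarray(psi, dtype=float)
          C = 1.0 - np.log(1.0 + psi / psi_r) / np.log(1.0 + 1.0e6 / psi_r)
          return C * theta_s / np.log(np.e + (psi / a) ** n) ** m

      # example: evaluate from near saturation to oven dryness
      suction = np.logspace(-1, 6, 8)            # 0.1 kPa ... 1e6 kPa
      print(fx_swrc(suction, theta_s=0.45, a=10.0, n=1.5, m=1.0))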

  13. Cross-Species Extrapolation of Models for Predicting Lead Transfer from Soil to Wheat Grain

    PubMed Central

    Liu, Ke; Lv, Jialong; Dai, Yunchao; Zhang, Hong; Cao, Yingfei

    2016-01-01

    The transfer of Pb from the soil to crops is a serious food hygiene security problem in China because of industrial, agricultural, and historical contamination. In this study, the characteristics of exogenous Pb transfer from 17 Chinese soils to a popular wheat variety (Xiaoyan 22) were investigated. In addition, bioaccumulation prediction models of Pb in grain were obtained based on soil properties. The results of the analysis showed that pH and OC were the most important factors contributing to Pb uptake by wheat grain. Using a cross-species extrapolation approach, the Pb uptake prediction models for cultivar Xiaoyan 22 in different soil Pb levels were satisfactorily applied to six additional non-modeled wheat varieties to develop a prediction model for each variety. Normalization of the bioaccumulation factor (BAF) to specific soil physico-chemistry is essential, because doing so could significantly reduce the intra-species variation of different wheat cultivars in predicted Pb transfer and eliminate the influence of soil properties on ecotoxicity parameters for organisms of interest. Finally, the prediction models were successfully verified against published data (including other wheat varieties and crops) and used to evaluate the ecological risk of Pb for wheat in contaminated agricultural soils. PMID:27518712
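
    Soil-to-grain transfer models of this kind are typically multiple linear regressions of the (log-transformed) bioconcentration factor on soil properties, which can then be used to normalize measured BCFs to a reference soil before cross-species application. The sketch below is hedged: the coefficients, reference soil, and functional form are hypothetical, not the fitted models of the study.

      import numpy as np

      # hypothetical regression: log10(BCF) = b0 + b1*pH + b2*log10(OC)
      b0, b1, b2 = 0.5, -0.35, 0.6          # placeholder coefficients

      def predict_bcf(pH, oc_percent):
          """Predict the Pb bioconcentration factor (grain/soil) for a given soil."""
          return 10 ** (b0 + b1 * pH + b2 * np.log10(oc_percent))

      def normalized_bcf(bcf_measured, pH, oc_percent, pH_ref=7.0, oc_ref=2.0):
          """Normalize a measured BCF to a reference soil, removing the part of
          the inter-soil variation explained by the regression."""
          return bcf_measured * predict_bcf(pH_ref, oc_ref) / predict_bcf(pH, oc_percent)

      print(predict_bcf(6.5, 1.8), normalized_bcf(0.05, 6.5, 1.8))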

  14. A physiologically based pharmacokinetic model for quinoxaline-2-carboxylic acid in rats, extrapolation to pigs.

    PubMed

    Yang, X; Zhou, Y-F; Yu, Y; Zhao, D-H; Shi, W; Fang, B-H; Liu, Y-H

    2015-02-01

    A multi-compartment physiologically based pharmacokinetic (PBPK) model describing the disposition of cyadox (CYX) and its metabolite quinoxaline-2-carboxylic acid (QCA) after a single oral administration (200 mg/kg b.w. of CYX) was developed in rats. To account for interspecies differences in physiology and physiochemistry, the model's performance was also validated against a pharmacokinetic data set from swine. The model included six compartments: blood, muscle, liver, kidney, adipose, and a combined compartment for the remaining tissues. The model was parameterized using rat plasma and tissue concentration data generated in this study. Model simulations were performed with a commercially available software program (acslX Libero, version 3.0.2.1). The results supported the validity of the model, with simulated tissue concentrations within the range of the observations. The correlation coefficients of the predicted and experimentally determined values for plasma, liver, kidney, adipose, and muscle in rats were 0.98, 0.98, 0.98, 0.99, and 0.95, respectively. The rat model parameters were then extrapolated to pigs to estimate QCA disposition in tissues and validated against tissue concentrations of QCA in swine. The correlation coefficients between the predicted and observed values were over 0.90. This model could provide a foundation for developing more reliable pig models once more data are available.
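
    A PBPK model of this type is a set of coupled mass-balance ODEs, one per compartment, with flow-limited exchange between blood and tissues; cross-species extrapolation then amounts to rescaling flows, volumes, and clearances. The sketch below is greatly reduced (blood, liver, and a lumped rest-of-body compartment only), and every parameter value is a placeholder rather than a rat or swine value from the study.

      import numpy as np
      from scipy.integrate import solve_ivp

      # placeholder physiological parameters (flow-limited, single species)
      Q = {"liver": 0.8, "rest": 2.0}                      # tissue blood flows (L/h)
      V = {"blood": 0.015, "liver": 0.01, "rest": 0.2}     # compartment volumes (L)
      P = {"liver": 3.0, "rest": 1.0}                      # tissue:blood partition coefficients
      CL_hep = 0.3                                         # hepatic clearance (L/h)

      def pbpk(t, y):
          """dC/dt for blood, liver and rest-of-body compartments."""
          Cb, Cl, Cr = y
          venous = Q["liver"] * Cl / P["liver"] + Q["rest"] * Cr / P["rest"]
          dCb = (venous - (Q["liver"] + Q["rest"]) * Cb) / V["blood"]
          dCl = (Q["liver"] * (Cb - Cl / P["liver"]) - CL_hep * Cl / P["liver"]) / V["liver"]
          dCr = Q["rest"] * (Cb - Cr / P["rest"]) / V["rest"]
          return [dCb, dCl, dCr]

      # single bolus into blood; species extrapolation would rescale Q, V and CL_hep
      sol = solve_ivp(pbpk, (0, 24), [10.0, 0.0, 0.0], dense_output=True)
      print(sol.y[:, -1])      # concentrations after 24 h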

  15. The Risk of Extrapolation in Neuroanatomy: The Case of the Mammalian Vomeronasal System†

    PubMed Central

    Salazar, Ignacio; Quinteiro, Pablo Sánchez

    2009-01-01

    The sense of smell plays a crucial role in mammalian social and sexual behaviour, identification of food, and detection of predators. Nevertheless, mammals vary in their olfactory ability. One reason for this concerns the degree of development of their pars basalis rhinencephali, an anatomical feature that has been considered in classifying this group of animals as macrosmatic, microsmatic or anosmatic. In mammals, different structures are involved in detecting odours: the main olfactory system, the vomeronasal system (VNS), and two subsystems, namely the ganglion of Grüneberg and the septal organ. Here, we review and summarise some aspects of the comparative anatomy of the VNS and its putative relationship to other olfactory structures. Even in the macrosmatic group, morphological diversity is an important characteristic of the VNS, specifically of the vomeronasal organ and the accessory olfactory bulb. We conclude that it is a big mistake to extrapolate anatomical data of the VNS from species to species, even in the case of relatively close evolutionary proximity between them. We propose to study other mammalian VNS than those of rodents in depth as a way to clarify its exact role in olfaction. Our experience in this field leads us to hypothesise that the VNS, considered for all mammalian species, could be a system undergoing involution or regression, and could serve as one more integrated olfactory subsystem. PMID:19949452

  16. QSAR analysis and data extrapolation among mammals in a series of aliphatic alcohols

    SciTech Connect

    Tichy, M.; Trcka, V.; Roth, Z.; Krivucova, M.

    1985-09-01

    Concepts of QSAR analysis and biological similarity models are combined for use in extrapolation of LD50 values after IP administration of a series of aliphatic alcohols (C1-C5) to mouse, hamster, rat, guinea pig, and rabbit. It was found that although a close correlation exists between LD50 values after IP and IV administration for mouse and rat, the QSAR obtained with LD50 values after IV administration is not suitable for predicting LD50 values after IP administration in rabbit. Different transformation or distribution processes in mouse, rat, and rabbit after the two types of administration might be the reason. The LD50 values (expressed in mmole/m² of body surface) seem to be independent of the mammalian species used (at least within the mouse, rat, hamster, and probably guinea pig series). This fact makes it possible to predict reasonable values of LD50 after IP administration for rabbit. Expression of toxicity in mmole/m² of body surface may be used in toxicological studies. 24 references, 2 figures, 8 tables.

  17. Gaussian process model for extrapolation of scattering observables for complex molecules: From benzene to benzonitrile

    SciTech Connect

    Cui, Jie; Krems, Roman V.; Li, Zhiying

    2015-10-21

    We consider a problem of extrapolating the collision properties of a large polyatomic molecule A–H to make predictions of the dynamical properties for another molecule related to A–H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A–X. We assume that the effect of the −H →−X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C₆H₅CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He–C₆H₆ collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C₆H₅CN with He.
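
    The Gaussian Process step described above amounts to regressing the computed observable (here a cross section) on the unknown potential parameters and then propagating a physically motivated range of those parameters through the fitted surrogate. A hedged sketch using scikit-learn follows; the training data are synthetic placeholders, not trajectory results.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      # synthetic training set: two potential parameters (e.g. well depth and range
      # of the He-CN interaction) -> cross section from trajectories (placeholder)
      rng = np.random.default_rng(0)
      X = rng.uniform([0.5, 3.0], [2.0, 5.0], size=(30, 2))
      y = 50.0 + 20.0 * X[:, 0] - 4.0 * X[:, 1] + rng.normal(0, 1.0, 30)

      gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.5, 0.5]),
                                    normalize_y=True).fit(X, y)

      # propagate physically reasonable parameter variations -> prediction interval
      samples = rng.uniform([0.7, 3.2], [1.8, 4.8], size=(2000, 2))
      pred = gp.predict(samples)
      print(pred.min(), pred.max())      # interval of predicted cross sections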

  18. Cross-Species Extrapolation of Models for Predicting Lead Transfer from Soil to Wheat Grain.

    PubMed

    Liu, Ke; Lv, Jialong; Dai, Yunchao; Zhang, Hong; Cao, Yingfei

    2016-01-01

    The transfer of Pb from the soil to crops is a serious food hygiene security problem in China because of industrial, agricultural, and historical contamination. In this study, the characteristics of exogenous Pb transfer from 17 Chinese soils to a popular wheat variety (Xiaoyan 22) were investigated. In addition, bioaccumulation prediction models of Pb in grain were obtained based on soil properties. The results of the analysis showed that pH and OC were the most important factors contributing to Pb uptake by wheat grain. Using a cross-species extrapolation approach, the Pb uptake prediction models for cultivar Xiaoyan 22 in different soil Pb levels were satisfactorily applied to six additional non-modeled wheat varieties to develop a prediction model for each variety. Normalization of the bioaccumulation factor (BAF) to specific soil physico-chemistry is essential, because doing so could significantly reduce the intra-species variation of different wheat cultivars in predicted Pb transfer and eliminate the influence of soil properties on ecotoxicity parameters for organisms of interest. Finally, the prediction models were successfully verified against published data (including other wheat varieties and crops) and used to evaluate the ecological risk of Pb for wheat in contaminated agricultural soils. PMID:27518712

  19. Kinetics of HMX and CP Decomposition and Their Extrapolation for Lifetime Assessment

    SciTech Connect

    Burnham, A K; Weese, R K; Andrzejewski, W J

    2004-11-18

    Decomposition kinetics are determined for HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate) separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself from a diverse set of experiments is linear from 120 to 260 C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggests a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself for 50 months at 100 C. Even with this acceleration, however, it would take ~10,000 years to achieve 10% decomposition at ~30 C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100-120 C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.
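
    An Arrhenius extrapolation of the kind used above scales the time to a fixed extent of decomposition (here 10%) by exp(Ea/RT) between temperatures. A minimal sketch follows, using the 165 kJ/mol activation energy quoted above but an invented reference time and temperature, so the numerical output is illustrative only.

      import numpy as np

      R = 8.314            # gas constant, J/(mol K)
      Ea = 165e3           # apparent activation energy from the Arrhenius plot (J/mol)

      def time_to_10pct(T_celsius, t_ref_hours=24.0, T_ref_celsius=120.0):
          """Extrapolate the time to 10% decomposition from a reference condition.
          The reference time and temperature here are illustrative, not measured values."""
          T = T_celsius + 273.15
          T_ref = T_ref_celsius + 273.15
          return t_ref_hours * np.exp(Ea / R * (1.0 / T - 1.0 / T_ref))

      # years to reach 10% decomposition at 30 C under these assumptions
      print(time_to_10pct(30.0) / (24 * 365))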

  20. Dynamic effects of predators on cyclic voles: field experimentation and model extrapolation.

    PubMed Central

    Korpimäki, Erkki; Norrdahl, Kai; Klemola, Tero; Pettersen, Terje; Stenseth, Nils Chr

    2002-01-01

    Mechanisms generating the well-known 3-5 year cyclic fluctuations in densities of northern small rodents (voles and lemmings) have remained an ecological puzzle for decades. The hypothesis that these fluctuations are caused by delayed density-dependent impacts of predators was tested by replicated field experimentation in western Finland. We reduced densities of all main mammalian and avian predators through a 3 year vole cycle and compared vole abundances between four reduction and four control areas (each 2.5-3 km(2)). The reduction of predator densities increased the autumn density of voles fourfold in the low phase, accelerated the increase twofold, increased the autumn density of voles twofold in the peak phase, and retarded the initiation of decline of the vole cycle. Extrapolating these experimental results to their expected long-term dynamic effects through a demographic model produces changes from regular multiannual cycles to annual fluctuations with declining densities of specialist predators. This supports the findings of the field experiment and is in agreement with the predation hypothesis. We conclude that predators may indeed generate the cyclic population fluctuations of voles observed in northern Europe. PMID:12028754

  1. An extrapolation method for compressive strength prediction of hydraulic cement products

    SciTech Connect

    Siqueira Tango, C.E. de

    1998-07-01

    The basis for the AMEBA Method is presented. A strength-time function is used to extrapolate the predicted cementitious material strength at a late (ALTA) age from two earlier-age strengths--medium (MEDIA) and low (BAIXA) ages. The experimental basis for the method is data from the IPT-Brazil laboratory and the field, including a long-term study on concrete, research on limestone, slag, and fly-ash additions, and quality control data from a cement factory, a shotcrete tunnel lining, and a grout for structural repair. The applicability of the method was also verified for high-performance concrete with silica fume. The formula for predicting the late-age (e.g., 28-day) strength, for a given set of involved ages (e.g., 28, 7, and 2 days), is normally a function only of the two earlier ages' (e.g., 7- and 2-day) strengths. This equation has been shown to be independent of materials variations, including cement brand, and is also easy to use graphically. Using the AMEBA method, and needing to know only the type of cement used, it has been possible to predict strengths satisfactorily, even without the preliminary tests that are required in other methods.
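
    The extrapolation idea can be illustrated under a common simplifying assumption: if strength grows linearly with the logarithm of age, the two earlier-age strengths fix the line and the late-age strength follows. The sketch below encodes only that assumption; it is not the published IPT/AMEBA formula.

      import math

      def predict_late_strength(s_low, s_med, t_low=2.0, t_med=7.0, t_high=28.0):
          """Extrapolate the 28-day strength from 2- and 7-day strengths,
          assuming strength is linear in log(age)."""
          slope = (s_med - s_low) / math.log(t_med / t_low)
          return s_med + slope * math.log(t_high / t_med)

      # e.g. 18 MPa at 2 days and 30 MPa at 7 days -> predicted 28-day strength (MPa)
      print(predict_late_strength(18.0, 30.0))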

  2. Applying Bayesian Maximum Entropy to extrapolating local-scale water consumption in Maricopa County, Arizona

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Jae; Wentz, Elizabeth A.

    2008-01-01

    Understanding water use in the context of urban growth and climate variability requires an accurate representation of regional water use. This is challenging, however, because water use data are often unavailable, and when they are available, they are geographically aggregated to protect the identity of individuals. The present paper aims to map local-scale estimates of water use in Maricopa County, Arizona, on the basis of data aggregated to census tracts and measured only in the City of Phoenix. To meet our research goals we describe two types of data uncertainty sources (i.e., extrapolation and downscaling processes) and then generate data that account for these uncertainty sources (i.e., soft data). Our results show that the Bayesian Maximum Entropy (BME) mapping method of modern geostatistics is a theoretically sound approach for assimilating the soft data into the mapping process. The BME approach leads to increased mapping accuracy over classical geostatistics, which does not account for the soft data. The resulting BME maps therefore provide useful knowledge on local water use variability across the whole county, which is further applied to understanding the causal factors of urban water demand.

  3. Kinetics of HMX and CP Decomposition and Their Extrapolation for Lifetime Assessment

    SciTech Connect

    Burnham, A K; Weese, R K; Andrzejewski, W J

    2005-03-10

    Decomposition kinetics are determined for HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate) separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long-term thermal aging experiments. An Arrhenius plot of the 10% level of HMX decomposition by itself from a diverse set of experiments is linear from 120 to 260 C, with an apparent activation energy of 165 kJ/mol. Similar but less extensive thermal analysis data for the mixture suggests a slightly lower activation energy for the mixture, and an analogous extrapolation is consistent with the amount of gas observed in the long-term detonator aging experiments, which is about 30 times greater than expected from HMX by itself for 50 months at 100 C. Even with this acceleration, however, it would take ~10,000 years to achieve 10% decomposition at ~30 C. Correspondingly, negligible decomposition is predicted by this kinetic model for a few decades aging at temperatures slightly above ambient. This prediction is consistent with additional sealed-tube aging experiments at 100-120 C, which are estimated to have an effective thermal dose greater than that from decades of exposure to temperatures slightly above ambient.

  4. Quality of the log-geometric distribution extrapolation for smaller undiscovered oil and gas pool size

    USGS Publications Warehouse

    Chenglin, L.; Charpentier, R.R.

    2010-01-01

    The U.S. Geological Survey procedure for estimating the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for their sensitivity to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, equations for the ratio of the mean size to the lower size-class boundary are derived. For a specific log-geometric distribution, the ratio of the mean size to the lower size-class boundary is the same for every size class. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada, and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize as the number of discovered pools increases. For a log-geometric distribution, the shape factor becomes stable when the number of discovered pools exceeds 50, and the shape factor is influenced by the exploration efficiency when the exploration efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and that undiscovered oil and gas resources estimated through log-geometric distribution extrapolation are smaller than the actual values. © 2010 International Association for Mathematical Geology.

  5. Finite length Taylor Couette flow

    NASA Technical Reports Server (NTRS)

    Streett, C. L.; Hussaini, M. Y.

    1987-01-01

    Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to the two-cell/one-cell exchange process and are compared with recent experiments.

  6. Incubation length of dabbling ducks

    USGS Publications Warehouse

    Wells-Berlin, A. M.; Prince, H.H.; Arnold, T.W.

    2005-01-01

    We collected unincubated eggs from wild Mallard (Anas platyrhynchos), Gadwall (A. strepera), Blue-winged Teal (A. discors), and Northern Shoveler (A. clypeata) nests and artificially incubated them at 37.5 °C. Average incubation lengths of Mallard, Gadwall, and Northern Shoveler eggs did not differ from their wild-nesting counterparts, but artificially incubated Blue-winged Teal eggs required an additional 1.7 days to hatch, suggesting that wild-nesting teal incubated more effectively. A small sample of Mallard, Gadwall, and Northern Shoveler eggs artificially incubated at 38.3 °C hatched 1 day sooner, indicating that incubation temperature affected incubation length. Mean incubation length of Blue-winged Teal declined by 1 day for each 11-day delay in nesting, but we found no such seasonal decline among Mallards, Gadwalls, or Northern Shovelers. There is no obvious explanation for the seasonal reduction in incubation length for Blue-winged Teal eggs incubated in a constant environment, and the phenomenon deserves further study. © The Cooper Ornithological Society 2005.

  7. EVALUATING TOOLS AND MODELS USED FOR QUANTITATIVE EXTRAPOLATION OF IN VITRO TO IN VIVO DATA FOR NEUROTOXICANTS*

    EPA Science Inventory

    There are a number of risk management decisions, which range from prioritization for testing to quantitative risk assessments. The utility of in vitro studies in these decisions depends on how well the results of such data can be qualitatively and quantitatively extrapolated to i...

  8. Improving the reliability of the background extrapolation in transmission electron microscopy elemental maps by using three pre-edge windows.

    PubMed

    Heil, Tobias; Gralla, Benedikt; Epping, Michael; Kohl, Helmut

    2012-07-01

    Over the last decades, elemental maps have become a powerful tool for analyzing the spatial distribution of elements within a specimen. In energy-filtered transmission electron microscopy (EFTEM) one commonly uses two pre-edge and one post-edge image for the calculation of elemental maps. However, this so-called three-window method can introduce serious errors into the background extrapolated for the post-edge window. Since this method uses only two pre-edge windows as data points to calculate a background model that depends on two fit parameters, the quality of the extrapolation can be estimated only statistically, assuming that the background model is correct. In this paper, we discuss a possibility to improve the accuracy and reliability of the background extrapolation by using a third pre-edge window. Since with three data points the extrapolation becomes overdetermined, this change permits us to estimate not only the statistical uncertainty of the fit but also the systematic error directly from the experimental data. Furthermore, we discuss the acquisition parameters that should be used for the energy windows to reach an optimal signal-to-noise ratio (SNR) in the elemental maps.
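
    In EFTEM the pre-edge background is conventionally modeled as a power law I(E) = A·E^(-r); with two pre-edge windows the two parameters are exactly determined, whereas a third window makes the fit overdetermined and leaves a residual that can flag a failing background model. A per-pixel sketch is given below with invented window energies and intensities.

      import numpy as np

      # mean energy loss (eV) of three pre-edge windows and the measured
      # intensities in one pixel (invented numbers; applied per pixel in practice)
      E_pre = np.array([420.0, 450.0, 480.0])
      I_pre = np.array([1500.0, 1230.0, 1030.0])
      E_post = 540.0

      # least-squares fit of ln I = ln A - r ln E (overdetermined with 3 windows)
      design = np.column_stack([np.ones(3), np.log(E_pre)])
      coeffs, residuals, *_ = np.linalg.lstsq(design, np.log(I_pre), rcond=None)
      lnA, neg_r = coeffs
      background_post = np.exp(lnA + neg_r * np.log(E_post))

      # extrapolated background under the edge and the misfit of the power-law model
      print(background_post, residuals)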

  9. SPECIES DIFFERENCES IN ANDROGEN AND ESTROGEN RECEPTOR STRUCTURE AND FUNCTION AMONG VERTEBRATES AND INVERTEBRATES: INTERSPECIES EXTRAPOLATIONS REGARDING ENDOCRINE DISRUPTING CHEMICALS

    EPA Science Inventory

    Species Differences in Androgen and Estrogen Receptor Structure and Function Among Vertebrates and Invertebrates: Interspecies Extrapolations regarding Endocrine Disrupting Chemicals
    VS Wilson1, GT Ankley2, M Gooding 1,3, PD Reynolds 1,4, NC Noriega 1, M Cardon 1, P Hartig1,...

  10. Seismic Hazard and Fault Length

    NASA Astrophysics Data System (ADS)

    Black, N. M.; Jackson, D. D.; Mualchin, L.

    2005-12-01

    If mx is the largest earthquake magnitude that can occur on a fault, then what is mp, the largest magnitude that should be expected during the planned lifetime of a particular structure? Most approaches to these questions rely on an estimate of the Maximum Credible Earthquake, obtained by regression (e.g. Wells and Coppersmith, 1994) of fault length (or area) against magnitude. Our work differs in two ways. First, we modify the traditional approach to measuring fault length to allow for hidden fault complexity and multi-fault rupture. Second, we use a magnitude-frequency relationship to calculate the largest magnitude expected to occur within a given time interval. Fault length is often poorly defined, and multiple faults may rupture together in a single event. Therefore, we need to expand the definition of a mapped fault length to obtain a more accurate estimate of the maximum magnitude. In previous work, we compared fault length vs. rupture length for post-1975 earthquakes in Southern California and found that mapped fault length and rupture length are often unequal; in several cases rupture broke beyond the previously mapped fault traces. To expand the geologic definition of fault length we outlined several guidelines: 1) if a fault truncates at young Quaternary alluvium, the fault line should be inferred underneath the younger sediments; 2) faults striking within 45° of one another should be treated as a continuous fault line; and 3) a step-over of up to 5 km can link faults into a single fault line. These definitions were applied to fault lines in Southern California. For example, many of the along-strike fault lines in the Mojave Desert are treated as a single fault trending from the Pinto Mountain fault to the Garlock fault. In addition, the Rose Canyon and Newport-Inglewood faults are treated as a single fault line. We used these more generous fault lengths, and the Wells and Coppersmith regression, to estimate the maximum magnitude (mx) for the major faults in
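
    The regression referred to above is typically of the form M = a + b·log10(L), with L the (possibly extended) fault length in km. The sketch below uses the commonly quoted Wells and Coppersmith (1994) all-slip-type surface-rupture-length coefficients, which should be treated here as illustrative rather than as the values used in this study.

      import math

      def magnitude_from_length(L_km, a=5.08, b=1.16):
          """Expected moment magnitude from rupture length (km), M = a + b*log10(L).
          Coefficients are the oft-cited Wells & Coppersmith all-slip-type values,
          used here for illustration only."""
          return a + b * math.log10(L_km)

      # linking mapped segments into one longer fault raises the estimated maximum
      print(magnitude_from_length(60.0), magnitude_from_length(200.0))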

  11. Long-Period Tidal Variations in the Length of Day

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Erofeeva, Svetlana Y.

    2014-01-01

    A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes the effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties follow Wahr and Bergen; experimental confirmation of their results now exists at the fortnightly period, but uncertainty remains when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at the shortest periods, where out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to eliminate most of that energy, with especially large variance reduction for the constituents Sa, Ssa, Mf, and Mt.

  12. Development and back-extrapolation of NO2 land use regression models for historic exposure assessment in Great Britain.

    PubMed

    Gulliver, John; de Hoogh, Kees; Hansell, Anna; Vienneau, Danielle

    2013-07-16

    Modeling historic air pollution exposures is often restricted by availability of monitored concentration data. We evaluated back-extrapolation of land use regression (LUR) models for annual mean NO2 concentrations in Great Britain for up to 18 years earlier. LUR variables were created in a geographic information system (GIS) using land cover and road network data summarized within buffers, site coordinates, and altitude. Four models were developed for 2009 and 2001 using 75% of monitoring sites (in different groupings) and evaluated on the remaining 25%. Variables selected were generally stable between models. Within year, hold-out validation yielded mean-squared-error-based R² (MSE-R²) (i.e., fit around the 1:1 line) values of 0.25-0.63 and 0.51-0.65 for 2001 and 2009, respectively. Back-extrapolation was conducted for 2009 and 2001 models to 1991 and for 2009 models to 2001, adjusting to the year using two background NO2 monitoring sites. Evaluation of back-extrapolated predictions used 100% of sites from an historic national NO2 diffusion tube network (n = 451) for 1991 and 70 independent sites from automatic monitoring in 2001. Values of MSE-R² for back-extrapolation to 1991 were 0.42-0.45 and 0.52-0.55 for 2001 and 2009 models, respectively, but model performance varied by region. Back-extrapolation of LUR models appears valid for exposure assessment for NO2 back to 1991 for Great Britain. PMID:23763440
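
    The MSE-R² quoted above measures fit around the 1:1 line rather than around a fitted regression line, i.e. R² = 1 − Σ(obs − pred)² / Σ(obs − mean(obs))². A minimal sketch, with invented values:

      import numpy as np

      def mse_r2(observed, predicted):
          """R^2 computed about the 1:1 line (mean-squared-error based)."""
          observed = np.asarray(observed, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          ss_res = np.sum((observed - predicted) ** 2)
          ss_tot = np.sum((observed - observed.mean()) ** 2)
          return 1.0 - ss_res / ss_tot

      # toy example: back-extrapolated NO2 predictions vs. monitored values (ug/m3)
      print(mse_r2([42, 35, 50, 28], [40, 38, 46, 30]))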

  13. Using composite flow laws to extrapolate lab data on ice to nature

    NASA Astrophysics Data System (ADS)

    de Bresser, Hans; Diebold, Sabrina; Durham, William

    2013-04-01

    The progressive evolution of the grain size distribution of deforming and recrystallizing Earth materials directly affects their rheological behaviour in terms of composite grain-size-sensitive (GSS, diffusion/grain boundary sliding) and grain-size-insensitive (GSI, dislocation) creep. Over time, such microstructural evolution may result in strain progressing at a steady-state balance between GSS and GSI creep mechanisms. In order to arrive at a meaningful rheological description of materials deforming by combined GSS and GSI mechanisms, composite flow laws are required that bring together individual, laboratory-derived GSS and GSI flow laws and that include full grain size distributions rather than a single mean value representing the grain size. A composite flow law approach including grain size distributions has proven very useful in resolving discrepancies between microstructural observations in natural calcite mylonites and extrapolations of relatively simple laboratory flow laws (Herwegh et al., 2005, J. Struct. Geol., 27, 503-521). In the current study, we used previous and new laboratory data on the creep behavior of water ice to investigate whether a composite flow law approach also results in better extrapolation of lab data to nature for ice. The new lab data come from static grain-growth experiments and from deformation experiments performed on samples with a starting grain size of either < 2 microns ("fine-grained ice") or 180-250 microns ("coarse-grained ice"). The deformation experiments were performed in a special cryogenic Heard-type deformation apparatus at temperatures of 180-240 K, confining pressures of 30-100 MPa, and strain rates between 1E-08/s and 1E-04/s. After the experiments, all samples were studied using cryogenic SEM and image analysis techniques. We also investigated natural microstructures in EPICA ice core samples from Dronning Maud Land in Antarctica. The temperature of the core ranges from 228 K at the surface to 272 K
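
    A composite flow law of the kind advocated here sums the strain-rate contributions of grain-size-sensitive and grain-size-insensitive creep and averages them over the grain size distribution instead of using a single mean grain size. The sketch below is schematic: all flow-law parameters are placeholders, not calibrated values for ice.

      import numpy as np

      R = 8.314   # gas constant, J/(mol K)

      def strain_rate(sigma, d, T, A_gss=1e-10, p=2.0, Q_gss=60e3,
                      A_gsi=1e-18, n=4.0, Q_gsi=70e3):
          """Composite creep rate: GSS (diffusion/GBS, grain-size dependent) plus
          GSI (dislocation) creep. sigma in Pa, grain size d in m, T in K.
          All pre-factors, exponents and activation energies are placeholders."""
          gss = A_gss * sigma / d ** p * np.exp(-Q_gss / (R * T))
          gsi = A_gsi * sigma ** n * np.exp(-Q_gsi / (R * T))
          return gss + gsi

      def composite_rate(sigma, T, grain_sizes, weights):
          """Average the composite rate over a grain size distribution
          (weights are volume fractions summing to 1)."""
          return np.sum(weights * strain_rate(sigma, grain_sizes, T))

      d = np.array([5e-6, 2e-5, 1e-4])            # grain size classes (m)
      w = np.array([0.2, 0.5, 0.3])               # volume fractions
      print(composite_rate(1e5, 250.0, d, w))     # bulk strain rate at 0.1 MPa, 250 K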

  14. Kinetics of HMX and CP Decomposition and Their Extrapolation for Lifetime Assessment

    SciTech Connect

    Burnham, A K; Weese, R K; Andrzejewski, W J

    2006-09-11

    Accelerated aging tests play an important role in assessing the lifetime of manufactured products. There are two basic approaches to lifetime qualification. One tests a product to failure over a range of accelerated conditions to calibrate a model, which is then used to calculate the failure time for conditions of use. A second approach is to test a component to a lifetime-equivalent dose (thermal or radiation) to see if it still functions to specification. Both methods have their advantages and limitations. A disadvantage of the second method is that one does not know how close one is to incipient failure. This limitation can be mitigated by testing to some higher level of dose as a safety margin, but having a predictive model of failure via the first approach provides an additional measure of confidence. Even so, proper calibration of a failure model is non-trivial, and the extrapolated failure predictions are only as good as the model and the quality of the calibration. This paper outlines results for predicting the potential failure point of a system involving a mixture of two energetic materials, HMX (nitramine octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine) and CP (2-(5-cyanotetrazalato) pentaammine cobalt (III) perchlorate). Global chemical kinetic models for the two materials individually and as a mixture are developed and calibrated from a variety of experiments. These include traditional thermal analysis experiments run on time scales from hours to a couple of days, detonator aging experiments with exposures up to 50 months, and sealed-tube aging experiments for up to 5 years. Decomposition kinetics are determined for HMX and CP separately and together. For high levels of thermal stress, the two materials decompose faster as a mixture than individually. This effect is observed both in high-temperature thermal analysis experiments and in long

  15. 3D Magnetic Field Configuration of the 2006 December 13 Flare Extrapolated with the Optimization Method

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ding, M. D.; Wiegelmann, T.; Li, H.

    2008-06-01

    The photospheric vector magnetic field of active region NOAA 10930 was obtained with the Solar Optical Telescope (SOT) on board the Hinode satellite at very high spatial resolution (about 0.3''). Observations of the two-ribbon flare of 2006 December 13 in this active region provide a good sample for studying the magnetic field configuration related to the occurrence of the flare. Using the optimization method for nonlinear force-free field (NLFFF) extrapolation proposed by Wheatland et al. and further developed by Wiegelmann, we derive the three-dimensional (3D) vector magnetic field configuration associated with this flare. The general topology can be described as a highly sheared core field and a quasi-potential envelope arch field. The core field clearly shows some dips supposed to sustain a filament. The free energy released in the flare, calculated by subtracting the energy of the corresponding potential field from that of the NLFFF, is 2.4 × 10³¹ ergs, which is ~2% of the preflare potential field energy. We also calculate the shear angles, defined as the angles between the NLFFF and the potential field, and find that they become larger at some particular sites in the lower atmosphere, while they become significantly smaller in most places, implying that the whole configuration gets closer to the potential field after the flare. The Ca II H line images obtained with the Broadband Filter Imager (BFI) of the SOT and the 1600 Å images from the Transition Region and Coronal Explorer (TRACE) show that the preflare heating occurs mainly in the core field. These results provide evidence in support of the tether-cutting model of solar flares.
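
    The optimization method referred to above (Wheatland et al.; Wiegelmann) minimizes a functional that penalizes both the Lorentz force and the divergence of the field, L = ∫ [ |(∇×B)×B|²/B² + |∇·B|² ] dV. A sketch of evaluating this functional on a uniform Cartesian grid is given below; the test field is arbitrary and the minimization loop itself is omitted.

      import numpy as np

      def nlfff_functional(B, dx=1.0):
          """Evaluate L = sum over the grid of |(curl B) x B|^2 / |B|^2 + |div B|^2.
          B has shape (3, nx, ny, nz); dx is the uniform grid spacing."""
          Bx, By, Bz = B
          dBx = np.gradient(Bx, dx)          # [d/dx, d/dy, d/dz] of Bx
          dBy = np.gradient(By, dx)
          dBz = np.gradient(Bz, dx)
          curl = np.array([dBz[1] - dBy[2],  # (curl B)_x = dBz/dy - dBy/dz
                           dBx[2] - dBz[0],  # (curl B)_y = dBx/dz - dBz/dx
                           dBy[0] - dBx[1]]) # (curl B)_z = dBy/dx - dBx/dy
          div = dBx[0] + dBy[1] + dBz[2]
          force = np.cross(curl, B, axis=0)  # Lorentz force density (curl B) x B
          B2 = (B ** 2).sum(axis=0) + 1e-30  # avoid division by zero
          density = (force ** 2).sum(axis=0) / B2 + div ** 2
          return density.sum() * dx ** 3

      B = np.random.rand(3, 16, 16, 16)      # arbitrary test field
      print(nlfff_functional(B))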

  16. Cross-Species Extrapolation of Prediction Models for Cadmium Transfer from Soil to Corn Grain

    PubMed Central

    Yang, Hua; Li, Zhaojun; Lu, Lu; Long, Jian; Liang, Yongchao

    2013-01-01

    Cadmium (Cd) is a highly toxic heavy metal for both plants and animals. The presence of Cd in agricultural soils is of great concern regarding its transfer in the soil-plant system. This study investigated the transfer of Cd (added as exogenous salts) from a wide range of Chinese soils to corn grain (Zhengdan 958). Through multiple stepwise regressions, prediction models were developed relating the Cd bioconcentration factor (BCF) of Zhengdan 958 to soil pH, organic matter (OM) content, and cation exchange capacity (CEC). Moreover, the prediction models developed for Zhengdan 958 were applied to other non-model corn species through a cross-species extrapolation approach. The results showed that soil pH was the most important factor controlling Cd uptake, and that lower pH was more favorable for Cd bioaccumulation in corn grain. There was no significant difference among the three prediction models at the different Cd levels. When the prediction models were applied to other non-model corn species, the ratios of predicted to measured BCF values were within a factor of two and close to the 1:1 line. Furthermore, these prediction models also reduced the measured intra-species BCF variability for all non-model corn species. Therefore, the prediction models established in this study can be applied to other non-model corn species and are useful for predicting Cd bioconcentration in corn grain and assessing the ecological risk of Cd in different soils. PMID:24324636

  17. The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.

    PubMed

    Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo

    2014-09-01

    Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion extrapolation, psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus

  18. On Extrapolating Nighttime Ecosystem Respiration To Daytime Conditions and Implications for Gross Primary Productivity Estimation

    NASA Astrophysics Data System (ADS)

    Galvagno, M.; Wohlfahrt, G.

    2015-12-01

    Gross primary productivity (GPP) is a key term in carbon cycle science. Because GPP is difficult or even impossible to quantify directly at the ecosystem scale, various methods are used to estimate it, such as eddy covariance CO2 flux partitioning, carbonyl sulfide exchange, sun-induced fluorescence, isotopes of CO2, and the photochemical reflectance index. The primary source of global GPP estimates is the FLUXNET project, within which GPP is estimated in a consistent fashion through eddy covariance flux partitioning at more than 700 sites globally. Since the net ecosystem CO2 exchange (NEE) reflects net uptake during daytime, when photosynthesis exceeds respiration, and net emission during nighttime due to ecosystem respiration (RECO), eddy covariance flux partitioning is based on the idea that daytime RECO may be inferred from direct nighttime NEE measurements, so that GPP can be obtained by subtracting RECO from NEE. However, the main assumption underlying this approach, namely that a temperature-dependent model of RECO parametrised on nighttime temperatures may be extrapolated to daytime temperatures, has not been conclusively tested. This study investigates whether nighttime measurements of RECO provide unbiased estimates of daytime RECO. To this end we used ecosystem respiration chambers in a mountain grassland which, by keeping the vegetation in the dark during the measurement, allowed us to quantify RECO directly during both day and night. These data, pooled by day, night, or day and night, were then used to parametrise temperature-dependent models of RECO. Results show that day and night RECO do not follow the same relationship with temperature and that RECO inferred from the nighttime parametrisation overestimates the true respiration. Potential reasons for this observed bias, such as the overestimation of daytime mitochondrial respiration, and the implications for the quantification of GPP are discussed.
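
    The comparison described above amounts to fitting the same temperature-response function separately to nighttime and daytime chamber fluxes and asking how well the nighttime fit predicts the daytime data. The sketch below uses a simple Q10 model and synthetic data; the study itself may use a different functional form.

      import numpy as np
      from scipy.optimize import curve_fit

      def q10_model(T, R_ref, Q10, T_ref=15.0):
          """Ecosystem respiration as a Q10 function of temperature (deg C)."""
          return R_ref * Q10 ** ((T - T_ref) / 10.0)

      # synthetic chamber data (umol CO2 m-2 s-1); daytime respiration is made
      # slightly lower than the nighttime relationship would suggest
      rng = np.random.default_rng(1)
      T_night = rng.uniform(2, 15, 100)
      R_night = q10_model(T_night, 2.0, 2.2) + rng.normal(0, 0.1, 100)
      T_day = rng.uniform(10, 25, 100)
      R_day = 0.8 * q10_model(T_day, 2.0, 2.2) + rng.normal(0, 0.1, 100)

      p_night, _ = curve_fit(q10_model, T_night, R_night, p0=[2.0, 2.0])
      p_day, _ = curve_fit(q10_model, T_day, R_day, p0=[2.0, 2.0])

      # bias incurred by extrapolating the nighttime fit to daytime temperatures
      bias = q10_model(T_day, *p_night).mean() - R_day.mean()
      print(p_night, p_day, bias)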

  19. Confusion about Cadmium Risks: The Unrecognized Limitations of an Extrapolated Paradigm

    PubMed Central

    Bernard, Alfred

    2015-01-01

    Background Cadmium (Cd) risk assessment presently relies on tubular proteinuria as a critical effect and urinary Cd (U-Cd) as an index of the Cd body burden. Based on this paradigm, regulatory bodies have reached contradictory conclusions regarding the safety of Cd in food. Adding to the confusion, epidemiological studies implicate environmental Cd as a risk factor for bone, cardiovascular, and other degenerative diseases at exposure levels that are much lower than points of departure used for setting food standards. Objective The objective was to examine whether the present confusion over Cd risks is not related to conceptual or methodological problems. Discussion The cornerstone of Cd risk assessment is the assumption that U-Cd reflects the lifetime accumulation of the metal in the body. The validity of this assumption as applied to the general population has been questioned by recent studies revealing that low-level U-Cd varies widely within and between individuals depending on urinary flow, urine collection protocol, and recent exposure. There is also evidence that low-level U-Cd increases with proteinuria and essential element deficiencies, two potential confounders that might explain the multiple associations of U-Cd with common degenerative diseases. In essence, the present Cd confusion might arise from the fact that this heavy metal follows the same transport pathways as plasma proteins for its urinary excretion and the same transport pathways as essential elements for its intestinal absorption. Conclusions The Cd risk assessment paradigm needs to be rethought taking into consideration that low-level U-Cd is strongly influenced by renal physiology, recent exposure, and factors linked to studied outcomes. Citation Bernard A. 2016. Confusion about cadmium risks: the unrecognized limitations of an extrapolated paradigm. Environ Health Perspect 124:1–5; http://dx.doi.org/10.1289/ehp.1509691 PMID:26058085

  20. Investigative and extrapolative role of microRNAs' genetic expression in breast carcinoma.

    PubMed

    Usmani, Ambreen; Shoro, Amir Ali; Shirazi, Bushra; Memon, Zahida

    2016-01-01

    MicroRNAs (miRs) are non-coding ribonucleic acids of about 18-22 nucleotides. Expression of several miRs can be altered in breast carcinomas in comparison to healthy breast tissue, or between various subtypes of breast cancer. miRs can act as either oncogenes or tumor suppressors, which indicates that their expression is deregulated in cancers. Some miRs are specifically associated with breast cancer and are affected by cancer-restricted signaling pathways, e.g. downstream of estrogen receptor-α or HER2/neu. The association of multiple miRs with breast cancer, and the fact that these post-transcriptional regulators can reshape complex functional networks of mRNAs, identify them as potential investigative, extrapolative, and predictive tumor markers, as well as possible targets for treatment. Currently available investigative tools are RNA-based molecular techniques. An additional advantage of miRs in oncology is that they are remarkably stable and readily detectable in serum and plasma. A literature search was performed using the PubMed database with the keywords microRNA (52 searches) AND breast cancer (169 searches). PERN was accessed through the Bahria University database, covering literature and articles from international sources; two articles from Pakistan on this topic were consulted (one in an international journal and one in a local journal). Of these, 49 articles discussing the relation of microRNA genetic expression to breast cancer were shortlisted and consulted for this review. PMID:27375730

  1. Investigative and extrapolative role of microRNAs’ genetic expression in breast carcinoma

    PubMed Central

    Usmani, Ambreen; Shoro, Amir Ali; Shirazi, Bushra; Memon, Zahida

    2016-01-01

    MicroRNAs (miRs) are non-coding ribonucleic acids of about 18-22 nucleotides. Expression of several miRs can be altered in breast carcinomas in comparison to healthy breast tissue, or between various subtypes of breast cancer. miRs can act as either oncogenes or tumor suppressors, which indicates that their expression is deregulated in cancers. Some miRs are specifically associated with breast cancer and are affected by cancer-restricted signaling pathways, e.g. downstream of estrogen receptor-α or HER2/neu. The association of multiple miRs with breast cancer, and the fact that these post-transcriptional regulators can reshape complex functional networks of mRNAs, identify them as potential investigative, extrapolative, and predictive tumor markers, as well as possible targets for treatment. Currently available investigative tools are RNA-based molecular techniques. An additional advantage of miRs in oncology is that they are remarkably stable and readily detectable in serum and plasma. A literature search was performed using the PubMed database with the keywords microRNA (52 searches) AND breast cancer (169 searches). PERN was accessed through the Bahria University database, covering literature and articles from international sources; two articles from Pakistan on this topic were consulted (one in an international journal and one in a local journal). Of these, 49 articles discussing the relation of microRNA genetic expression to breast cancer were shortlisted and consulted for this review. PMID:27375730

  2. Magnetic Drug Targeting: Preclinical in Vivo Studies, Mathematical Modeling, and Extrapolation to Humans.

    PubMed

    Al-Jamal, Khuloud T; Bai, Jie; Wang, Julie Tzu-Wen; Protti, Andrea; Southern, Paul; Bogart, Lara; Heidari, Hamed; Li, Xinjia; Cakebread, Andrew; Asker, Dan; Al-Jamal, Wafa T; Shah, Ajay; Bals, Sara; Sosabowski, Jane; Pankhurst, Quentin A

    2016-09-14

    A sound theoretical rationale for the design of a magnetic nanocarrier capable of magnetic capture in vivo after intravenous administration could help elucidate the parameters necessary for in vivo magnetic tumor targeting. In this work, we utilized our long-circulating polymeric magnetic nanocarriers, encapsulating increasing amounts of superparamagnetic iron oxide nanoparticles (SPIONs) in a biocompatible oil carrier, to study the effects of SPION loading and of applied magnetic field strength on magnetic tumor targeting in CT26 tumor-bearing mice. Under controlled conditions, the in vivo magnetic targeting was quantified and found to be directly proportional to SPION loading and magnetic field strength. Highest SPION loading, however, resulted in a reduced blood circulation time and a plateauing of the magnetic targeting. Mathematical modeling was undertaken to compute the in vivo magnetic, viscoelastic, convective, and diffusive forces acting on the nanocapsules (NCs) in accordance with the Nacev-Shapiro construct, and this was then used to extrapolate to the expected behavior in humans. The model predicted that in the latter case, the NCs and magnetic forces applied here would have been sufficient to achieve successful targeting in humans. Lastly, an in vivo murine tumor growth delay study was performed using docetaxel (DTX)-encapsulated NCs. Magnetic targeting was found to offer enhanced therapeutic efficacy and improve mice survival compared to passive targeting at drug doses of ca. 5-8 mg of DTX/kg. This is, to our knowledge, the first study that truly bridges the gap between preclinical experiments and clinical translation in the field of magnetic drug targeting. PMID:27541372

  3. Method and apparatus for determining minority carrier diffusion length in semiconductors

    DOEpatents

    Goldstein, Bernard; Dresner, Joseph; Szostak, Daniel J.

    1983-07-12

    Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon, which has a significantly small minority carrier diffusion length, using the constant-magnitude surface-photovoltage (SPV) method. Unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin method-type probe electrode couples the SPV to a measurement system. The operating optical wavelength of an adjustable monochromator is selected, with compensation for the wavelength-dependent sensitivity of a photodetector, to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux at a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept of the extrapolated line on the reciprocal-absorption-coefficient axis is the diffusion length of the minority carriers.
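
    The extrapolation at the heart of this SPV method is a straight-line fit: the relative photon flux required to hold the SPV constant is plotted against the reciprocal absorption coefficient 1/α, the line is extended to zero flux, and the magnitude of the (negative) intercept on the 1/α axis is the minority carrier diffusion length. A minimal sketch of that fit, using illustrative numbers rather than measured data, could look as follows (Python/NumPy assumed):

      import numpy as np

      # Reciprocal optical absorption coefficients 1/alpha (cm) and the relative
      # photon flux needed to hold the SPV at a fixed magnitude (illustrative values).
      inv_alpha = np.array([0.5e-4, 1.0e-4, 1.5e-4, 2.0e-4, 2.5e-4])   # cm
      rel_flux  = np.array([1.05, 1.55, 2.02, 2.55, 3.04])             # arbitrary units

      # Fit flux = m*(1/alpha) + c; the line crosses zero flux at 1/alpha = -c/m,
      # so the magnitude of that (negative) intercept is the diffusion length L.
      m, c = np.polyfit(inv_alpha, rel_flux, 1)
      diffusion_length_cm = c / m
      print(f"minority carrier diffusion length ~ {diffusion_length_cm * 1e4:.2f} um")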

  4. Welding arc length control system

    NASA Technical Reports Server (NTRS)

    Iceland, William F. (Inventor)

    1993-01-01

    The present invention is a welding arc length control system. The system includes, in its broadest aspects, a power source for providing welding current, a power amplification system, a motorized welding torch assembly connected to the power amplification system, a computer, and current pick up means. The computer is connected to the power amplification system for storing and processing arc weld current parameters and non-linear voltage-ampere characteristics. The current pick up means is connected to the power source and to the welding torch assembly for providing weld current data to the computer. Thus, the desired arc length is maintained as the welding current is varied during operation, maintaining consistent weld penetration.

  5. Variable focal length deformable mirror

    DOEpatents

    Headley, Daniel; Ramsey, Marc; Schwarz, Jens

    2007-06-12

    A variable focal length deformable mirror has an inner ring and an outer ring that simply support and push axially on opposite sides of a mirror plate. The resulting variable clamping force deforms the mirror plate to provide a parabolic mirror shape. The rings are parallel planar sections of a single paraboloid and can provide an on-axis focus, if the rings are circular, or an off-axis focus, if the rings are elliptical. The focal length of the deformable mirror can be varied by changing the variable clamping force. The deformable mirror can generally be used in any application requiring the focusing or defocusing of light, including with both coherent and incoherent light sources.

  6. Softness Correlations Across Length Scales

    NASA Astrophysics Data System (ADS)

    Ivancic, Robert; Shavit, Amit; Rieser, Jennifer; Schoenholz, Samuel; Cubuk, Ekin; Durian, Douglas; Liu, Andrea; Riggleman, Robert

    In disordered systems, it is believed that mechanical failure begins with localized particle rearrangements. Recently, a machine learning method has been introduced to identify how likely a particle is to rearrange given its local structural environment, quantified by softness. We calculate the softness of particles in simulations of atomic Lennard-Jones mixtures, molecular Lennard-Jones oligomers, colloidal systems and granular systems. In each case, we find that the length scale characterizing spatial correlations of softness is approximately a particle diameter. These results provide a rationale for why localized rearrangements--whose size is presumably set by the scale of softness correlations--might occur in disordered systems across many length scales. Supported by DOE DE-FG02-05ER46199.

  7. Hospitalization length of insanity acquittees.

    PubMed

    Steadman, H J; Pasewark, R A; Hawkins, M; Kiser, M; Bieber, S

    1983-07-01

    Used step-wise multiple regression procedures to predict length of hospitalization of 225 defendants acquitted by reason of insanity in New York state. Of the 21 variables considered, only 9 (severity of offense, sex, marital status, days prior imprisonment, homicide offense, days previous civil hospitalization, educational level, race, number of victims) contributed to the significance of the regression equation. However, these accounted for but 11% of the observed variance.

  8. Universality of modulation length exponents

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Saurish; Seidel, Alexander; Nussinov, Zohar

    2012-02-01

    We study systems (classical or quantum) with general pairwise interactions. Our prime interest is in frustrated spin systems. First, we focus on systems with a crossover temperature T^* across which the correlation function changes from exhibiting commensurate to incommensurate modulations. We report on a new exponent, νL, characterizing the universal nature of this crossover. Near the crossover, the characteristic wave-vector k on the incommensurate side differs from its value q on the commensurate side according to |k - q| ∝ |T - T^*|^νL. We find, in general, that νL = 1/2, or in some special cases, other rational numbers. We discuss applications to the axial next-nearest-neighbor Ising model, Fermi systems (with application to the metal to band insulator transition) and Bose systems. Second, we obtain a universal form of the high temperature correlation function in general systems. From this, we show the existence of a diverging correlation length in the presence of long range interactions. Such a correlation length tends to the screening length in the presence of screening. We also find a way of obtaining the pairwise interaction potentials in the high temperature phase from the correlation functions.
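
    As a worked illustration of how νL would be extracted in practice: since |k - q| ∝ |T - T^*|^νL near the crossover, the exponent is the slope of log|k - q| against log|T - T^*|. A short sketch on synthetic data generated with νL = 1/2 (illustrative only):

      import numpy as np

      T_star = 1.0
      T = np.linspace(1.01, 1.30, 30)            # temperatures above the crossover
      dk = 0.7 * np.abs(T - T_star) ** 0.5       # synthetic |k - q| with nu_L = 1/2

      # The slope of log|k - q| versus log|T - T*| estimates the exponent nu_L.
      slope, _ = np.polyfit(np.log(np.abs(T - T_star)), np.log(dk), 1)
      print(f"estimated nu_L = {slope:.3f}")     # ~0.5 by construction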

  9. Precise Determination of the I = 2 Scattering Length from Mixed-Action Lattice QCD

    SciTech Connect

    Silas Beane; Paulo Bedaque; Thomas Luu; Konstantinos Orginos; Assumpta Parreno; Martin Savage; Aaron Torok; Andre Walker-Loud

    2008-01-01

    The I=2 ππ scattering length is calculated in fully-dynamical lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations (with fourth-rooted staggered sea quarks) at four light-quark masses. Two- and three-flavor mixed-action chiral perturbation theory at next-to-leading order is used to perform the chiral and continuum extrapolations. At the physical charged pion mass, we find m_π a_ππ(I=2) = -0.04330 ± 0.00042, where the error bar combines the statistical and systematic uncertainties in quadrature.

  10. Diagnostic extrapolation of gross primary production from flux tower sites to the globe

    NASA Astrophysics Data System (ADS)

    Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Baldocchi, Dennis; Luyssaert, Sebastiaan; Papale, Dario

    2010-05-01

    The uptake of atmospheric CO2 by plant photosynthesis is the largest global carbon flux and is thought to drive most terrestrial carbon cycle processes. While the photosynthesis processes at the leaf and canopy levels are quite well understood, so far only very crude estimates of its global integral, the Gross Primary Production (GPP), can be found in the literature. Existing estimates have lacked a sound empirical basis. Reasons for such limitations lie in the absence of direct estimates of ecosystem-level GPP and in methodological difficulties in scaling local carbon flux measurements to the global scale across heterogeneous vegetation. Here, we present global estimates of GPP based on different diagnostic approaches. These up-scaling schemes integrated high-resolution remote sensing products, such as land cover, the fraction of absorbed photosynthetically active radiation (fAPAR) and leaf-area index, with carbon flux measurements from the global network of eddy covariance stations (FLUXNET). In addition, meteorological datasets from diverse sources and river runoff observations were used. All the above-mentioned approaches were also capable of estimating uncertainties. With six novel or newly parameterized and highly diverse up-scaling schemes we consistently estimated a global GPP of 122 Pg C y-1. In the quantification of the total uncertainties, we considered uncertainties arising from the measurement technique and data processing (i.e. partitioning into GPP and respiration). Furthermore, we accounted for the uncertainties of drivers and the structural uncertainties of the extrapolation approach. The total propagation led to a global uncertainty of 15 % of the mean value. Although our mean GPP estimate of 122 Pg C y-1 is similar to the previous postulate by the Intergovernmental Panel on Climate Change in 2001, we estimated a different variability among ecoregions. The tropics accounted for 32 % of GPP showing a greater importance of tropical ecosystems for the global carbon

  11. Telomere length in Hepatitis C.

    PubMed

    Kitay-Cohen, Y; Goldberg-Bittman, L; Hadary, R; Fejgin, M D; Amiel, A

    2008-11-01

    Telomeres are nucleoprotein structures located at the termini of chromosomes that protect the chromosomes from fusion and degradation. Hepatocyte cell-cycle turnover may be a primary mechanism of telomere shortening in hepatitis C virus (HCV) infection, inducing fibrosis and cellular senescence. HCV infection has been recognized as potential cause of B-cell lymphoma and hepatocellular carcinoma. The present study sought to assess relative telomere length in leukocytes from patients with chronic HCV infection, patients after eradication of HCV infection (in remission), and healthy controls. A novel method of manual evaluation was applied. Leukocytes derived from 22 patients with chronic HCV infection and age- and sex-matched patients in remission and healthy control subjects were subjected to a fluorescence-in-situ protocol (DAKO) to determine telomere fluorescence intensity and number. The relative, manual, analysis of telomere length was validated against findings on applied spectral imaging (ASI) in a random sample of study and control subjects. Leukocytes from patients with chronic HCV infection had shorter telomeres than leukocytes from patients in remission and healthy controls. On statistical analysis, more cells with low signal intensity on telomere FISH had shorter telomeres whereas more cells with high signal intensity had longer telomeres. The findings were corroborated by the ASI telomere software. Telomere shortening in leukocytes from patients with active HCV infection is probably due to the lower overall telomere level rather than higher cell cycle turnover. Manual evaluation is an accurate and valid method of assessing relative telomere length between patients with chronic HCV infection and healthy subjects. PMID:18992639

  12. The NIST Length Scale Interferometer

    PubMed Central

    Beers, John S.; Penzes, William B.

    1999-01-01

    The National Institute of Standards and Technology (NIST) interferometer for measuring graduated length scales has been in use since 1965. It was developed in response to the redefinition of the meter in 1960 from the prototype platinum-iridium bar to the wavelength of light. The history of the interferometer is recalled, and its design and operation described. A continuous program of modernization by making physical modifications, measurement procedure changes and computational revisions is described, and the effects of these changes are evaluated. Results of a long-term measurement assurance program, the primary control on the measurement process, are presented, and improvements in measurement uncertainty are documented.

  13. The Length of Time's Arrow

    SciTech Connect

    Feng, Edward H.; Crooks, Gavin E.

    2008-08-21

    An unresolved problem in physics is how the thermodynamic arrow of time arises from an underlying time-reversible dynamics. We contribute to this issue by developing a measure of time-symmetry breaking, and by using the work fluctuation relations, we determine the time asymmetry of recent single molecule RNA unfolding experiments. We define time asymmetry as the Jensen-Shannon divergence between trajectory probability distributions of an experiment and its time-reversed conjugate. Among other interesting properties, the length of time's arrow bounds the average dissipation and determines the difficulty of accurately estimating free energy differences in nonequilibrium experiments.

  14. Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.

    PubMed

    Fitzgerald, R

    2016-03-01

    The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone.
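
    The baseline that this Monte Carlo approach refines is the classic low-order polynomial efficiency extrapolation: the efficiency-corrected β-channel rate is plotted against an inefficiency parameter such as (1 - ε)/ε, with ε obtained from the γ-gated coincidence data, and the fit is extrapolated to zero inefficiency (100% efficiency) to give the activity. A minimal sketch of that baseline, with illustrative numbers only:

      import numpy as np

      # Illustrative data: beta detection efficiency from a gamma gate
      # (eps = Nc/Ngamma) and the corresponding efficiency-corrected rate.
      eps  = np.array([0.70, 0.75, 0.80, 0.85, 0.90])
      rate = np.array([1052.0, 1038.0, 1027.0, 1018.0, 1010.0])   # counts per second

      x = (1.0 - eps) / eps                      # inefficiency parameter

      # Low-order (here linear) polynomial fit; the intercept at x = 0,
      # i.e. 100% beta efficiency, is the estimated source activity.
      coeffs = np.polyfit(x, rate, 1)
      activity = np.polyval(coeffs, 0.0)
      print(f"extrapolated activity ~ {activity:.0f} 1/s")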

  16. The sparse data extrapolation problem: strategies for soft-tissue correction for image-guided liver surgery

    NASA Astrophysics Data System (ADS)

    Miga, Michael I.; Dumpuri, Prashanth; Simpson, Amber L.; Weis, Jared A.; Jarnagin, William R.

    2011-03-01

    The problem of extrapolating cost-effective relevant information from distinctly finite or sparse data, while balancing the competing goals between workflow and engineering design, and between application and accuracy, is the 'sparse data extrapolation problem'. Within the context of open abdominal image-guided liver surgery, one realization of this problem is compensating for non-rigid organ deformations while maintaining workflow for the surgeon. More specifically, rigid organ-based surface registration between CT-rendered liver surfaces and laser-range scanned intraoperative partial surface counterparts resulted in an average closest-point residual of 6.1 +/- 4.5 mm, with maximum signed distances ranging from -13.4 to 16.2 mm. Similar to the neurosurgical environment, there is a need to correct for soft tissue deformation to translate image-guided interventions to the abdomen (e.g. liver, kidney, pancreas, etc.). While intraoperative tomographic imaging is available, these approaches are less than optimal solutions to the sparse data extrapolation problem. In this paper, we compare and contrast three sparse data extrapolation methods to that of data-rich interpolation for the correction of deformation within a liver phantom containing 43 subsurface targets. The findings indicate that subtleties in the initial alignment pose following rigid registration can affect correction by up to 5-10%. The best deformation compensation achieved was approximately 54.5% (target registration error of 2.0 +/- 1.6 mm) while the data-rich interpolative method achieved 77.8% (target registration error of 0.6 +/- 0.5 mm).

  17. Prediction of Pharmacokinetics and Penetration of Moxifloxacin in Human with Intra-Abdominal Infection Based on Extrapolated PBPK Model.

    PubMed

    Zhu, LiQin; Yang, JianWei; Zhang, Yuan; Wang, YongMing; Zhang, JianLei; Zhao, YuanYuan; Dong, WeiLin

    2015-03-01

    The aim of this study is to develop a physiologically based pharmacokinetic (PBPK) model in intra-abdominally infected rats and extrapolate it to humans to predict moxifloxacin pharmacokinetic profiles in various tissues of intra-abdominally infected humans. Twelve male rats with intra-abdominal infections induced by Escherichia coli received a single dose of 40 mg/kg body weight of moxifloxacin. Blood plasma was collected at 5, 10, 20, 30, 60, 120, 240, 480, and 1440 min after drug injection. A PBPK model was developed in rats and extrapolated to humans using GastroPlus software. The predictions were assessed by comparing predictions and observations. In the plasma concentration versus time profile of moxifloxacin in rats, Cmax was 11.151 µg/mL at 5 min after the intravenous injection and t1/2 was 2.936 h. Plasma concentrations and kinetics in humans were predicted and compared with observed data. Moxifloxacin penetrated and accumulated at high concentrations in red marrow, lung, skin, heart, liver, kidney, spleen, and muscle tissues in humans with intra-abdominal infection. The predicted tissue to plasma concentration ratios in abdominal viscera were between 1.1 and 2.2. When rat plasma concentrations are known, extrapolation of a PBPK model is a method to predict drug pharmacokinetics and penetration in humans. Moxifloxacin has good penetration into liver, kidney, and spleen, as well as other tissues, in intra-abdominally infected humans. Close monitoring is necessary when using moxifloxacin due to its high concentration distribution. This pathological model extrapolation may provide a reference for PK/PD studies of antibacterial agents.
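
    The study itself used a whole-body PBPK model built in GastroPlus; as a much simpler illustration of the underlying idea of scaling rat kinetics to humans, a one-compartment intravenous model with allometric scaling of clearance and volume can be sketched as below. All parameter values, body weights and the dose are assumptions chosen for illustration, not values from the study:

      import numpy as np

      # Assumed rat parameters for a one-compartment IV bolus model (illustrative).
      cl_rat = 0.09                      # clearance (L/h)
      v_rat  = 0.35                      # volume of distribution (L)
      bw_rat, bw_human = 0.25, 70.0      # body weights (kg)

      # Common allometric exponents: ~0.75 for clearance, ~1.0 for volume.
      cl_human = cl_rat * (bw_human / bw_rat) ** 0.75
      v_human  = v_rat * (bw_human / bw_rat) ** 1.0

      dose_mg = 400.0                    # hypothetical human IV dose
      k_el = cl_human / v_human          # elimination rate constant (1/h)
      t_h = np.linspace(0.0, 24.0, 97)
      conc = (dose_mg / v_human) * np.exp(-k_el * t_h)   # plasma conc. (mg/L)

      print(f"predicted human half-life ~ {np.log(2) / k_el:.1f} h, "
            f"C0 ~ {conc[0]:.2f} mg/L")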

  18. Improving in vitro to in vivo extrapolation by incorporating toxicokinetic measurements: a case study of lindane-induced neurotoxicity.

    PubMed

    Croom, Edward L; Shafer, Timothy J; Evans, Marina V; Mundy, William R; Eklund, Chris R; Johnstone, Andrew F M; Mack, Cina M; Pegram, Rex A

    2015-02-15

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using "faux" (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6μg/ml. Media and cell lindane concentrations at the EC50 were 0.4μg/ml and 7.1μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7-1.9μg/ml and 5-11μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average=7μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity.

  19. Asymptotic properties of mean survival estimate based on the Kaplan-Meier curve with an extrapolated tail.

    PubMed

    Gong, Qi; Fang, Liang

    2012-01-01

    Asymptotic distribution of the mean survival time based on the Kaplan-Meier curve with an extrapolated 'tail' is derived. A closed formula of the variance estimate is provided. Asymptotic properties of the estimator were studied in a simulation study, which showed that this estimator was unbiased with proper coverage probability and followed a normal distribution. An example is used to demonstrate the application of this estimator.
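
    One common construction behind such an estimator is to integrate the Kaplan-Meier curve up to the last observed event time and then add the area of an extrapolated tail, e.g. an exponential anchored at the last Kaplan-Meier point. A minimal sketch under those assumptions, with the Kaplan-Meier estimate computed directly in NumPy on illustrative data:

      import numpy as np

      # Illustrative right-censored data: times and event indicators (1 = event).
      times  = np.array([2., 3., 3., 5., 6., 8., 9., 12., 15., 15.])
      events = np.array([1,  1,  0,  1,  1,  0,  1,  1,   0,   0])

      # Kaplan-Meier survival estimate at each distinct event time.
      surv, t_grid, s = 1.0, [0.0], [1.0]
      for t in np.unique(times[events == 1]):
          at_risk = np.sum(times >= t)
          d = np.sum((times == t) & (events == 1))
          surv *= 1.0 - d / at_risk
          t_grid.append(t)
          s.append(surv)
      t_grid, s = np.array(t_grid), np.array(s)

      # Area under the step function up to the last event time ...
      restricted_mean = np.sum(s[:-1] * np.diff(t_grid))
      # ... plus an extrapolated exponential tail through the last KM point
      # (one simple choice of tail model among several possible).
      lam = -np.log(s[-1]) / t_grid[-1]
      mean_survival = restricted_mean + s[-1] / lam
      print(f"mean survival with extrapolated tail ~ {mean_survival:.2f}")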

  20. Testing a solar coronal magnetic field extrapolation code with the Titov-Démoulin magnetic flux rope model

    NASA Astrophysics Data System (ADS)

    Jiang, Chao-Wei; Feng, Xue-Shang

    2016-01-01

    In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. Up to the present, the extrapolation of the magnetic field from boundary data has been the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover the coronal magnetic flux rope is important for coronal field extrapolation. In this paper, our coronal field extrapolation code is examined with an analytical magnetic flux rope model proposed by Titov & Démoulin, which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. By only using the vector field at the bottom boundary as input, we test our code with the model in a representative range of parameter space and find that the model field can be reconstructed with high accuracy. In particular, the magnetic topological interfaces formed between the flux rope and the surrounding arcade, i.e., the “hyperbolic flux tube” and “bald patch separatrix surface,” are also reliably reproduced. By this test, we demonstrate that our CESE-MHD-NLFFF code can be applied to recovering the magnetic flux rope in the solar corona as long as the vector magnetogram satisfies the force-free constraints.

  1. Biotransformation in vitro: An essential consideration in the quantitative in vitro-to-in vivo extrapolation (QIVIVE) of toxicity data.

    PubMed

    Wilk-Zasadna, Iwona; Bernasconi, Camilla; Pelkonen, Olavi; Coecke, Sandra

    2015-06-01

    Early consideration of the multiplicity of factors that govern the biological fate of foreign compounds in living systems is a necessary prerequisite for the quantitative in vitro-in vivo extrapolation (QIVIVE) of toxicity data. Substantial technological advances in in vitro methodologies have facilitated the study of in vitro metabolism and the further use of such data for in vivo prediction. However, extrapolation to in vivo with a comfortable degree of confidence requires continuous progress in the field to address challenges such as the in vitro evaluation of chemical-chemical interactions and accounting for individual variability, as well as analytical challenges in ensuring sufficiently sensitive measurement technologies. This paper discusses the current status of in vitro metabolism studies for QIVIVE extrapolation, serving today's hazard and risk assessment needs. A short overview of the methodologies for in vitro metabolism studies is given. Furthermore, recommendations for priority research and other activities are provided to ensure further widespread uptake of in vitro metabolism methods in 21st century toxicology. The need for more streamlined and explicitly described integrated approaches that reflect the physiology and the related dynamic and kinetic processes of the human body, i.e., using in vitro data in combination with in silico approaches, is highlighted.

  2. Adsorption of pharmaceuticals onto activated carbon fiber cloths - Modeling and extrapolation of adsorption isotherms at very low concentrations.

    PubMed

    Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre

    2016-01-15

    Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially at the trace concentrations encountered in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (Carbamazepine, Diclofenac, Caffeine and Acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L(-1)) to low concentrations (ng L(-1)), was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) showed weaknesses when used to describe adsorption isotherms at low concentrations. However, from these results, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation emerged as the best combination, enabling the extrapolation of adsorption capacities over orders of magnitude in concentration. PMID:26606322
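
    The recommended pairing, a Langmuir-Freundlich (Sips) isotherm fitted by minimizing Marquardt's percent standard deviation (MPSD), can be outlined as follows; the data points, start values and the exact MPSD normalization are illustrative assumptions (Python with SciPy assumed):

      import numpy as np
      from scipy.optimize import minimize

      # Illustrative isotherm data measured at "high" concentrations.
      Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])           # equilibrium conc. (mg/L)
      qe = np.array([45.0, 78.0, 120.0, 185.0, 230.0, 268.0])   # uptake (mg/g)

      def langmuir_freundlich(C, qm, K, n):
          # Sips / Langmuir-Freundlich isotherm: qm*(K*C)^n / (1 + (K*C)^n)
          return qm * (K * C) ** n / (1.0 + (K * C) ** n)

      def mpsd(params):
          # Marquardt's percent standard deviation (relative residuals).
          qm, K, n = params
          resid = (qe - langmuir_freundlich(Ce, qm, K, n)) / qe
          return np.sqrt(np.sum(resid ** 2) / (len(qe) - len(params)))

      fit = minimize(mpsd, x0=[300.0, 0.2, 0.8], method="Nelder-Mead")
      qm, K, n = fit.x

      # Extrapolate the fitted isotherm several orders of magnitude lower,
      # toward the trace (ng/L-range) concentrations of interest.
      C_trace = np.array([1e-6, 1e-5, 1e-4])                    # mg/L
      print(qm, K, n, langmuir_freundlich(C_trace, qm, K, n))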

  3. Application of physiologically-based toxicokinetic modelling in oral-to-dermal extrapolation of threshold doses of cosmetic ingredients.

    PubMed

    Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W

    2014-06-16

    The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values.

  4. NMR Measures of Heterogeneity Length

    NASA Astrophysics Data System (ADS)

    Spiess, Hans W.

    2002-03-01

    Advanced solid state NMR spectroscopy provides a wealth of information about the structure and dynamics of complex systems. On a local scale, multidimensional solid state NMR has elucidated the geometry and the time scale of segmental motions at the glass transition. The higher order correlation functions provided by this technique led to the notion of dynamic heterogeneities, which have been characterized in detail with respect to their rate memory and length scale. In polymeric and low molar mass glass formers of different fragility, length scales in the range 2 to 4 nm are observed. In polymeric systems, incompatibility of backbone and side groups, as in polyalkylmethacrylates, leads to heterogeneities on the nm scale, which manifest themselves in unusual chain dynamics at the glass transition involving extended chain conformations. References: K. Schmidt-Rohr and H.W. Spiess, Multidimensional Solid-State NMR and Polymers, Academic Press, London (1994). U. Tracht, M. Wilhelm, A. Heuer, H. Feng, K. Schmidt-Rohr, H.W. Spiess, Phys. Rev. Lett. 81, 2727 (1998). S.A. Reinsberg, X.H. Qiu, M. Wilhelm, M.D. Ediger, H.W. Spiess, J.Chem.Phys. 114, 7299 (2001). S.A. Reinsberg, A. Heuer, B. Doliwa, H. Zimmermann, H.W. Spiess, J. Non-Crystal. Solids, in press (2002)

  5. The P/Halley: Spatial distribution and scale lengths for C2, CN, NH2, and H2O

    NASA Technical Reports Server (NTRS)

    Fink, Uwe; Combi, Michael; Disanti, Michael A.

    1991-01-01

    From P/Halley long slit spectroscopic exposures on 12 dates, extending from Oct. 1985 to May 1986, spatial profiles were obtained for emissions by C2, CN, NH2, and OI(1D). Haser model scale lengths were fitted to these data. The extended time coverage allowed the checking for consistency between the various dates. The time varying production rate of P/Halley severely affected the profiles after perihelion, which is shown in two profile sequences on adjacent dates. Because of the time varying production rate, it was not possible to obtain reliable Haser model scale lengths after perihelion. The pre-perihelion analysis yielded Haser model scale lengths of sufficient consistency that they can be used for production rate determinations, whenever it is necessary to extrapolate from observed column densities within finite observing apertures. Results of scale lengths reduced to 1 AU are given and discussed.

  6. Improving in vitro to in vivo extrapolation by incorporating toxicokinetic measurements: A case study of lindane-induced neurotoxicity

    SciTech Connect

    Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.; Mundy, William R.; Eklund, Chris R.; Johnstone, Andrew F.M.; Mack, Cina M.; Pegram, Rex A.

    2015-02-15

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC

  7. Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT

    SciTech Connect

    Xia, Y.; Maier, A.; Berger, M.; Hornegger, J.; Bauer, S.

    2015-04-15

    Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad-hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in the position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on
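
    A much-simplified illustration of the central idea, forcing an extrapolated detector-row profile to fall to zero exactly at a known patient boundary instead of at an arbitrary position, is sketched below. It is a stand-in for the paper's method, which obtains the boundary from forward-projected AP/ML fluoroscopic contours; the boundary indices and the cosine taper used here are assumptions:

      import numpy as np

      def extrapolate_row(row, left_boundary, right_boundary):
          """Extend a laterally truncated projection row so it decays smoothly
          to zero at known patient-boundary detector indices (cosine taper).
          Pixels outside the measured field of view are NaN on input."""
          out = row.copy()
          measured = np.where(~np.isnan(row))[0]
          first, last = measured[0], measured[-1]

          # Left side: ramp from zero at the boundary up to the first measured value.
          for i in range(left_boundary, first):
              frac = (i - left_boundary) / max(first - left_boundary, 1)
              out[i] = row[first] * 0.5 * (1.0 - np.cos(np.pi * frac))
          out[:left_boundary] = 0.0

          # Right side: mirror of the same taper toward the right boundary.
          for i in range(last + 1, right_boundary + 1):
              frac = (right_boundary - i) / max(right_boundary - last, 1)
              out[i] = row[last] * 0.5 * (1.0 - np.cos(np.pi * frac))
          out[right_boundary + 1:] = 0.0
          return out

      # Tiny example: 32-pixel detector row, measured only in the centre.
      row = np.full(32, np.nan)
      row[12:20] = np.array([2.0, 2.4, 2.6, 2.7, 2.7, 2.5, 2.3, 2.1])
      print(np.round(extrapolate_row(row, left_boundary=4, right_boundary=27), 2))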

  8. Demonstration of ELM pacing by Pellet Injection on DIII-D and Extrapolation to ITER

    SciTech Connect

    Baylor, Larry R; Commaux, Nicolas JC; Jernigan, Thomas C; Parks, P. B.; Evans, T.E.; Osborne, T. H.; Strait, E. J.; Fenstermacher, M. E.; Lasnier, C. J.; Moyer, R.A.; Yu, J.H.

    2010-01-01

    Deuterium pellet injection has been used in experiments on the DIII-D tokamak to investigate the possibility of triggering small rapid edge localized modes (ELMs) in reactor-relevant plasma regimes. ELMs have been observed to be triggered by small 1.8 mm pellets injected from all available locations and under all H-mode operating scenarios in DIII-D. Experimental details have shown that the ELMs are triggered before the pellets reach the top of the H-mode pedestal, implying that very small, shallowly penetrating pellets are sufficient to trigger ELMs. Fast camera images of the pellet entering the plasma from the low field side show a single plasma filament becoming visible near the pellet cloud and striking the outer vessel wall within 200 ms. Additional ejected filaments are then observed to subsequently reach the wall. The plasma stored energy loss from the pellet-triggered ELMs is a function of the elapsed time after a previous ELM. Pellet ELM pacing has been proposed as a method to prevent large ELMs that can damage the ITER plasma facing components [1]. A demonstration of ELM pacing on DIII-D was made by injecting slow 14 Hz pellets on the low field side in an ITER-shape plasma with low natural ELM frequency and a normalized beta of 1.8. The non-pellet discharge natural ELM frequency was ~5 Hz with ELM energy losses up to 85 kJ (>10% of total stored energy), while the case with pellets was able to demonstrate >20 Hz ELMs with an average ELM energy loss less than 22 kJ (<3% of the total). The resulting ELM frequency

  9. Geometry of area without length

    NASA Astrophysics Data System (ADS)

    Ho, Pei-Ming; Inami, Takeo

    2016-01-01

    To define a free string by the Nambu-Goto action, all we need is the notion of area, and mathematically the area can be defined directly in the absence of a metric. Motivated by the possibility that string theory admits backgrounds where the notion of length is not well defined but a definition of area is given, we study space-time geometries based on the generalization of a metric to an area metric. In analogy with Riemannian geometry, we define the analogues of connections, curvatures, and Einstein tensor. We propose a formulation generalizing Einstein's theory that will be useful if at a certain stage or a certain scale the metric is ill defined and the space-time is better characterized by the notion of area. Static spherical solutions are found for the generalized Einstein equation in vacuum, including the Schwarzschild solution as a special case.

  10. Evolving Patient Compliance Trends: Integrating Clinical, Insurance, and Extrapolated Socioeconomic Data

    PubMed Central

    Klobusicky, Joseph J.; Aryasomayajula, Arun; Marko, Nicholas

    2015-01-01

    Efforts toward improving patient compliance in medication focus on either identifying trends in patient features or studying changes through an intervention. Our study seeks to provide an important link between these two approaches through defining trends of evolving compliance. In addition to using clinical covariates provided through insurance claims and health records, we also extracted census based data to provide socioeconomic covariates such as income and population density. Through creating quadrants based on periods of medicine intake, we derive several novel definitions of compliance. These definitions revealed additional compliance trends through considering refill histories later in a patient’s length of therapy. These results suggested that the link between patient features and compliance includes a temporal component, and should be considered in policymaking when identifying compliant subgroups. PMID:26958212

  11. Meson-baryon scattering lengths from mixed-action lattice QCD

    SciTech Connect

    Torok, A.; Beane, S. R.; Detmold, W.; Orginos, K.; Luu, T. C.; Parreno, A.; Savage, M. J.; Walker-Loud, A.

    2010-04-01

    The π⁺Σ⁺, π⁺Ξ⁰, K⁺p, K⁺n, and K⁰Ξ⁰ scattering lengths are calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy-baryon chiral perturbation theory with two and three flavors of light quarks is used to perform the chiral extrapolations. To the order we work in the three-flavor chiral expansion, the kaon-baryon processes that we investigate show no signs of convergence. Using the two-flavor chiral expansion for extrapolation, the pion-hyperon scattering lengths are found to be a(π⁺Σ⁺) = -0.197 ± 0.017 fm and a(π⁺Ξ⁰) = -0.098 ± 0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.

  12. Choroid plexus papilloma-A case highlighting the challenges of extrapolating pediatric chemotherapy regimens to adult populations.

    PubMed

    Barman, Stephen L; Jean, Gary W; Dinsfriend, William M; Gerber, David E

    2016-02-01

    The treatment of adults who present with rare pediatric tumors is not well characterized in the literature. We report the case of a 40-year-old African American woman with a diagnosis of choroid plexus carcinoma admitted to the intensive care unit for severe sepsis seven days after receiving chemotherapy consisting of carboplatin (350 mg/m(2) on Days 1 and 2) plus etoposide (100 mg/m(2) on Days 1-5). Her laboratory results were significant for an absolute neutrophil count of 0/µL and blood cultures positive for Capnocytophaga species. She was supported with broad spectrum antibiotics and myeloid growth factors. She eventually recovered and was discharged in stable condition. The management of adults with malignancies most commonly seen in pediatric populations presents substantial challenges. There are multiple age-specific differences in renal and hepatic function that explain why pediatric patients tolerate higher dosing without an increased risk of toxicity. Furthermore, differences in pharmacokinetic parameters such as absorption, distribution, and clearance are present but are less likely to affect patients. The pediatric population is expected to have more bone marrow reserve and is, therefore, less susceptible to myelosuppression. The extrapolation of pediatric dosing to an adult presents a problematic situation when treating adults with malignancies that primarily affect pediatric patients. We recommend extrapolating from adult treatment regimens with similar agents rather than from pediatric treatment regimens to reduce the risk of toxicity. We also recommend considering the addition of myeloid growth factors. If the treatment is tolerated without significant toxicity, dose escalation can be considered.

  13. EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE

    SciTech Connect

    Jiang Chaowei; Feng Xueshang E-mail: fengx@spaceweather.ac.cn

    2013-06-01

    Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then inputted into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. Most important structures of the ARs are reproduced excellently, like the highly sheared field lines that suspend filaments in AR 11158 and twisted flux rope which corresponds to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but apparently not that well in the weak-field regions because of data noise and numerical errors in the small currents.

  14. Is the climate right for pleistocene rewilding? Using species distribution models to extrapolate climatic suitability for mammals across continents.

    PubMed

    Richmond, Orien M W; McEntee, Jay P; Hijmans, Robert J; Brashares, Justin S

    2010-01-01

    Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. This has implications for modeling range shifts of organisms in response

  15. Extrapolation chamber mounted on perspex for calibration of high energy photon and electron beams from a clinical linear accelerator

    PubMed Central

    Ravichandran, R.; Binukumar, J. P.; Sivakumar, S. S.; Krishnamurthy, K.; Davis, C. A.

    2009-01-01

    The objective of the present study is to establish radiation standards for absorbed dose for clinical high energy linear accelerator beams. In the absence of a cobalt-60 beam for deriving N(D,water) values for thimble chambers, we investigated the efficacy of a Perspex-mounted extrapolation chamber (EC) used earlier for low energy x-ray and beta dosimetry. An extrapolation chamber allowing variable electrode separations from 10.5 mm to 0.5 mm by means of a micrometer screw was used for the calibrations. Photon beams of 6 MV and 15 MV and electron beams of 6 MeV and 15 MeV from Varian Clinac linacs were calibrated. Absorbed dose estimates to Perspex were converted into dose to solid water for comparison with FC 65 ionisation chamber measurements in water. Measurements made during the period December 2006 to June 2008 are considered for evaluation. Uncorrected ionization readings of the EC for all the radiation beams over the entire period were within 2%, showing the consistency of the measurements. Absorbed doses estimated by the EC were in good agreement with in-water calibrations, within 2%, for photon and electron beams. The present results suggest that extrapolation chambers can be considered as an independent measuring system for absorbed dose in addition to Farmer-type ion chambers. In the absence of the standard beam quality (Co-60 radiation as the reference quality for N(D,water)), the possibility of keeping the EC as a primary standard for absorbed dose calibrations in high energy radiation beams from linacs should be explored. As there are neither standards laboratories nor an SSDL available in our country, we look forward to keeping the EC as a local standard for hospital chamber calibrations. We are also participating in the IAEA mailed TLD intercomparison programme for quality audit of the existing status of radiation dosimetry in high energy linac beams. The performance of the EC has to be confirmed with cobalt-60 beams in a separate study, as linacs are susceptible to minor variations in dose
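
    The raw quantity behind extrapolation-chamber dosimetry is the limiting slope of collected charge versus electrode separation, which is converted to absorbed dose using the air density, the collecting-electrode area, the mean energy expended per unit charge (W/e), and a medium-to-air stopping-power (or mass energy-absorption) ratio. A minimal sketch of that reduction with illustrative readings and assumed constants:

      import numpy as np

      # Illustrative readings: electrode gap (mm) and collected charge (nC).
      gap_mm    = np.array([0.5, 1.5, 3.0, 5.0, 7.5, 10.5])
      charge_nC = np.array([0.62, 1.85, 3.72, 6.18, 9.25, 12.95])

      # Limiting slope dQ/dd (a linear fit here; a low-order polynomial whose
      # derivative is evaluated at zero gap is an alternative).
      slope, _ = np.polyfit(gap_mm, charge_nC, 1)
      dq_dd = slope * 1e-9 / 1e-3            # convert to C per metre of gap

      W_over_e  = 33.97                      # J/C for dry air
      rho_air   = 1.197                      # kg/m^3 (assumed 20 C, 101.3 kPa)
      area_m2   = np.pi * 0.015 ** 2         # assumed 3 cm diameter electrode
      s_med_air = 1.1                        # assumed medium/air stopping-power ratio

      dose_gy = W_over_e * s_med_air * dq_dd / (rho_air * area_m2)
      print(f"absorbed dose to medium ~ {dose_gy:.3f} Gy")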

  16. Lowrank finite-differences and lowrank Fourier finite-differences for seismic wave extrapolation in the acoustic approximation

    NASA Astrophysics Data System (ADS)

    Song, Xiaolei; Fomel, Sergey; Ying, Lexing

    2013-05-01

    We introduce a novel finite-difference (FD) approach for seismic wave extrapolation in time. We derive the coefficients of the finite-difference operator from a lowrank approximation of the space-wavenumber, wave-propagator matrix. Applying the technique of lowrank finite-differences, we also improve the finite difference scheme of the two-way Fourier finite differences (FFD). We call the new operator lowrank Fourier finite differences (LFFD). Both the lowrank FD and lowrank FFD methods can be applied to enhance accuracy in seismic imaging by reverse-time migration. Numerical examples confirm the validity of the proposed technique.

  17. Interpolation and extrapolation problems of multivariate regression in analytical chemistry: benchmarking the robustness on near-infrared (NIR) spectroscopy data.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2012-04-01

    Modern analytical chemistry of industrial products is in need of rapid, robust, and cheap analytical methods to continuously monitor product quality parameters. For this reason, spectroscopic methods are often used to control the quality of industrial products in an on-line/in-line regime. Vibrational spectroscopy, including mid-infrared (MIR), Raman, and near-infrared (NIR), is one of the best ways to obtain information about the chemical structures and the quality coefficients of multicomponent mixtures. Together with chemometric algorithms and multivariate data analysis (MDA) methods, which were especially created for the analysis of complicated, noisy, and overlapping signals, NIR spectroscopy shows great results in terms of its accuracy, including classical prediction error, RMSEP. However, it is unclear whether the combined NIR + MDA methods are capable of dealing with much more complex interpolation or extrapolation problems that are inevitably present in real-world applications. In the current study, we try to make a rather general comparison of linear, such as partial least squares or projection to latent structures (PLS); "quasi-nonlinear", such as the polynomial version of PLS (Poly-PLS); and intrinsically non-linear, such as artificial neural networks (ANNs), support vector regression (SVR), and least-squares support vector machines (LS-SVM/LSSVM), regression methods in terms of their robustness. As a measure of robustness, we will try to estimate their accuracy when solving interpolation and extrapolation problems. Petroleum and biofuel (biodiesel) systems were chosen as representative examples of real-world samples. Six very different chemical systems that differed in complexity, composition, structure, and properties were studied; these systems were gasoline, ethanol-gasoline biofuel, diesel fuel, aromatic solutions of petroleum macromolecules, petroleum resins in benzene, and biodiesel. Eighteen different sample sets were used in total. General
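
    One compact way to reproduce the kind of robustness check discussed, contrasting a random (interpolation) split with a split that forces prediction beyond the calibrated property range (extrapolation), is sketched below for a PLS calibration using scikit-learn; all spectra are simulated stand-ins, since the study's own NIR datasets are not part of this record:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(0)

      # Synthetic "spectra": 200 samples x 50 channels, property y driven by
      # three latent components plus noise (a stand-in for an NIR calibration).
      n, p = 200, 50
      latent = rng.normal(size=(n, 3))
      X = latent @ rng.normal(size=(3, p)) + 0.05 * rng.normal(size=(n, p))
      y = latent @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.normal(size=n)

      order = np.argsort(y)
      X, y = X[order], y[order]
      cut = int(0.8 * n)
      idx = rng.permutation(n)
      pls = PLSRegression(n_components=3)

      # Extrapolation: train on the lower 80% of the property range, predict the top 20%.
      pls.fit(X[:cut], y[:cut])
      rmsep_ex = mean_squared_error(y[cut:], pls.predict(X[cut:])) ** 0.5

      # Interpolation: random 80/20 split over the same data.
      pls.fit(X[idx[:cut]], y[idx[:cut]])
      rmsep_in = mean_squared_error(y[idx[cut:]], pls.predict(X[idx[cut:]])) ** 0.5

      print(f"RMSEP interpolation {rmsep_in:.3f} vs extrapolation {rmsep_ex:.3f}")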

  18. 28 CFR 551.4 - Hair length.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...

  19. 28 CFR 551.4 - Hair length.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...

  20. 28 CFR 551.4 - Hair length.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...

  1. 28 CFR 551.4 - Hair length.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Hair length. 551.4 Section 551.4 Judicial... Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b) The Warden shall require an inmate with long hair to wear a cap or hair net when working in food service...

  2. 7 CFR 51.610 - Midrib length.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Midrib length. 51.610 Section 51.610 Agriculture... Consumer Standards for Celery Stalks Definitions § 51.610 Midrib length. Midrib length of a branch means the distance between the point of attachment to the root and the first node....

  3. 7 CFR 51.610 - Midrib length.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Midrib length. 51.610 Section 51.610 Agriculture... Consumer Standards for Celery Stalks Definitions § 51.610 Midrib length. Midrib length of a branch means the distance between the point of attachment to the root and the first node....

  4. 28 CFR 551.4 - Hair length.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Hair length. 551.4 Section 551.4 Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INSTITUTIONAL MANAGEMENT MISCELLANEOUS Grooming § 551.4 Hair length. (a) The Warden may not restrict hair length if the inmate keeps it neat and clean. (b)...

  5. Calculation of the π⁺Σ⁺ and π⁺Ξ⁰ Scattering Lengths in Lattice QCD

    SciTech Connect

    Torok, Aaron

    2011-10-24

    The π⁺Σ⁺ and π⁺Ξ⁰ scattering lengths were calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks was used to perform the chiral extrapolations. To NNLO in the three-flavor chiral expansion, the kaon-baryon processes that were investigated show no signs of convergence. Using the two-flavor chiral expansion for extrapolation, the pion-hyperon scattering lengths are found to be a(π⁺Σ⁺) = -0.197 ± 0.017 fm and a(π⁺Ξ⁰) = -0.098 ± 0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.

  6. Size Scaling in Western North Atlantic Loggerhead Turtles Permits Extrapolation between Regions, but Not Life Stages

    PubMed Central

    Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko

    2015-01-01

    Introduction Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results and Discussion Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal. PMID:26629702
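
    The scaling relationships compared here are, in their simplest form, straight lines on log-log axes, e.g. log(width) = log(c) + b·log(length), with b = 1 corresponding to isometric growth; the saturating alternative adds curvature. A minimal sketch of fitting the allometric (log-log linear) model on illustrative morphometric measurements:

      import numpy as np

      # Illustrative straight carapace length (cm) and width (cm) measurements.
      length_cm = np.array([7.0, 15.0, 25.0, 40.0, 55.0, 70.0, 85.0, 95.0])
      width_cm  = np.array([5.6, 11.8, 19.4, 30.5, 41.2, 52.0, 62.5, 69.0])

      # Allometric model: width = c * length**b, i.e. linear in log-log space.
      b, log_c = np.polyfit(np.log(length_cm), np.log(width_cm), 1)
      print(f"allometric exponent b = {b:.3f} (b = 1 would be isometry)")

      # Predict the width of a new animal from the fitted relationship.
      new_length = 60.0
      print(f"predicted width at {new_length} cm ~ {np.exp(log_c) * new_length ** b:.1f} cm")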

  7. EVOLUTION OF A MAGNETIC FLUX ROPE AND ITS OVERLYING ARCADE BASED ON NONLINEAR FORCE-FREE FIELD EXTRAPOLATIONS

    SciTech Connect

    Jing, Ju; Liu, Chang; Lee, Jeongwoo; Wang, Shuo; Xu, Yan; Wang, Haimin; Wiegelmann, Thomas

    2014-03-20

    Dynamic phenomena indicative of slipping reconnection and magnetic implosion were found in a time series of nonlinear force-free field (NLFFF) extrapolations for the active region 11515, which underwent significant changes in the photospheric fields and produced five C-class flares and one M-class flare over five hours on 2012 July 2. NLFFF extrapolation was performed for the uninterrupted 5 hour period from the 12 minute cadence vector magnetograms of the Helioseismic and Magnetic Imager on board the Solar Dynamic Observatory. According to the time-dependent NLFFF model, there was an elongated, highly sheared magnetic flux rope structure that aligns well with an Hα filament. This long filament splits sideways into two shorter segments, which further separate from each other over time at a speed of 1-4 km s⁻¹, much faster than that of the footpoint motion of the magnetic field. During the separation, the magnetic arcade arching over the initial flux rope significantly decreases in height from ∼4.5 Mm to less than 0.5 Mm. We discuss the reality of this modeled magnetic restructuring by relating it to the observations of the magnetic cancellation, flares, a filament eruption, a penumbra formation, and magnetic flows around the magnetic polarity inversion line.

  8. The effect of surface roughness on extrapolation from thickness C-scan data using extreme value theory

    NASA Astrophysics Data System (ADS)

    Benstock, Daniel; Cegla, Frederic

    2015-03-01

    Ultrasonic thickness C-scans are a key tool in the assessment of the condition of engineering components. C-scans provide information of the wall thickness over the entire inspected area. Full inspection of a component is time consuming, costly and sometimes impossible due to obstacles. Therefore, the condition of the whole structure is often estimated by extrapolation of data from a small sample where C-scan information is available. Extreme value theory (EVT) provides a framework by which one can extrapolate to the size of the worst case defect from a small inspected sample area of a component. The framework and assumptions of EVT are discussed, with experimental and simulated examples. The influence of both the surface roughness and the timing algorithm, used to extract thickness measurements from the collected ultrasonic signals, is also analyzed. It can be shown that for uniformly rough surfaces the C-scan data can lead to conservative estimates of the size of the worst case defect.
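
    As a concrete illustration of the block-extremes framework the paper discusses, the sketch below partitions a simulated thickness map into blocks, fits a generalized extreme value distribution to the block minima, and extrapolates a worst-case thickness to a larger uninspected area; the data, block size, and extrapolation factor are all hypothetical.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical C-scan thickness map (mm): nominal 10 mm wall with uniform roughness.
rng = np.random.default_rng(1)
thickness = 10.0 + 0.2 * rng.standard_normal((500, 500))

# Block-minima approach: split the inspected area into blocks and keep each block's
# minimum thickness. Negating the minima lets us use the standard GEV for maxima.
block = 50
mins = thickness.reshape(500 // block, block, 500 // block, block).min(axis=(1, 3))
neg_mins = -mins.ravel()

# Fit a generalized extreme value distribution to the block extremes.
shape, loc, scale = genextreme.fit(neg_mins)

# Extrapolate: the expected worst case over an area m times larger than the sample
# corresponds roughly to the (1 - 1/(m * n_blocks)) quantile of the fitted GEV.
m = 20                        # uninspected area is 20x the inspected sample
n_blocks = neg_mins.size
worst = -genextreme.ppf(1.0 - 1.0 / (m * n_blocks), shape, loc=loc, scale=scale)
print(f"extrapolated worst-case thickness over {m}x area: {worst:.2f} mm")
```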

  9. Investigation into the validity of extrapolation in setting maximum residue levels for pesticides in crops of similar morphology.

    PubMed

    Reynolds, S L; Fussell, R J; MacArthur, R

    2005-01-01

    Field trials were initiated to investigate if extrapolation procedures, which were adopted to limit costs of pesticide registration for minor crops, are valid. Three pairs of crops of similar morphology, namely carrots/swedes, cauliflower/calabrese (broccoli) and French beans/edible-podded peas, were grown in parallel at four different geographical locations within the UK. The crops were treated with both systemic and non-systemic pesticides under maximum registered use conditions, i.e. the maximum permitted application rates and the minimum harvest intervals. Once mature, the crops were harvested and analysed for residues of the applied pesticides. The limits of quantification were in the range 0.005-0.02 mg kg⁻¹. Analysis of variance and bootstrap estimates showed that in general, the mean residue concentrations for the individual pesticides were significantly different between crop pairs grown on each site. Similarly, the mean residue concentrations of most of the pesticides in each crop across sites were significantly different. These findings demonstrate that the extrapolations of residue levels for most of the selected pesticide/crop combinations investigated (chlorfenvinphos and iprodione from carrots to swedes; carbendazim, chlorpyrifos, diflubenzuron and dimethoate from cauliflower to calabrese; and malathion, metalaxyl and pirimicarb from French beans to edible-podded peas) appear invalid. PMID:15895609

  10. Extrapolation of gas-reserve growth potential: Development of examples from macro approaches. Final report, October 1990-November 1991

    SciTech Connect

    Jackson, M.L.W.; Finley, R.J.

    1992-08-01

    An analysis of infield completions and reserve growth potential was made in Tertiary nonassociated gas reservoirs in South Texas. Infield well completions were defined from a concurrent GRI project involving macro-scale prediction of reserve growth. The report validates 78 percent, or 5.6 Tcf, of a high-end infill estimate of 7.2 Tcf for nine stratigraphic units in South Texas. This is a significant resource volume given the historical expectation that natural gas can be efficiently drained with widely spaced wells (1 or 2 per square mile) in conventional reservoirs. Groups of infield completions, or reservoir sections, from Frio, Vicksburg, Wilcox, and Miocene reservoirs were examined using geophysical well logs and production and pressure analyses. Seven reservoir-section types that contributed to the macro reserve growth estimate were evaluated. About 20 percent of the estimate consists of gas volumes extrapolated using consolidated reservoir groups, cycled reservoirs with invalid data. Additional gas volumes in the estimate were extrapolated from reservoir sections representing rate acceleration. The estimate also includes reservoir volumes from the low-permeability Wilcox Lobo trend, where limited drainage radii lead to expected reserve growth. Volumes that represent within-reservoir reserve growth and volumes that represent shallower- or deeper-pool reservoirs determined not to be in pressure communication with preceding completions in a reservoir section formed most of the macro reserve growth estimate.

  11. Human plasma concentrations of cytochrome P450 probes extrapolated from pharmacokinetics in cynomolgus monkeys using physiologically based pharmacokinetic modeling.

    PubMed

    Shida, Satomi; Utoh, Masahiro; Murayama, Norie; Shimizu, Makiko; Uno, Yasuhiro; Yamazaki, Hiroshi

    2015-01-01

    1. Cynomolgus monkeys are widely used in preclinical studies as non-human primate species. Pharmacokinetics of human cytochrome P450 probes determined in cynomolgus monkeys after single oral or intravenous administrations were extrapolated to give human plasma concentrations. 2. Plasma concentrations of slowly eliminated caffeine and R-/S-warfarin and rapidly eliminated omeprazole and midazolam previously observed in cynomolgus monkeys were scaled to human oral biomonitoring equivalents using known species allometric scaling factors and in vitro metabolic clearance data with a simple physiologically based pharmacokinetic (PBPK) model. Results of the simplified human PBPK models were consistent with reported experimental PK data in humans or with values simulated by a fully constructed population-based simulator (Simcyp). 3. Oral administrations of metoprolol and dextromethorphan (human P450 2D probes) in monkeys reportedly yielded plasma concentrations similar to their quantitative detection limits. Consequently, ratios of in vitro hepatic intrinsic clearances of metoprolol and dextromethorphan determined in monkeys and humans were used with simplified PBPK models to extrapolate intravenous PK in monkeys to oral PK in humans. 4. These results suggest that cynomolgus monkeys, despite their rapid clearance of some human P450 substrates, could be a suitable model for humans, especially when used in conjunction with simple PBPK models.
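
    A full PBPK model is beyond the scope of this record, but the general monkey-to-human extrapolation step can be sketched with a one-compartment oral model and simple allometric scaling; every parameter value below is hypothetical and the model is far simpler than the published one.

```python
import numpy as np

# Hypothetical cynomolgus monkey PK parameters for an orally dosed P450 probe substrate.
cl_monkey = 1.2                  # total clearance, L/h/kg
vd_monkey = 2.5                  # volume of distribution, L/kg
f_oral = 0.6                     # oral bioavailability, assumed shared across species
bw_monkey, bw_human = 5.0, 70.0  # body weights, kg

# Allometric scaling: clearance scales with body weight^0.75, volume with weight^1.0.
cl_human = cl_monkey * bw_monkey * (bw_human / bw_monkey) ** 0.75 / bw_human  # L/h/kg
vd_human = vd_monkey                                                          # L/kg

# One-compartment model with first-order absorption for the predicted human profile.
dose = 2.0                        # mg/kg oral dose
ka = 1.0                          # absorption rate constant, 1/h
ke = cl_human / vd_human          # elimination rate constant, 1/h
t = np.linspace(0.0, 24.0, 97)
conc = (f_oral * dose * ka / (vd_human * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

print(f"predicted human Cmax ~ {conc.max():.3f} mg/L")
print(f"predicted human AUC  ~ {f_oral * dose / cl_human:.2f} mg*h/L")
```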

  12. Enhancement of low-quality reconstructed digital hologram images based on frequency extrapolation of large objects under the diffraction limit

    NASA Astrophysics Data System (ADS)

    Liu, Ning; Li, Weiliang; Zhao, Dongxue

    2016-06-01

    During the reconstruction of a digital hologram, the reconstructed image is usually degraded by speckle noise, which makes it hard to observe the original object pattern. In this paper, a new reconstructed image enhancement method is proposed, which first reduces the speckle noise using an adaptive Gaussian filter, then calculates the high frequencies that belong to the object pattern based on a frequency extrapolation strategy. The proposed frequency extrapolation first calculates the frequency spectrum of the Fourier-filtered image, which is originally reconstructed from the +1 order of the hologram, and then gives the initial parameters for an iterative solution. The analytic iteration is implemented by continuous gradient threshold convergence to estimate the image level and vertical gradient information. The predicted spectrum is acquired through the analytical iteration of the original spectrum and gradient spectrum analysis. Finally, the reconstructed spectrum of the restoration image is acquired from the synthetic correction of the original spectrum using the predicted gradient spectrum. We conducted our experiment very close to the diffraction limit and used low-quality equipment to prove the feasibility of our method. Detailed analysis and figure demonstrations are presented in the paper.
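
    The analytic gradient-driven iteration described above is specific to the paper; as a hedged illustration of the underlying idea of frequency extrapolation (recovering spectrum content beyond a diffraction-limited passband by alternating between frequency-domain and object-domain constraints), the sketch below applies a classic Gerchberg-Papoulis iteration to a synthetic one-dimensional signal.

```python
import numpy as np

# Synthetic "object" signal and a low-pass (diffraction-limited) observation of it.
n = 256
x = np.zeros(n)
x[100:110] = 1.0                      # small object feature
passband = np.zeros(n, dtype=bool)
passband[np.abs(np.fft.fftfreq(n)) < 0.08] = True
observed = np.fft.ifft(np.fft.fft(x) * passband).real

# Gerchberg-Papoulis iteration: keep the measured low frequencies, impose a known
# spatial support, and let the iteration rebuild frequencies outside the passband.
support = np.zeros(n, dtype=bool)
support[90:120] = True                # assumed prior knowledge of where the object lies
estimate = observed.copy()
for _ in range(200):
    spectrum = np.fft.fft(estimate)
    spectrum[passband] = np.fft.fft(observed)[passband]   # enforce measured band
    estimate = np.fft.ifft(spectrum).real
    estimate[~support] = 0.0                               # enforce spatial support

print("RMS error before:", np.sqrt(np.mean((observed - x) ** 2)))
print("RMS error after :", np.sqrt(np.mean((estimate - x) ** 2)))
```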

  13. Determination of the most appropriate method for extrapolating overall survival data from a placebo-controlled clinical trial of lenvatinib for progressive, radioiodine-refractory differentiated thyroid cancer

    PubMed Central

    Tremblay, Gabriel; Livings, Christopher; Crowe, Lydia; Kapetanakis, Venediktos; Briggs, Andrew

    2016-01-01

    Background Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. Objectives This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Methods Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. Results A piecewise model, in which the Kaplan–Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. Conclusion In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and populate future cost-effectiveness analyses. PMID:27418847
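
    A minimal numerical sketch of the selected piecewise approach, i.e. the Kaplan-Meier survivor function within the follow-up period joined to an Exponential tail beyond it, is given below; the survival times are simulated, the constant hazard is fitted from the whole trial period for simplicity, and this is not the trial analysis itself.

```python
import numpy as np

# Simulated trial data: follow-up times (months) and event indicators (1 = death).
rng = np.random.default_rng(2)
raw = rng.exponential(30.0, 260)
times = np.minimum(raw, 34.0)              # administrative censoring at 34 months
events = (raw < 34.0).astype(int)

# Kaplan-Meier survivor function over the trial period.
order = np.argsort(times)
t_sorted, e_sorted = times[order], events[order]
at_risk = np.arange(len(t_sorted), 0, -1)
km = np.cumprod(1.0 - e_sorted / at_risk)

# Exponential tail by maximum likelihood (constant hazard = events / person-time),
# anchored to the Kaplan-Meier estimate at the end of follow-up.
hazard = events.sum() / times.sum()
t_end, s_end = t_sorted[-1], km[-1]

def survival(t):
    """Piecewise model: Kaplan-Meier inside the trial, Exponential extrapolation beyond."""
    t = np.asarray(t, dtype=float)
    inside = np.interp(t, t_sorted, km)
    tail = s_end * np.exp(-hazard * (t - t_end))
    return np.where(t <= t_end, inside, tail)

grid = np.linspace(0.0, 300.0, 3001)
mean_os = survival(grid).sum() * (grid[1] - grid[0])   # area under the survival curve
print(f"S(24 months) = {float(survival(24.0)):.3f}")
print(f"S(60 months) = {float(survival(60.0)):.3f}")
print(f"extrapolated mean overall survival ~ {mean_os:.1f} months")
```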

  14. Validity of fascicle length estimation in the vastus lateralis and vastus intermedius using ultrasonography.

    PubMed

    Ando, Ryosuke; Taniguchi, Keigo; Saito, Akira; Fujimiya, Mineko; Katayose, Masaki; Akima, Hiroshi

    2014-04-01

    The purpose of this study was to determine the validity of fascicle length estimation in the vastus lateralis (VL) and vastus intermedius (VI) using ultrasonography. The fascicle lengths of the VL and VI muscles were measured directly (dFL) using calipers, and were estimated (estmFL) using ultrasonography, in 10 legs from five Thiel's embalmed cadavers. To determine the validity of the estmFLs, FL was estimated using five previously published models and compared with dFL. The intraclass correlation coefficients (ICCs) of two of the five models were > 0.75, indicating that these estimates were valid. Both of these models combined measurement of the length of the visible part of the fascicle with linear extrapolation of the length of the part of the fascicle that was not visible on the sonographic image. The ICCs and absolute % difference were best in models that used appropriate pennation angles. These results suggest that two of the five previously published models are valid for obtaining estmFL of the VL and VI using ultrasonography.
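
    The validated models combine the visible fascicle segment with a linear extrapolation of the portion that leaves the image; the sketch below expresses that idea as a single function with hypothetical measurements, and is not a reproduction of the specific published model equations.

```python
import numpy as np

def estimate_fascicle_length(visible_len_cm, gap_depth_cm, pennation_deg):
    """Visible fascicle segment plus a linear extrapolation of the part that leaves
    the field of view, assuming the fascicle continues as a straight line at the
    measured pennation angle until it reaches the aponeurosis."""
    extrapolated = gap_depth_cm / np.sin(np.radians(pennation_deg))
    return visible_len_cm + extrapolated

# Hypothetical ultrasound measurements for a vastus lateralis fascicle.
print(f"estimated fascicle length: {estimate_fascicle_length(5.2, 1.4, 14.0):.2f} cm")
```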

  15. A methodology for direct quantification of over-ranging length in helical computed tomography with real-time dosimetry.

    PubMed

    Tien, Christopher J; Winslow, James F; Hintenlang, David E

    2011-01-01

    In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed as dose-length-product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at a pitch of 0.5 to 6.72 cm at a pitch of 1.5, which corresponds to a DLP of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of the beam collimation width. Over-ranging length measured as a function of reconstruction slice thickness produced two linear regions similar to previous publications. Over-ranging is quantified with both absolute length and DLP, which contributes about 60 mGy-cm or about 10% of the DLP for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, in comparison with 5% in other methodologies. Clinical implementation can be simplified by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1%. This methodology will be applied to different vendors, models, and postprocessing methods--which have been shown to produce over-ranging lengths
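
    The direct measurement essentially reduces to integrating the time a point dosimeter spends in the beam and converting it to length with the table speed; the sketch below illustrates that bookkeeping on a synthetic 10 ms resolution dose-rate trace, with all numbers hypothetical.

```python
import numpy as np

# Synthetic real-time dosimeter trace (10 ms resolution): dose rate seen by a point
# detector as the helical beam passes, padded by over-ranging on both ends.
dt = 0.01                              # s
table_speed = 4.0                      # cm/s
t = np.arange(0.0, 10.0, dt)
dose_rate = np.where((t > 2.0) & (t < 8.0), 5.0, 0.0)   # mGy/s while irradiated

# Irradiated length = table speed x time above a small threshold.
irradiated_time = dt * np.count_nonzero(dose_rate > 0.1)
irradiated_length = table_speed * irradiated_time        # cm
planned_length = 20.0                                    # cm, programmed scan range
over_ranging = irradiated_length - planned_length

# DLP contribution of over-ranging: a CTDIvol-like dose times the extra length.
ctdi_vol = 10.0                                          # mGy, hypothetical
print(f"over-ranging length: {over_ranging:.2f} cm")
print(f"over-ranging DLP   : {ctdi_vol * over_ranging:.1f} mGy*cm")
```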

  16. Extrapolating soil redistribution rates estimated from 137Cs to catchment scale in a complex agroforestry landscape using GIS

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; López-Vicente, Manuel; Palazón, Leticia; Quijano, Laura; Navas, Ana

    2015-04-01

    The use of fallout radionuclides, particularly 137Cs, in soil erosion investigations has been successfully used over a range of different landscapes. This technique provides mean annual values of spatially distributed soil erosion and deposition rates for the last 40-50 years. However, upscaling the data provided by fallout radionuclides to catchment level is required to understand soil redistribution processes, to support catchment management strategies, and to assess the main soil erosion factors like vegetation cover or topography. In recent years, extrapolating field scale soil erosion rates estimated from 137Cs data to catchment scale has been addressed using geostatistical interpolation and Geographical Information Systems (GIS). This study aims to assess soil redistribution in an agroforestry catchment characterized by abrupt topography and an intricate mosaic of land uses using 137Cs data and GIS. A new methodological approach using GIS is presented as an alternative of interpolation tools to extrapolating soil redistribution rates in complex landscapes. This approach divides the catchment into Homogeneous Physiographic Units (HPUs) based on unique land use, hydrological network and slope value. A total of 54 HPUs presenting specific land use, strahler order and slope combinations, were identified within the study area (2.5 km2) located in the north of Spain. Using 58 soil erosion and deposition rates estimated from 137Cs data, we were able to characterize the predominant redistribution processes in 16 HPUs, which represent the 78% of the study area surface. Erosion processes predominated in 6 HPUs (23%) which correspond with cultivated units in which slope and strahler order is moderate or high, and with scrubland units with high slope. Deposition was predominant in 3 HPUs (6%), mainly in riparian areas, and to a lesser extent in forest and scrubland units with low slope and low and moderate strahler order. Redistribution processes, both erosion and
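
    The HPU-based extrapolation amounts to grouping sampled points by the attributes that define each unit and assigning the group means to the mapped polygons; a minimal pandas sketch with hypothetical column names and values is shown below.

```python
import pandas as pd

# Hypothetical point data: 137Cs-derived soil redistribution rates (t/ha/yr, negative
# = erosion) with the attributes used to define Homogeneous Physiographic Units.
points = pd.DataFrame({
    "land_use": ["cultivated", "cultivated", "scrubland", "forest", "riparian", "cultivated"],
    "strahler": [2, 2, 1, 1, 3, 3],
    "slope_class": ["moderate", "moderate", "high", "low", "low", "high"],
    "rate": [-8.4, -6.1, -12.3, 0.9, 5.7, -9.8],
})

# Each unique (land use, strahler order, slope class) combination is one HPU;
# the mean measured rate characterises the redistribution process within it.
hpu_rates = points.groupby(["land_use", "strahler", "slope_class"])["rate"].mean()

# Hypothetical HPU polygons (from GIS) with their areas, keyed by the same attributes.
hpus = pd.DataFrame({
    "land_use": ["cultivated", "scrubland", "forest", "riparian", "cultivated"],
    "strahler": [2, 1, 1, 3, 3],
    "slope_class": ["moderate", "high", "low", "low", "high"],
    "area_ha": [42.0, 17.5, 60.2, 8.3, 21.0],
})

# Extrapolate: assign each mapped HPU the mean rate of its sampled counterpart.
hpus = hpus.join(hpu_rates, on=["land_use", "strahler", "slope_class"])
hpus["soil_flux_t_per_yr"] = hpus["rate"] * hpus["area_ha"]
print(hpus)
```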

  17. Diffusion length and solar cell efficiency

    NASA Astrophysics Data System (ADS)

    Huber, D.; Wahlich, R.; Bachmaier, A.

    The diffusion length of the minority carriers of a solar cell defines the appropriate technology which should be applied for the solar cell fabrication. Back surface techniques only pay off if the diffusion length is long enough. Monocrystalline material with different lifetime killing defects was investigated and an experimental correlation between the diffusion length measured on the unprocessed wafer and the efficiency of the finished cell could be established.

  18. Controlling Arc Length in Plasma Welding

    NASA Technical Reports Server (NTRS)

    Iceland, W. F.

    1986-01-01

    Circuit maintains arc length on irregularly shaped workpieces. Length of plasma arc continuously adjusted by control circuit to maintain commanded value. After pilot arc is established, contactor closed and transfers arc to workpiece. Control circuit then half-wave rectifies ac arc voltage to produce dc control signal proportional to arc length. Circuit added to plasma arc welding machines with few wiring changes. Welds made with circuit cleaner and require less rework than welds made without it. Beads smooth and free of inclusions.

  19. Measuring Crack Length in Coarse Grain Ceramics

    NASA Technical Reports Server (NTRS)

    Salem, Jonathan A.; Ghosn, Louis J.

    2010-01-01

    Due to a coarse grain structure, crack lengths in precracked spinel specimens could not be measured optically, so the crack lengths and fracture toughness were estimated by strain gage measurements. An expression was developed via finite element analysis to correlate the measured strain with crack length in four-point flexure. The fracture toughness estimated by the strain gaged samples and another standardized method were in agreement.

  20. Hadron-hadron interactions from N_f = 2 + 1 + 1 lattice QCD: isospin-2 ππ scattering length

    NASA Astrophysics Data System (ADS)

    Helmes, C.; Jost, C.; Knippschild, B.; Liu, L.; Urbach, C.; Ueding, M.; Werner, M.; Liu, C.; Liu, J.; Wang, Z.

    2015-09-01

    We present results for the I = 2 ππ scattering length using N_f = 2 + 1 + 1 twisted mass lattice QCD for three values of the lattice spacing and a range of pion mass values. Due to the use of Laplacian Heaviside smearing our statistical errors are reduced compared to previous lattice studies. A detailed investigation of systematic effects such as discretisation effects, volume effects, and pollution of excited and thermal states is performed. After extrapolation to the physical point using chiral perturbation theory at NLO we obtain M_π a_0 = −0.0442(2)_stat(+4/−0)_sys.
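
    The chiral extrapolation step used in this kind of analysis can be sketched with the NLO chiral perturbation theory expression for M_π a_0 as a function of M_π/f_π, fitted to lattice points and evaluated at the physical point; the lattice values and the fitted low-energy constant below are hypothetical, and this is not the collaboration's fit code.

```python
import numpy as np
from scipy.optimize import curve_fit

# NLO chiral perturbation theory form for the I = 2 pi-pi scattering length,
# written in terms of x = M_pi / f_pi with one low-energy constant l_pp
# renormalised at the scale mu = f_pi.
def mpi_a0_nlo(x, l_pp):
    return -(x**2 / (8.0 * np.pi)) * (
        1.0 + (x**2 / (16.0 * np.pi**2)) * (3.0 * np.log(x**2) - 1.0 - l_pp)
    )

# Hypothetical lattice points: M_pi/f_pi and the measured M_pi * a_0 with errors.
x_lat = np.array([1.80, 2.05, 2.33, 2.62])
y_lat = np.array([-0.125, -0.164, -0.217, -0.282])
y_err = np.array([0.003, 0.003, 0.004, 0.005])

popt, pcov = curve_fit(mpi_a0_nlo, x_lat, y_lat, sigma=y_err, absolute_sigma=True, p0=[3.0])
x_phys = 1.06   # approximate physical M_pi / f_pi in the f_pi ~ 130 MeV normalisation
print(f"fitted low-energy constant: {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")
print(f"extrapolated M_pi * a_0 at the physical point: {mpi_a0_nlo(x_phys, *popt):.4f}")
```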

  1. Precise determination of the I=2 ππ scattering length from mixed-action lattice QCD

    SciTech Connect

    Beane, Silas R.; Torok, Aaron; Luu, Thomas C.; Orginos, Kostas; Parreno, Assumpta; Savage, Martin J.; Walker-Loud, Andre

    2008-01-01

    The I=2 ππ scattering length is calculated in fully dynamical lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations (with fourth-rooted staggered sea quarks) at four light-quark masses. Two- and three-flavor mixed-action chiral perturbation theory at next-to-leading order is used to perform the chiral and continuum extrapolations. At the physical charged pion mass, we find m_π a_ππ^{I=2} = −0.04330 ± 0.00042, where the error bar combines the statistical and systematic uncertainties in quadrature.

  2. The vector effective length of slot antennas

    NASA Astrophysics Data System (ADS)

    Wunsch, A. D.

    1991-05-01

    A suitable definition for the vector effective length of an arbitrary slot receiving antenna placed in a large conducting plane is presented, and a general formula is obtained for its derivation. To derive the definition and formula, the effective length of a general wire antenna is first derived, and then an analogous method is applied to the slot problem. The relationship of the slot's effective length to that for a flat strip wire antenna driven by a specified current is obtained. Formulas for the lengths of some specific common slot antennas are derived from the general expression. The current sampling property of a small straight slot is discussed.

  3. Generalizations of Brandl's theorem on Engel length

    NASA Astrophysics Data System (ADS)

    Quek, S. G.; Wong, K. B.; Wong, P. C.

    2013-04-01

    Let n < m be positive integers such that [g, _n h] = [g, _m h], and assume that n and m are chosen minimal with respect to this property. Let g_i = [g, _{n+i} h] where i = 1, 2, …, m−n. Then π(g,h) = (g_1, …, g_{m−n}) is called the Engel cycle generated by g and h. The length of the Engel cycle is m−n. A group G is said to have Engel length r if the lengths of all Engel cycles in G divide r. In this paper we discuss Brandl's theorem on Engel length and give some of its generalizations.

  4. Pi Bond Orders and Bond Lengths

    ERIC Educational Resources Information Center

    Herndon, William C.; Parkanyi, Cyril

    1976-01-01

    Discusses three methods of correlating bond orders and bond lengths in unsaturated hydrocarbons: the Pauling theory, the Huckel molecular orbital technique, and self-consistent-field techniques. (MLH)

  5. Multiple path length dual polarization interferometry.

    PubMed

    Coffey, Paul D; Swann, Marcus J; Waigh, Thomas A; Schedin, Fred; Lu, Jian R

    2009-06-22

    An optical sensor for quantitative analysis of ultrathin films and adsorbed layers is described. Quantification of both layer thickness and refractive index (density) can be made for in situ and ex-situ coated films. With the use of two polarizations, in situ measurements are made via one path length in a Young's interferometer arrangement while ex-situ measurements use multiple path lengths. The multiple path length Young's interferometer arrangement is embodied in a solid state waveguide configuration called the multiple path length dual polarization interferometer (MPL-DPI). The technique is demonstrated with ultrathin layers of poly(methylmethacrylate) and human serum albumin.

  6. Hematological responses after inhaling ²³⁸PuO₂: An extrapolation from beagle dogs to humans

    SciTech Connect

    Scott, B.R.; Muggenburg, B.A.; Welsh, C.A.; Angerstein, D.A.

    1994-11-01

    The alpha emitter plutonium-238 (²³⁸Pu), which is produced in uranium-fueled, light-water reactors, is used as a thermoelectric power source for space applications. Inhalation of a mixed oxide form of Pu is the most likely mode of exposure of workers and the general public. Occupational exposures to ²³⁸PuO₂ have occurred in association with the fabrication of radioisotope thermoelectric generators. Organs and tissue at risk for deterministic and stochastic effects of ²³⁸Pu-alpha irradiation include the lung, liver, skeleton, and lymphatic tissue. Little has been reported about the effects of inhaled ²³⁸PuO₂ on peripheral blood cell counts in humans. The purpose of this study was to investigate hematological responses after a single inhalation exposure of Beagle dogs to alpha-emitting ²³⁸PuO₂ particles and to extrapolate results to humans.

  7. The application of metal artifact reduction (MAR) in CT scans for radiation oncology by monoenergetic extrapolation with a DECT scanner.

    PubMed

    Schwahofer, Andrea; Bär, Esther; Kuchenbecker, Stefan; Grossmann, J Günter; Kachelrieß, Marc; Sterzing, Florian

    2015-12-01

    Metal artifacts in computed tomography (CT) images are one of the main problems in radiation oncology as they introduce uncertainties to target and organ at risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual energy CT (DECT) dataset. In a phantom study the CT artifacts caused by metals with different densities: aluminum (ρ_Al = 2.7 g/cm³), titanium (ρ_Ti = 4.5 g/cm³), steel (ρ_steel = 7.9 g/cm³) and tungsten (ρ_W = 19.3 g/cm³) have been investigated. Data were collected using a clinical dual source dual energy CT (DECT) scanner (Siemens Sector Healthcare, Forchheim, Germany) with tube voltages of 100 kV and 140 kV(Sn). For each tube voltage the data set in a given volume was reconstructed. Based on these two data sets a voxel by voxel linear combination was performed to obtain the monoenergetic data sets. The results were evaluated regarding the optical properties of the images as well as the CT values (HU) and the dosimetric consequences in computed treatment plans. A data set without metal substitute served as the reference. Also, a head and neck patient with dental fillings (amalgam ρ = 10 g/cm³) was scanned with a single energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the densities the more distinctive are the artifacts. For metals with higher densities such as steel or tungsten, no artifact reduction has been achieved. Likewise in the CT values, no improvement by use of the monoenergetic extrapolation can be detected. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 keV. However, the dose uncertainty remains of the order of 10

  9. Measured and modeled toxicokinetics in cultured fish cells and application to in vitro-in vivo toxicity extrapolation.

    PubMed

    Stadnicka-Michalak, Julita; Tanneberger, Katrin; Schirmer, Kristin; Ashauer, Roman

    2014-01-01

    Effect concentrations in the toxicity assessment of chemicals with fish and fish cells are generally based on external exposure concentrations. External concentrations as dose metrics may, however, hamper interpretation and extrapolation of toxicological effects because it is the internal concentration that gives rise to the biological effective dose. Thus, we need to understand the relationship between the external and internal concentrations of chemicals. The objectives of this study were to: (i) elucidate the time-course of the concentration of chemicals with a wide range of physicochemical properties in the compartments of an in vitro test system, (ii) derive a predictive model for toxicokinetics in the in vitro test system, (iii) test the hypothesis that internal effect concentrations in fish (in vivo) and fish cell lines (in vitro) correlate, and (iv) develop a quantitative in vitro to in vivo toxicity extrapolation method for fish acute toxicity. To achieve these goals, time-dependent amounts of organic chemicals were measured in medium, cells (RTgill-W1) and the plastic of exposure wells. Then, the relation between uptake, elimination rate constants, and log KOW was investigated for cells in order to develop a toxicokinetic model. This model was used to predict internal effect concentrations in cells, which were compared with internal effect concentrations in fish gills predicted by a Physiologically Based Toxicokinetic model. Our model could predict concentrations of non-volatile organic chemicals with log KOW between 0.5 and 7 in cells. The correlation of the log ratio of internal effect concentrations in fish gills and the fish gill cell line with the log KOW was significant (r > 0.85, p = 0.0008, F-test). This ratio can be predicted from the log KOW of the chemical (77% of variance explained), comprising a promising model to predict lethal effects on fish based on in vitro data. PMID:24647349

  10. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro.

    PubMed

    Manwaring, John; Rothe, Helga; Obringer, Cindy; Foltz, David J; Baker, Timothy R; Troutman, John A; Hewitt, Nicola J; Goebel, Carsten

    2015-09-01

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as it offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis-Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte Km and Vmax values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. PMID:26028483
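
    The clearance scaling step described above follows the usual well-stirred liver model; the sketch below walks through it with entirely hypothetical parameter values (they are not the study's measurements) to show how the in vitro Vmax/Km, liver scaling factors, blood flow and unbound fraction combine into an AUC and a conservative Cmax.

```python
# Hypothetical in vitro inputs for an aromatic amine hair dye.
vmax = 420.0                  # pmol/min per million hepatocytes
km = 35.0                     # uM
hepatocellularity = 120e6     # cells per g liver
liver_mass = 1800.0           # g
q_h = 90.0                    # hepatic blood flow, L/h
fu_blood = 0.4                # fraction unbound in blood

# Scale the in vitro intrinsic clearance (Vmax/Km, in uL/min per million cells)
# to the whole liver, in L/h.
clint_ul_min_per_million = vmax / km
clint_l_h = (clint_ul_min_per_million * 1e-6 * 60
             * (hepatocellularity / 1e6) * liver_mass)

# Well-stirred liver model for hepatic clearance.
cl_h = q_h * fu_blood * clint_l_h / (q_h + fu_blood * clint_l_h)

# Systemic exposure from the dermally absorbed (internal) dose.
internal_dose_mg = 1.5        # amount predicted to pass the skin unchanged, hypothetical
vd = 70.0                     # volume of distribution, L, hypothetical
auc_mg_h_per_l = internal_dose_mg / cl_h
cmax_mg_per_l = internal_dose_mg / vd   # conservative overestimate, as in the study

print(f"hepatic clearance ~ {cl_h:.1f} L/h")
print(f"AUC ~ {auc_mg_h_per_l * 1000:.1f} ug*h/L, Cmax (upper bound) ~ {cmax_mg_per_l * 1000:.1f} ug/L")
```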

  11. Upper bounds for convergence rates of vector extrapolation methods on linear systems with initial iterations. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sidi, Avram; Shapira, Yair

    1992-01-01

    The application of the minimal polynomial extrapolation (MPE) and the reduced rank extrapolation (RRE) to a vector sequence obtained by the linear iterative technique x_{j+1} = A x_j + b, j = 1, 2, ..., is considered. Both methods produce a two-dimensional array of approximations s_{n,k} to the solution of the system (I - A)x = b. Here, s_{n,k} is obtained from the vectors x_j, n ≤ j ≤ n + k + 1. It was observed in an earlier publication by the first author that the sequence s_{n,k}, k = 1, 2, ..., for n > 0, but fixed, possesses better convergence properties than the sequence s_{0,k}, k = 1, 2, .... A detailed theoretical explanation for this phenomenon is provided in the present work. This explanation is heavily based on approximations by incomplete polynomials. It is demonstrated by numerical examples when the matrix A is sparse that cycling with s_{n,k} for n > 0, but fixed, produces better convergence rates and costs less computationally than cycling with s_{0,k}. It is also illustrated numerically with a convection-diffusion problem that the former may produce excellent results where the latter may fail completely. As has been shown in an earlier publication, the results produced by s_{0,k} are identical to the corresponding results obtained by applying the Arnoldi method or generalized minimal residual scheme (GMRES) to the system (I - A)x = b.
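
    A minimal implementation of MPE applied to a small linear fixed-point iteration is sketched below; it is a generic textbook-style version (least-squares solution for the coefficients of the iterate differences), not the thesis code, and it also illustrates the comparison between s_{0,k} and s_{n,k} with n > 0.

```python
import numpy as np

def mpe(x_seq):
    """Minimal polynomial extrapolation from the vector sequence x_n, ..., x_{n+k+1}."""
    x = np.asarray(x_seq, dtype=float)           # shape (k+2, dim)
    u = np.diff(x, axis=0)                       # first differences u_0, ..., u_k
    # Solve U_{k-1} c ~ -u_k in the least-squares sense, then set c_k = 1.
    c, *_ = np.linalg.lstsq(u[:-1].T, -u[-1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()
    return gamma @ x[:-1]                        # weighted combination of x_n ... x_{n+k}

# Small linear iteration x_{j+1} = A x_j + b with spectral radius below 1.
rng = np.random.default_rng(3)
A = 0.9 * rng.random((20, 20)) / 20
b = rng.random(20)
exact = np.linalg.solve(np.eye(20) - A, b)

# Generate iterates; compare extrapolation from the start (n = 0) with extrapolation
# applied after a few initial iterations (n > 0).
iterates = [np.zeros(20)]
for _ in range(12):
    iterates.append(A @ iterates[-1] + b)

s_0k = mpe(iterates[0:8])     # uses x_0 ... x_7  (n = 0, k = 6)
s_nk = mpe(iterates[4:12])    # uses x_4 ... x_11 (n = 4, k = 6)
print("error of plain iterate x_11:", np.linalg.norm(iterates[11] - exact))
print("error of MPE with n = 0   :", np.linalg.norm(s_0k - exact))
print("error of MPE with n = 4   :", np.linalg.norm(s_nk - exact))
```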

  12. Quantitative Cross-Species Extrapolation between Humans and Fish: The Case of the Anti-Depressant Fluoxetine

    PubMed Central

    Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.

    2014-01-01

    Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of measured internal dose response effect of a pharmaceutical in fish, hence validating the Read-Across hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the

  13. Monitoring volcano precursory activity with the materials failure approach, using rates of cumulative seismic coda length

    SciTech Connect

    Cornelius, R.R.; Voight, B. . Dept. of Geosciences)

    1992-01-01

    The proportionality between the energy, E, of an elastic wave and the square of its amplitude led to the usage of "Benioff diagrams" for purposes of volcano monitoring. These diagrams show the time-integrated amplitude as √E versus time. The authors propose to use accelerating cumulative coda length directly in volcano monitoring. This surrogate measure of "energy" is used for practical reasons, as it eliminates the intermediate step required for energy calculations with regional and instrument-specific constants. Rates of cumulative coda (s/day) may be used for the "materials failure approach" to eruption prediction, which fits data according to an empirical rate-acceleration relationship. The method allows numerical or graphical rate extrapolation towards the expected failure rate; eruption windows may be established. Rate series derived from either cumulative amplitude, cumulative coda, or calculated √E can be analyzed by the materials failure approach equally well; neither series is favored by this method because of similar characteristics. They suggest rate interpolation from the time-integrated data over constant coda-increments instead of over constant time-increments. This results in an increasingly higher frequency of rate data towards the end of an accelerating time series. The end-weighted rate calculation emphasizes the latest precursory developments while it smooths noise at lower rates. Adjusting the applied constant coda-increment for rate calculations as a function of total encountered coda is a technique for an automated and sequential update of the extrapolated failure time.
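
    For the common case in which the rate acceleration follows the materials-failure relation with exponent alpha = 2, the inverse rate decays linearly in time and the failure time can be read off by straight-line extrapolation to zero; the sketch below illustrates this on a synthetic cumulative-coda rate series and is only a schematic version of the approach.

```python
import numpy as np

# Synthetic precursory series: rate of cumulative seismic coda (s/day) accelerating
# hyperbolically toward a failure (eruption) time t_f, the alpha = 2 case of the
# materials failure relation.
t_f_true = 100.0
t = np.linspace(40.0, 90.0, 26)
noise = 1.0 + 0.05 * np.random.default_rng(4).standard_normal(t.size)
rate = 50.0 / (t_f_true - t) * noise

# For alpha = 2 the inverse rate decays linearly with time, so a straight-line fit
# extrapolated to 1/rate = 0 gives the predicted failure time.
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_f_predicted = -intercept / slope
print(f"predicted failure time: day {t_f_predicted:.1f} (true value: day {t_f_true})")
```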

  14. 7 CFR 29.6024 - Length.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Length. 29.6024 Section 29.6024 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Definitions § 29.6024 Length. The linear measurement of cured tobacco leaves from...

  15. Food-chain length and adaptive foraging.

    PubMed

    Kondoh, Michio; Ninomiya, Kunihiko

    2009-09-01

    Food-chain length, the number of feeding links from the basal species to the top predator, is a key characteristic of biological communities. However, the determinants of food-chain length still remain controversial. While classical theory predicts that food-chain length should increase with increasing resource availability, empirical supports of this prediction are limited to those from simple, artificial microcosms. A positive resource availability-chain length relationship has seldom been observed in natural ecosystems. Here, using a theoretical model, we show that those correlations, or no relationships, may be explained by considering the dynamic food-web reconstruction induced by predator's adaptive foraging. More specifically, with foraging adaptation, the food-chain length becomes relatively invariant, or even decreases with increasing resource availability, in contrast to a non-adaptive counterpart where chain length increases with increasing resource availability; and that maximum chain length more sharply decreases with resource availability either when species richness is higher or potential link number is larger. The interactive effects of resource availability, adaptability and community complexity may explain the contradictory effects of resource availability in simple microcosms and larger ecosystems. The model also explains the recently reported positive effect of habitat size on food-chain length as a result of increased species richness and/or decreased connectance owing to interspecific spatial segregation.

  16. K⁺K⁺ scattering length from lattice QCD

    SciTech Connect

    Beane, Silas R.; Torok, Aaron; Luu, Thomas C.; Orginos, Kostas; Parreno, Assumpta; Savage, Martin J.; Walker-Loud, Andre

    2008-05-01

    The K⁺K⁺ scattering length is calculated in fully-dynamical lattice QCD with domain-wall valence quarks on the MILC asqtad-improved gauge configurations with rooted-staggered sea quarks. Three-flavor mixed-action chiral perturbation theory at next-to-leading order, which includes the leading effects of the finite lattice spacing, is used to extrapolate the results of the lattice calculation to the physical value of m_{K⁺}/f_{K⁺}. We find m_{K⁺} a_{K⁺K⁺} = −0.352 ± 0.016, where the statistical and systematic errors have been combined in quadrature.

  17. Meson-Baryon Scattering Lengths from Mixed-Action Lattice QCD

    SciTech Connect

    William Detmold, Konstantinos Orginos, Aaron Torok, Silas R Beane, Thomas C Luu, Assumpta Parreno, Martin Savage, Andre Walker-Loud

    2010-04-01

    The $\pi^+\Sigma^+$, $\pi^+\Xi^0$, $K^+p$, $K^+n$, and $K^0\Xi^0$ scattering lengths are calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks is used to perform the chiral extrapolations. We find no convergence for the kaon-baryon processes in the three-flavor chiral expansion. Using the two-flavor chiral expansion, we find $a_{\pi^+\Sigma^+} = -0.197 \pm 0.017$ fm and $a_{\pi^+\Xi^0} = -0.098 \pm 0.017$ fm, where the comprehensive error includes statistical and systematic uncertainties.

  18. Intron Length Coevolution across Mammalian Genomes

    PubMed Central

    Keane, Peter A.; Seoighe, Cathal

    2016-01-01

    Although they do not contribute directly to the proteome, introns frequently contain regulatory elements and can extend the protein coding potential of the genome through alternative splicing. For some genes, the contribution of introns to the time required for transcription can also be functionally significant. We have previously shown that intron length in genes associated with developmental patterning is often highly conserved. In general, sets of genes that require precise coordination in the timing of their expression may be sensitive to changes in transcript length. A prediction of this hypothesis is that evolutionary changes in intron length, when they occur, may be correlated between sets of coordinately expressed genes. To test this hypothesis, we analyzed intron length coevolution in alignments from nine eutherian mammals. Overall, genes that belong to the same protein complex or that are coexpressed were significantly more likely to show evidence of intron length coevolution than matched, randomly sampled genes. Individually, protein complexes involved in the cell cycle showed the strongest evidence of coevolution of intron lengths and clusters of coexpressed genes enriched for cell cycle genes also showed significant evidence of intron length coevolution. Our results reveal a novel aspect of gene coevolution and provide a means to identify genes, protein complexes and biological processes that may be particularly sensitive to changes in transcriptional dynamics. PMID:27550903

  19. Intron Length Coevolution across Mammalian Genomes.

    PubMed

    Keane, Peter A; Seoighe, Cathal

    2016-10-01

    Although they do not contribute directly to the proteome, introns frequently contain regulatory elements and can extend the protein coding potential of the genome through alternative splicing. For some genes, the contribution of introns to the time required for transcription can also be functionally significant. We have previously shown that intron length in genes associated with developmental patterning is often highly conserved. In general, sets of genes that require precise coordination in the timing of their expression may be sensitive to changes in transcript length. A prediction of this hypothesis is that evolutionary changes in intron length, when they occur, may be correlated between sets of coordinately expressed genes. To test this hypothesis, we analyzed intron length coevolution in alignments from nine eutherian mammals. Overall, genes that belong to the same protein complex or that are coexpressed were significantly more likely to show evidence of intron length coevolution than matched, randomly sampled genes. Individually, protein complexes involved in the cell cycle showed the strongest evidence of coevolution of intron lengths and clusters of coexpressed genes enriched for cell cycle genes also showed significant evidence of intron length coevolution. Our results reveal a novel aspect of gene coevolution and provide a means to identify genes, protein complexes and biological processes that may be particularly sensitive to changes in transcriptional dynamics. PMID:27550903

  20. Zero-point length from string fluctuations

    NASA Astrophysics Data System (ADS)

    Fontanini, Michele; Spallucci, Euro; Padmanabhan, T.

    2006-02-01

    One of the leading candidates for quantum gravity, viz. string theory, has the following features incorporated in it. (i) The full spacetime is higher-dimensional, with (possibly) compact extra dimensions; (ii) there is a natural minimal length below which the concept of continuum spacetime needs to be modified by some deeper concept. On the other hand, the existence of a minimal length (zero-point length) in four-dimensional spacetime, with obvious implications as a UV regulator, has often been conjectured as a natural aftermath of any correct quantum theory of gravity. We show that one can incorporate the apparently unrelated pieces of information (zero-point length, extra dimensions, string T-duality) in a consistent framework. This is done in terms of a modified Kaluza-Klein theory that interpolates between (high-energy) string theory and (low-energy) quantum field theory. In this model, the zero-point length in four dimensions is a "virtual memory" of the length scale of compact extra dimensions. Such a scale turns out to be determined by T-duality inherited from the underlying fundamental string theory. From a low energy perspective, short distance infinities are cut off by a minimal length which is proportional to the square root of the string slope, i.e., √α′. Thus, we bridge the gap between the string theory domain and the low energy arena of point-particle quantum field theory.

  1. Minimization of dependency length in written English.

    PubMed

    Temperley, David

    2007-11-01

    Gibson's Dependency Locality Theory (DLT) [Gibson, E. 1998. Linguistic complexity: locality of syntactic dependencies. Cognition, 68, 1-76; Gibson, E. 2000. The dependency locality theory: A distance-based theory of linguistic complexity. In A. Marantz, Y. Miyashita, & W. O'Neil (Eds.), Image, Language, Brain (pp. 95-126). Cambridge, MA: MIT Press.] proposes that the processing complexity of a sentence is related to the length of its syntactic dependencies: longer dependencies are more difficult to process. The DLT is supported by a variety of phenomena in language comprehension. This raises the question: Does language production reflect a preference for shorter dependencies as well? I examine this question in a corpus study of written English, using the Wall Street Journal portion of the Penn Treebank. The DLT makes a number of predictions regarding the length of constituents in different contexts; these predictions were tested in a series of statistical tests. A number of findings support the theory: the greater length of subject noun phrases in inverted versus uninverted quotation constructions, the greater length of direct-object versus subject NPs, the greater length of postmodifying versus premodifying adverbial clauses, the greater length of relative-clause subjects within direct-object NPs versus subject NPs, the tendency towards "short-long" ordering of postmodifying adjuncts and coordinated conjuncts, and the shorter length of subject NPs (but not direct-object NPs) in clauses with premodifying adjuncts versus those without.

  2. Contact transfer length investigation of a 2D nanoparticle network by scanning probe microscopy.

    PubMed

    Ruiz-Vargas, Carlos S; Reissner, Patrick A; Wagner, Tino; Wyss, Roman M; Park, Hyung Gyu; Stemmer, Andreas

    2015-09-11

    Nanoparticle network devices find growing application in sensing and electronics. One recurring challenge in the design and fabrication of this class of devices is ensuring a stable interface via robust yet unobstructive electrodes. A figure of merit which dictates the minimum electrode overlap required for optimal charge injection into the network is the contact transfer length. However, we find that traditional contact characterization using the transmission line model, an indirect method which requires extrapolation, is insufficient for network devices. Instead, we apply Kelvin probe force microscopy to characterize the contact resistance by imaging the surface potential with nanometer resolution. We then use scanning probe lithography to directly investigate the contact transfer length. We have determined the transfer length in graphene contacted devices to be 200-400 nm, thus apt for further device reduction which is often necessary for on-site sensing applications. Simulations from a two-dimensional resistor model support our observations and are expected to be an important tool for further optimizing the design of nanoparticle-based devices. PMID:26291069
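
    For contrast with the direct Kelvin-probe measurement, the traditional transmission-line-model extrapolation mentioned above can be sketched in a few lines: the total resistance is fitted against contact separation and the transfer length is read from the negative-axis intercept of the fit; the resistance values below are hypothetical.

```python
import numpy as np

# Transmission line model (TLM): R_total(L) = R_sheet * L / W + 2 * R_c for a device of
# width W, with the extrapolated x-axis intercept of the linear fit at L = -2 * L_T.
W = 10e-6                                   # device width, m
gaps = np.array([2, 4, 8, 16, 32]) * 1e-6   # contact separations, m
r_total = np.array([3.1e3, 4.9e3, 8.8e3, 16.5e3, 32.4e3])   # hypothetical resistances, ohm

slope, intercept = np.polyfit(gaps, r_total, 1)
r_sheet = slope * W                          # sheet resistance, ohm per square
r_contact = intercept / 2.0                  # contact resistance per contact, ohm
transfer_length = intercept / (2.0 * slope)  # L_T, m

print(f"sheet resistance   ~ {r_sheet:.0f} ohm/sq")
print(f"contact resistance ~ {r_contact:.0f} ohm per contact")
print(f"transfer length    ~ {transfer_length * 1e9:.0f} nm")
```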

  3. Automatic Control Of Length Of Welding Arc

    NASA Technical Reports Server (NTRS)

    Iceland, William F.

    1991-01-01

    Nonlinear relationships among current, voltage, and length stored in electronic memory. Conceptual microprocessor-based control subsystem maintains constant length of welding arc in gas/tungsten arc-welding system, even when welding current varied. Uses feedback of current and voltage from welding arc. Directs motor to set position of torch according to previously measured relationships among current, voltage, and length of arc. Signal paths marked "calibration" or "welding" used during those processes only. Other signal paths used during both processes. Control subsystem added to existing manual or automatic welding system equipped with automatic voltage control.

  4. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  5. Off-shell extrapolation of Regge-model NN scattering amplitudes describing final state interactions in 2H(e,e'p)

    DOE PAGES

    Ford, William Paul; van Orden, Wally

    2013-11-25

    In this work, an off-shell extrapolation is proposed for the Regge-model NN amplitudes presented in a paper by Ford and Van Orden [ Phys. Rev. C 87 014004 (2013)] and in an eprint by Ford (arXiv:1310.0871 [nucl-th]). The prescriptions for extrapolating these amplitudes for one nucleon off-shell in the initial state are presented. Application of these amplitudes to calculations of deuteron electrodisintegration are presented and compared to the limited available precision data in the kinematical region covered by the Regge model.

  6. Off-shell extrapolation of Regge-model NN scattering amplitudes describing final state interactions in 2H(e,e'p)

    SciTech Connect

    Ford, William Paul; van Orden, Wally

    2013-11-25

    In this work, an off-shell extrapolation is proposed for the Regge-model NN amplitudes presented in a paper by Ford and Van Orden [ Phys. Rev. C 87 014004 (2013)] and in an eprint by Ford (arXiv:1310.0871 [nucl-th]). The prescriptions for extrapolating these amplitudes for one nucleon off-shell in the initial state are presented. Application of these amplitudes to calculations of deuteron electrodisintegration are presented and compared to the limited available precision data in the kinematical region covered by the Regge model.

  7. J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.

    1991-01-01

    Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet. The

  8. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro

    SciTech Connect

    Manwaring, John; Rothe, Helga; Obringer, Cindy; Foltz, David J.; Baker, Timothy R.; Troutman, John A.; Hewitt, Nicola J.; Goebel, Carsten

    2015-09-01

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as it offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte Km and Vmax values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human
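
    The clearance scaling described in this abstract follows standard in vitro-to-in vivo extrapolation arithmetic. The sketch below illustrates it with the well-stirred liver model; all parameter values are illustrative placeholders rather than data from the study, and the function names are ours.

```python
# Illustrative in vitro-to-in vivo clearance scaling (well-stirred liver model).
# Parameter values are placeholders, not data from the study cited above.

def scaled_intrinsic_clearance(vmax, km, hepatocellularity, liver_mass_g):
    """Scale hepatocyte Vmax/Km (uL/min per 10^6 cells) to whole-liver CLint in L/h."""
    clint_per_million_cells = vmax / km                     # uL/min per 10^6 cells
    clint_ul_per_min = clint_per_million_cells * hepatocellularity * liver_mass_g
    return clint_ul_per_min * 60.0 / 1.0e6                  # convert uL/min to L/h

def hepatic_clearance(clint, liver_blood_flow, fu):
    """Well-stirred model: CLh = Qh * fu * CLint / (Qh + fu * CLint)."""
    return liver_blood_flow * fu * clint / (liver_blood_flow + fu * clint)

clint = scaled_intrinsic_clearance(vmax=100.0, km=20.0,
                                   hepatocellularity=120.0,   # 10^6 cells per g liver
                                   liver_mass_g=1800.0)
clh = hepatic_clearance(clint, liver_blood_flow=90.0, fu=0.5)  # L/h
auc = 1.0 / clh   # systemic exposure (mg*h/L) for a 1 mg internal (absorbed) dose
print(f"CLint = {clint:.1f} L/h, CLh = {clh:.1f} L/h, AUC = {auc:.3f} mg*h/L")
```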

  9. Extrapolations and prognostications

    NASA Astrophysics Data System (ADS)

    Swartz, Clifford E.

    2000-01-01

    It's that time of millennium again when we look to the past and prophesy about the future. My own family memories don't go much past the 20th century, although my wife is a direct descendant of a woman convicted of witchcraft in Salem. Not recently, of course. A century ago, my mother was going to school in a one-room schoolhouse with student desks in rows and a blackboard up front for the teacher. In high school she studied physics but there were no student laboratory exercises and only a few demonstrations by the teacher. She didn't like it.

  10. Method of continuously determining crack length

    NASA Technical Reports Server (NTRS)

    Prabhakaran, Ramamurthy (Inventor); Lopez, Osvaldo F. (Inventor)

    1993-01-01

    The determination of crack lengths in an accurate and straightforward manner is very useful in studying and preventing load-created flaws and cracks. A crack length sensor according to the present invention is fabricated in a rectangular or other geometrical form from a conductive powder impregnated polymer material. The long edges of the sensor are silver painted on both sides and the sensor is then bonded to a test specimen via an adhesive having sufficient thickness to also serve as an insulator. A lead wire is connected to each of the two outwardly facing silver painted edges. The resistance across the sensor changes as a function of the crack length in the specimen and sensor. The novel aspect of the present invention includes the use of relatively uncomplicated sensors and instrumentation to effectively measure the length of generated cracks.

  11. Mixing lengths scaling in a gravity flow

    SciTech Connect

    Ecke, Robert E; Rivera, Micheal; Chen, Jun; Ecke, Robert E

    2009-01-01

    We present an experimental study of the mixing processes in a gravity current. The turbulent transport of momentum and buoyancy can be described in a very direct and compact form by a Prandtl mixing length model [1]: the turbulent vertical fluxes of momentum and buoyancy are found to scale quadratically with the vertical mean gradients of velocity and density. The scaling coefficient is the square of the mixing length, approximately constant over the mixing zone of the stratified shear layer. We show in this paper how, in different flow configurations, this length can be related to the shear length of the flow √(ε/(∂_z u)³).
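
    For readers unfamiliar with the closure being described, the quadratic-gradient scaling is the classic Prandtl mixing-length form. A compact statement, with notation assumed here rather than copied from the paper, is:

```latex
% Prandtl mixing-length closure implied by the abstract (notation assumed):
% turbulent fluxes scale with the square of the mean gradients via l_m^2
\overline{u'w'} \;=\; -\,l_m^{2}\,\Bigl|\frac{\partial \bar{u}}{\partial z}\Bigr|\,
                      \frac{\partial \bar{u}}{\partial z},
\qquad
\overline{w'\rho'} \;=\; -\,l_m^{2}\,\Bigl|\frac{\partial \bar{u}}{\partial z}\Bigr|\,
                      \frac{\partial \bar{\rho}}{\partial z},
\qquad
L_s \;=\; \sqrt{\frac{\varepsilon}{(\partial_z \bar{u})^{3}}} .
```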

  12. Carbon Nanotubes: Measuring Dispersion and Length

    SciTech Connect

    Fagan, Jeffrey A.; Bauer, Barry J.; Hobbie, Erik K.; Becker, Matthew L.; Hight-Walker, Angela; Simpson, Jeffrey R.; Chun, Jaehun; Obrzut, Jan; Bajpai, Vardhan; Phelan, Fred R.; Simien, Daneesh; Yeon Huh, Ji; Migler, Kalman B.

    2011-03-01

    Advanced technological uses of single-wall carbon nanotubes (SWCNTs) rely on the production of single length and chirality populations that are currently only available through liquid phase post processing. The foundation of all of these processing steps is the attainment of individualized nanotube dispersion in solution; an understanding of the colloidal properties of the dispersed SWCNTs can then be used to design appropriate conditions for separations. In many instances nanotube size, particularly length, is especially active in determining the achievable properties from a given population, and thus there is a critical need for measurement technologies for both length distribution and effective separation techniques. In this Progress Report, we document the current state of the art for measuring dispersion and length populations, including separations, and use examples to demonstrate the desirability of addressing these parameters.

  13. Electron Effective-Attenuation-Length Database

    National Institute of Standards and Technology Data Gateway

    SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge)   This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-electron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).

  14. Segment lengths influence hill walking strategies.

    PubMed

    Sheehan, Riley C; Gottschall, Jinger S

    2014-08-22

    Segment lengths are known to influence walking kinematics and muscle activity patterns. During level walking at the same speed, taller individuals take longer, slower strides than shorter individuals. Based on this, we sought to determine if segment lengths also influenced hill walking strategies. We hypothesized that individuals with longer segments would display more joint flexion going uphill and more extension going downhill as well as greater lateral gastrocnemius and vastus lateralis activity in both directions. Twenty young adults of varying heights (below 155 cm to above 188 cm) walked at 1.25 m/s on a level treadmill as well as 6° and 12° up and downhill slopes while we collected kinematic and muscle activity data. Subsequently, we ran linear regressions for each of the variables with height, leg, thigh, and shank length. Despite our population having twice the anthropometric variability, the level and hill walking patterns matched closely with previous studies. While there were significant differences between level and hill walking, there were few hill walking variables that were correlated with segment length. In support of our hypothesis, taller individuals had greater knee and ankle flexion during uphill walking. However, the majority of the correlations were between tibialis anterior and lateral gastrocnemius activities and shank length. Contrary to our hypothesis, relative step length and muscle activity decreased with segment length, specifically shank length. In summary, it appears that individuals with shorter segments require greater propulsion and toe clearance during uphill walking as well as greater braking and stability during downhill walking. PMID:24968942

  15. Cold bose gases with large scattering lengths.

    PubMed

    Cowell, S; Heiselberg, H; Mazets, I E; Morales, J; Pandharipande, V R; Pethick, C J

    2002-05-27

    We calculate the energy and condensate fraction for a dense system of bosons interacting through an attractive short range interaction with positive s-wave scattering length a. At high densities n > a^(-3), the energy per particle, chemical potential, and square of the sound speed are independent of the scattering length and proportional to n^(2/3), as in Fermi systems. The condensate is quenched at densities na^3 ≈ 1. PMID:12059466

  16. Fragment Length of Circulating Tumor DNA

    PubMed Central

    Underhill, Hunter R.; Kitzman, Jacob O.; Hellwig, Sabine; Welker, Noah C.; Daza, Riza; Gligorich, Keith M.; Rostomily, Robert C.; Shendure, Jay

    2016-01-01

    Malignant tumors shed DNA into the circulation. The transient half-life of circulating tumor DNA (ctDNA) may afford the opportunity to diagnose, monitor recurrence, and evaluate response to therapy solely through a non-invasive blood draw. However, detecting ctDNA against the normally occurring background of cell-free DNA derived from healthy cells has proven challenging, particularly in non-metastatic solid tumors. In this study, distinct differences in fragment length size between ctDNAs and normal cell-free DNA are defined. Human ctDNA in rat plasma derived from human glioblastoma multiforme stem-like cells in the rat brain and human hepatocellular carcinoma in the rat flank were found to have a shorter principal fragment length than the background rat cell-free DNA (134–144 bp vs. 167 bp, respectively). Subsequently, a similar shift in the fragment length of ctDNA in humans with melanoma and lung cancer was identified compared to healthy controls. Comparison of fragment lengths from cell-free DNA between a melanoma patient and healthy controls found that the BRAF V600E mutant allele occurred more commonly at a shorter fragment length than the fragment length of the wild-type allele (132–145 bp vs. 165 bp, respectively). Moreover, size-selecting for shorter cell-free DNA fragment lengths substantially increased the EGFR T790M mutant allele frequency in human lung cancer. These findings provide compelling evidence that experimental or bioinformatic isolation of a specific subset of fragment lengths from cell-free DNA may improve detection of ctDNA. PMID:27428049

  17. Process for fabricating continuous lengths of superconductor

    DOEpatents

    Kroeger, Donald M.; List, III, Frederick A.

    1998-01-01

    A process for manufacturing a superconductor. The process is accomplished by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon, overlaying a continuous length of a second substrate ribbon on said first substrate ribbon, and applying sufficient pressure to form a bound layered superconductor precursor between said first substrate ribbon and said second substrate ribbon. The layered superconductor precursor is then heat treated to form a superconductor layer.

  18. Dynamical Length-Regulation of Microtubules

    NASA Astrophysics Data System (ADS)

    Melbinger, Anna; Reese, Louis; Frey, Erwin

    2012-02-01

    Microtubules (MTs) are vital constituents of the cytoskeleton. These stiff filaments are not only needed for mechanical support. They also fulfill highly dynamic tasks. For instance MTs build the mitotic spindle, which pulls the doubled set of chromosomes apart during mitosis. Hence, a well-regulated and adjustable MT length is essential for cell division. Extending a recently introduced model [1], we here study length-regulation of MTs. Thereby we account for both spontaneous polymerization and depolymerization triggered by motor proteins. In contrast to the polymerization rate, the effective depolymerization rate depends on the presence of molecular motors at the tip and thereby on crowding effects which in turn depend on the MT length. We show that these antagonistic effects result in a well-defined MT length. Stochastic simulations and analytic calculations reveal the exact regimes where regulation is feasible. Furthermore, the adjusted MT length and the ensuing strength of fluctuations are analyzed. Taken together, we make quantitative predictions which can be tested experimentally. These results should help to obtain deeper insights into the microscopic mechanisms underlying length-regulation. [1] L. Reese, A. Melbinger, E. Frey, Biophys. J., 101, 9, 2190 (2011)

  19. Extrapolation of IAPWS-IF97 data: The saturation pressure of H2O in the critical region

    NASA Astrophysics Data System (ADS)

    Ustyuzhanin, E. E.; Ochkov, V. F.; Shishakov, V. V.; Rykov, A. V.

    2015-11-01

    Some literature sources and web sites are analyzed in this report. These sources contain information about the thermophysical properties of H2O, including the vapor pressure Ps. The (Ps,T)-data take the form of the international standard tables known as "IAPWS-IF97 data". Our analysis shows that traditional databases represent (Ps,T)-data at t > 0.002, where t = (Tc - T)/Tc is a reduced temperature. It is an interesting task to extrapolate the IAPWS-IF97 data into the critical region and to obtain (Ps,T)-data at t < 0.002. We have considered several equations Ps(t) and found that previous models do not follow the power laws of the scaling theory (ST). A combined model (CM) is chosen as a form, F(t,D,B), to express the function ln(Ps/Pc) in the critical region, including t < 0.002; here D = (α, Pc, Tc,...) are critical characteristics and B are adjustable coefficients. The CM has a combined structure with scaling and regular parts. The power laws of ST are taken into account to elaborate F(t, D, B). The adjustable coefficients (B) are determined by fitting the CM to input (Ps,T)-points that belong to the IAPWS-IF97 data. Application results are obtained with the help of the CM in the critical region, including t < 0.002, together with values of the first and second derivatives of Ps(T). Several models Ps(T) are compared with the CM.
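
    The abstract describes, but does not spell out, the combined (regular plus scaling) structure of F(t, D, B). A generic form consistent with that description, with α the critical exponent from D and with the specific terms assumed here for illustration rather than taken from the paper, would be:

```latex
% Generic combined structure (assumed, for illustration only):
\ln\frac{P_s}{P_c} \;=\;
  \underbrace{\textstyle\sum_{i\ge 1} a_i\, t^{\,i}}_{\text{regular part}}
  \;+\;
  \underbrace{b\, t^{\,2-\alpha}}_{\text{scaling part}},
\qquad t = \frac{T_c - T}{T_c} .
```

    The scaling term is what reproduces the weak divergence of the second derivative of Ps(T) as t → 0 required by scaling theory, which a purely regular (polynomial) representation cannot capture.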

  20. A covariant extrapolation of the noncovariant two particle Wheeler-Feynman Hamiltonian from the Todorov equation and Dirac's constraint mechanics

    NASA Astrophysics Data System (ADS)

    Crater, Horace; Yang, Dujiu

    1991-09-01

    A semirelativistic expansion in powers of 1/c2 is canonically matched through order (1/c4) of the two-particle total Hamiltonian of Wheeler-Feynman vector and scalar electrodynamics to a similar expansion of the center of momentum (c.m.) total energy of two interacting particles obtained from covariant generalized mass shell constraints derived with the use of the classical Todorov equation and Dirac's Hamiltonian constraint mechanics. This determines through order 1/c4 the direct interaction used in the covariant Todorov constraint equation. We show that these interactions are momentum independent in spite of the extensive and complicated momentum dependence of the potential energy terms in the Wheeler-Feynman Hamiltonian. The invariant expressions for the relativistic reduced mass and energy of the fictitious particle of relative motion used in the Todorov equation are also dynamically determined through this order by this same procedure. The resultant covariant Todorov equation then not only reproduces the noncovariant Wheeler-Feynman dynamics through order 1/c4 but also implicitly provides a rather simple covariant extrapolation of it to all orders of 1/c2.

  1. Emissions of sulfur gases from marine and freshwater wetlands of the Florida Everglades: Rates and extrapolation using remote sensing

    NASA Technical Reports Server (NTRS)

    Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.

    1992-01-01

    Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short duration period in October using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions of greater than 500 nmol m^-2 h^-1 occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of approximately 60 nmol m^-2 h^-1 which were similar to fluxes from the black mangrove areas which dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted especially in the freshwater areas. Spectral data from a scene from the Landsat thematic mapper were used to map habitats in the Everglades. Six vegetation categories were delineated using geographical information system software and S gas emissions were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions to the area. The large area extent of the saw grass communities (42 percent) accounted for approximately 24 percent of the total S emissions.

  2. Enhanced Confinement Scenarios Without Large Edge Localized Modes in Tokamaks: Control, Performance, and Extrapolability Issues for ITER

    SciTech Connect

    Maingi, R

    2014-07-01

    Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. The two baseline strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R & D. In addition, recent progress in ELM-free regimes, namely Quiescent H-mode, I-mode, and Enhanced Pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.

  3. Emissions of sulfur gases from marine and freshwater wetlands of the Florida Everglades - Rates and extrapolation using remote sensing

    NASA Technical Reports Server (NTRS)

    Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.

    1993-01-01

    Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short duration period in October using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions of over 500 nmol/sq m/h occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of about 60 nmol/sq m/h, which were similar to fluxes from the black mangrove areas which dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted especially in the freshwater areas. Spectral data from a scene from the Landsat TM were used to map habitats in the Everglades. Six vegetation categories were delineated using geographical information system software and S gas emissions were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions to the area. The large area extent of the saw grass communities accounted for about 24 percent of the total S emissions.

  4. Enhanced confinement scenarios without large edge localized modes in tokamaks: control, performance, and extrapolability issues for ITER

    NASA Astrophysics Data System (ADS)

    Maingi, R.

    2014-11-01

    Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. Two strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R&D. In addition, recent progress in ELM-free regimes, namely quiescent H-mode, I-mode, and enhanced pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.

  5. Geographic bias of field observations of soil carbon stocks with tropical land-use changes precludes spatial extrapolation.

    PubMed

    Powers, Jennifer S; Corre, Marife D; Twine, Tracy E; Veldkamp, Edzo

    2011-04-12

    Accurately quantifying changes in soil carbon (C) stocks with land-use change is important for estimating the anthropogenic fluxes of greenhouse gases to the atmosphere and for implementing policies such as REDD (Reducing Emissions from Deforestation and Degradation) that provide financial incentives to reduce carbon dioxide fluxes from deforestation and land degradation. Despite hundreds of field studies and at least a dozen literature reviews, there is still considerable disagreement on the direction and magnitude of changes in soil C stocks with land-use change. We conducted a meta-analysis of studies that quantified changes in soil C stocks with land use in the tropics. Conversion from one land use to another caused significant increases or decreases in soil C stocks for 8 of the 14 transitions examined. For the three land-use transitions with sufficient observations, both the direction and magnitude of the change in soil C pools depended strongly on biophysical factors of mean annual precipitation and dominant soil clay mineralogy. When we compared the distribution of biophysical conditions of the field observations to the area-weighted distribution of those factors in the tropics as a whole or the tropical lands that have undergone conversion, we found that field observations are highly unrepresentative of most tropical landscapes. Because of this geographic bias we strongly caution against extrapolating average values of land-cover change effects on soil C stocks, such as those generated through meta-analysis and literature reviews, to regions that differ in biophysical conditions.

  6. Extrapolation of the dna fragment-size distribution after high-dose irradiation to predict effects at low doses

    NASA Technical Reports Server (NTRS)

    Ponomarev, A. L.; Cucinotta, F. A.; Sachs, R. K.; Brenner, D. J.; Peterson, L. E.

    2001-01-01

    The patterns of DSBs induced in the genome are different for sparsely and densely ionizing radiations: In the former case, the patterns are well described by a random-breakage model; in the latter, a more sophisticated tool is needed. We used a Monte Carlo algorithm with a random-walk geometry of chromatin, and a track structure defined by the radial distribution of energy deposition from an incident ion, to fit the PFGE data for fragment-size distribution after high-dose irradiation. These fits determined the unknown parameters of the model, enabling the extrapolation of data for high-dose irradiation to the low doses that are relevant for NASA space radiation research. The randomly-located-clusters formalism was used to speed the simulations. It was shown that only one adjustable parameter, Q, the track efficiency parameter, was necessary to predict DNA fragment sizes for wide ranges of doses. This parameter was determined for a variety of radiations and LETs and was used to predict the DSB patterns at the HPRT locus of the human X chromosome after low-dose irradiation. It was found that high-LET radiation would be more likely than low-LET radiation to induce additional DSBs within the HPRT gene if this gene already contained one DSB.

  7. Emissions of sulfur gases from marine and freshwater wetlands of the Florida Everglades: Rates and extrapolation using remote sensing

    NASA Astrophysics Data System (ADS)

    Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.

    1993-05-01

    Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short duration period in October using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions of >500 nmol m^-2 h^-1 occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of ˜60 nmol m^-2 h^-1 which were similar to fluxes from the black mangrove areas which dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted especially in the freshwater areas. Spectral data from a scene from the Landsat thematic mapper were used to map habitats in the Everglades. Six vegetation categories were delineated using geographical information system software and S gas emissions were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions to the area. The large area extent of the saw grass communities (42%) accounted for ˜24% of the total S emissions.

  8. Including higher energy data in the R-matrix extrapolation of 12C(α , γ) 16O

    NASA Astrophysics Data System (ADS)

    Deboer, R.; Uberseder, E.; Azuma, R. E.; Best, A.; Brune, C.; Goerres, J.; Sayre, D.; Smith, K.; Wiescher, M.

    2015-10-01

    The phenomenological R-matrix technique has proved to be very successful in describing the cross sections of interest to nuclear astrophysics. One of the key reactions is 12C(α,γ)16O, which has frequently been analyzed using R-matrix but usually over a limited energy range. This talk will present an analysis that, for the first time, extends above the proton and α1 separation energies, taking advantage of a large amount of additional data. The analysis uses the new publicly released JINA R-matrix code AZURE2. The traditional reaction channels of 12C(α,γ)16O, 12C(α,α0)12C, and 16N(βα)12C are included but are now accompanied by the higher energy reactions. By explicitly including higher energy levels, the uncertainty in the extrapolation of the cross section is significantly reduced. This is accomplished through more stringent constraints on interference combinations and background poles imposed by the additional higher-energy data, and by considering new information about subthreshold states from transfer reactions. The result is the most comprehensive R-matrix analysis of the 12C(α,γ)16O reaction to date. This research was supported in part by the ND CRC and funded by the NSF through Grant No. Phys-0758100, and JINA through Grant No. Phys-0822648.

  9. Application of the EXtrapolated Efficiency Method (EXEM) to infer the gamma-cascade detection efficiency in the actinide region

    NASA Astrophysics Data System (ADS)

    Ducasse, Q.; Jurado, B.; Mathieu, L.; Marini, P.; Morillon, B.; Aiche, M.; Tsekhanovich, I.

    2016-08-01

    The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the 238U(d,p)239U and 238U(3He,d)239Np reactions. We have performed Hauser-Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of 239Np below the neutron separation energy allowed us to validate the EXEM.

  10. Searching for inflationary B modes: can dust emission properties be extrapolated from 350 GHz to 150 GHz?

    NASA Astrophysics Data System (ADS)

    Tassis, Konstantinos; Pavlidou, Vasiliki

    2015-07-01

    Recent Planck results have shown that radiation from the cosmic microwave background passes through foregrounds in which aligned dust grains produce polarized dust emission, even in regions of the sky with the lowest level of dust emission. One of the most commonly used ways to remove the dust foreground is to extrapolate the polarized dust emission signal from frequencies where it dominates (e.g. ˜350 GHz) to frequencies commonly targeted by cosmic microwave background experiments (e.g. ˜150 GHz). In this Letter, we describe an interstellar medium effect that can lead to decorrelation of the dust emission polarization pattern between different frequencies due to multiple contributions along the line of sight. Using a simple 2-cloud model we show that there are two conditions under which this decorrelation can be large: (a) the ratio of polarized intensities between the two clouds changes between the two frequencies; (b) the magnetic fields between the two clouds contributing along a line of sight are significantly misaligned. In such cases, the 350 GHz polarized sky map is not predictive of that at 150 GHz. We propose a possible correction for this effect, using information from optopolarimetric surveys of dichroically absorbed starlight.

  11. Precision measurements of ionization and dissociation energies by extrapolation of Rydberg series: from H2 to larger molecules.

    PubMed

    Sprecher, D; Beyer, M; Merkt, F

    2013-01-01

    Recent experiments are reviewed which have led to the determination of the ionization and dissociation energies of molecular hydrogen with a precision of 0.0007 cm^-1 (8 mJ/mol or 20 MHz) using a procedure based on high-resolution spectroscopic measurements of high Rydberg states and the extrapolation of the Rydberg series to the ionization thresholds. Molecular hydrogen, with only two protons and two electrons, is the simplest molecule with which all aspects of a chemical bond, including electron correlation effects, can be studied. Highly precise values of its ionization and dissociation energies provide stringent tests of the precision of molecular quantum mechanics and of quantum-electrodynamics calculations in molecules. The comparison of experimental and theoretical values for these quantities enables one to quantify the contributions to a chemical bond that are neglected when making the Born-Oppenheimer approximation, i.e. adiabatic, nonadiabatic, relativistic, and radiative corrections. Ionization energies of a broad range of molecules can now be determined experimentally with high accuracy (i.e. about 0.01 cm^-1). Calculations at similar accuracies are extremely challenging for systems containing more than two electrons. The combination of precision measurements of molecular ionization energies with highly accurate ab initio calculations has the potential to provide, in future, fully reliable sets of thermochemical quantities for gas-phase reactions. PMID:23967701
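
    The extrapolation procedure described here rests on the Rydberg formula. In its simplest single-channel form (the actual analyses use multichannel quantum-defect theory and are more involved), the ionization threshold is obtained as:

```latex
% Single-channel Rydberg-series extrapolation (textbook form, for illustration):
E_I \;=\; E(n) \;+\; \frac{hc\,R_M}{\bigl(n-\delta\bigr)^{2}},
\qquad n \to \infty ,
```

    where E(n) are the measured term values of the Rydberg states, R_M is the mass-corrected Rydberg constant, and δ is the quantum defect; fitting the measured E(n) for a series of n and taking the limit n → ∞ yields the ionization energy E_I.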

  12. CDNA CLONING OF FATHEAD MINNOW (PIMEPHALES PROMELAS) ESTROGEN AND ANDROGEN RECEPTORS FOR USE IN STEROID RECEPTOR EXTRAPOLATION STUDIES FOR ENDOCRINE DISRUPTING CHEMICALS

    EPA Science Inventory

    cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals.

    Wilson, V.S., Korte, J., Hartig, P., Ankley, G.T., Gray, L.E., Jr., and Welch, J.E. U.S...

  13. Molecular target sequence similarity as a basis for species extrapolation to assess the ecological risk of chemicals with known modes of action

    EPA Science Inventory

    In practice, it is neither feasible nor ethical to conduct toxicity tests with all species that may be impacted by chemical exposures. Therefore, cross-species extrapolation is fundamental to human health and ecological risk assessment. The extensive chemical universe for which w...

  14. In vitro-in vivo Quantitative Extrapolation of Hepatic Metabolism Data for Fish: A Review of Methods and Strategies for Incorporating Intrinsic Clearance Estimates into Chemical Kinetic Models

    EPA Science Inventory

    The approaches described in this paper will substantially improve risk assessments for compounds that undergo biotransformation. The purpose of this report is to review methods used by mammalian researchers to perform in vitro-in vivo metabolism extrapolations, discuss how these ...

  15. NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF A CORONAL MAGNETIC FLUX ROPE SUPPORTING A LARGE-SCALE SOLAR FILAMENT FROM A PHOTOSPHERIC VECTOR MAGNETOGRAM

    SciTech Connect

    Jiang, Chaowei; Wu, S. T.; Hu, Qiang; Feng, Xueshang E-mail: wus@uah.edu E-mail: fengx@spaceweather.ac.cn

    2014-05-10

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  16. Extrapolating the Acute Behavioral Effects of Toluene from 1-Hour to 24-Hour Exposures in Rats: Roles of Dose Metric, and Metabolic and Behavioral Tolerance.

    EPA Science Inventory

    Recent research on the acute effects of volatile organic compounds (VOCs) suggests that extrapolation from short (~ 1 h) to long durations (up to 4 h) may be improved by using estimates of brain toluene concentration (Br[Tol]) instead of cumulative inhaled dose (C x t) as a metri...

  17. State-of-the-Science Workshop Report: Issues and Approaches in Low Dose–Response Extrapolation for Environmental Health Risk Assessment

    EPA Science Inventory

    Low-dose extrapolation model selection for evaluating the health effects of environmental pollutants is a key component of the risk assessment process. At a workshop held in Baltimore, MD, on April 23-24, 2007, and sponsored by U.S. Environmental Protection Agency (EPA) and Johns...

  18. Insomnia and Telomere Length in Older Adults

    PubMed Central

    Carroll, Judith E.; Esquivel, Stephanie; Goldberg, Alyssa; Seeman, Teresa E.; Effros, Rita B.; Dock, Jeffrey; Olmstead, Richard; Breen, Elizabeth C.; Irwin, Michael R.

    2016-01-01

    Study Objectives: Insomnia, particularly in later life, may raise the risk for chronic diseases of aging and mortality through its effect on cellular aging. The current study examines the effects of insomnia on telomere length, a measure of cellular aging, and tests whether insomnia interacts with chronological age to increase cellular aging. Methods: A total of 126 males and females (60–88 y) were assessed for insomnia using the Diagnostic and Statistical Manual IV criterion for primary insomnia and the International Classification of Sleep Disorders, Second Edition for general insomnia (45 insomnia cases; 81 controls). Telomere length in peripheral blood mononuclear cells (PBMC) was determined using real-time quantitative polymerase chain reaction (qPCR) methodology. Results: In the analysis of covariance model adjusting for body mass index and sex, age (60–69 y versus 70–88 y) and insomnia diagnosis interacted to predict shorter PBMC telomere length (P = 0.04). In the oldest age group (70–88 y), PBMC telomere length was significantly shorter in those with insomnia, mean (standard deviation) M(SD) = 0.59(0.2) compared to controls with no insomnia M(SD) = 0.78(0.4), P = 0.04. In the adults aged 60–69 y, PBMC telomere length was not different between insomnia cases and controls, P = 0.44. Conclusions: Insomnia is associated with shorter PBMC telomere length in adults aged 70–88 y, but not in those younger than 70 y, suggesting that clinically severe sleep disturbances may increase cellular aging, especially in the later years of life. These findings highlight insomnia as a vulnerability factor in later life, with implications for risk for diseases of aging. Citation: Carroll JE, Esquivel S, Goldberg A, Seeman TE, Effros RB, Dock J, Olmstead R, Breen EC, Irwin MR. Insomnia and telomere length in older adults. SLEEP 2016;39(3):559–564. PMID:26715231

  19. Gross anatomical study of spleenic length.

    PubMed

    Chowdhury, Ashraful Islam; Khalil, Mansur; Begum, Jahan Ara; Rahman, M Habibur; Mannan, Sabina; Sultana, Seheli Zannat; Rahman, M Mahbubur; Ahamed, M Sshibbir; Sultana, Zinat Rezina

    2009-01-01

    The aim of the present study was to establish the standard length of the normal spleen in Bangladeshi people. One hundred and twenty human cadavers, of which eighty-seven were male and thirty-three female, were dissected to remove the spleen with associated structures in the morgue of the Forensic Medicine Department of Mymensingh Medical College. Collected specimens were tagged with a specific identification number and divided into five groups according to the age and height of the individual. Gross and fine dissections were carried out after fixing the specimens in 10% formol saline solution. The length of the spleen was measured with a measuring tape and expressed in cm, and the findings of the present study were compared with the findings of national and international studies. This was a cross-sectional descriptive study carried out in the Department of Anatomy of Mymensingh Medical College, Mymensingh. The mean length of the spleen was greatest at 11.20 cm in males in group C (31-45 years) and 11.80 cm in females in group B (16-30 years), and smallest at 10.06 cm in males and 9.53 cm in females in group A (up to 15 years). The differences between groups A and B, A and C, and A and D were statistically significant. There were no significant differences between the other groups. According to the height of the individual, the mean length of the spleen was greatest at 11.42 cm in the 165.01-180 cm height group and smallest at 10.30 cm in the 0-120 cm height group, which indicates that the length of the spleen increases with the height of the individual. It was observed that the length of the spleen depends on the age, sex and body height of the individual. PMID:19377429

  20. Explaining the length threshold of polyglutamine aggregation

    NASA Astrophysics Data System (ADS)

    De Los Rios, Paolo; Hafner, Marc; Pastore, Annalisa

    2012-06-01

    The existence of a length threshold, of about 35 residues, above which polyglutamine repeats can give rise to aggregation and to pathologies, is one of the hallmarks of polyglutamine neurodegenerative diseases such as Huntington’s disease. The reason why such a minimal length exists at all has remained one of the main open issues in research on the molecular origins of such classes of diseases. Following the seminal proposals of Perutz, most research has focused on the hunt for a special structure, attainable only above the minimal length, able to trigger aggregation. Such a structure has remained elusive and there is growing evidence that it might not exist at all. Here we review some basic polymer and statistical physics facts and show that the existence of a threshold is compatible with the modulation that the repeat length imposes on the association and dissociation rates of polyglutamine polypeptides to and from oligomers. In particular, their dramatically different functional dependence on the length rationalizes the very presence of a threshold and hints at the cellular processes that might be at play, in vivo, to prevent aggregation and the consequent onset of the disease.

  1. Delayed Feedback Model of Axonal Length Sensing

    PubMed Central

    Karamched, Bhargav R.; Bressloff, Paul C.

    2015-01-01

    A fundamental question in cell biology is how the sizes of cells and organelles are regulated at various stages of development. Size homeostasis is particularly challenging for neurons, whose axons can extend from hundreds of microns to meters (in humans). Recently, a molecular-motor-based mechanism for axonal length sensing has been proposed, in which axonal length is encoded by the frequency of an oscillating retrograde signal. In this article, we develop a mathematical model of this length-sensing mechanism in which advection-diffusion equations for bidirectional motor transport are coupled to a chemical signaling network. We show that chemical oscillations emerge due to delayed negative feedback via a Hopf bifurcation, resulting in a frequency that is a monotonically decreasing function of axonal length. Knockdown of either kinesin or dynein causes an increase in the oscillation frequency, suggesting that the length-sensing mechanism would produce longer axons, which is consistent with experimental findings. One major prediction of the model is that fluctuations in the transport of molecular motors lead to a reduction in the reliability of the frequency-encoding mechanism for long axons. PMID:25954897

  2. Influence of mandibular length on mouth opening.

    PubMed

    Dijkstra, P U; Hof, A L; Stegenga, B; de Bont, L G

    1999-02-01

    Theoretically, mouth opening not only reflects the mobility of the temporomandibular joints (TMJs) but also the mandibular length. Clinically, the exact relationship between mouth opening, mandibular length, and mobility of TMJs is unclear. To study this relationship 91 healthy subjects, 59 women and 32 men (mean age 27.2 years, s.d. 7.5 years, range 13-56 years) were recruited from the patients of the Department of Oral and Maxillofacial Surgery of University Hospital, Groningen. Mouth opening, mobility of TMJs and mandibular length were measured. The mobility of TMJs was measured as the angular displacement of the mandible relative to the cranium, the angle of mouth opening (AMO). Mouth opening (MO) correlated significantly with mandibular length (ML) (r = 0.36) and AMO (r = 0.66). The regression equation MO = C1 x ML x AMO + C2, in which C1 = 0.53 and C2 = 25.2 mm, correlated well (r = 0.79) with mouth opening. It is concluded that mouth opening reflects both mobility of the TMJs and mandibular length. PMID:10080308
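
    As a quick plausibility check, the reported regression can be evaluated directly. The sketch below uses the quoted coefficients; the assumption that AMO enters in radians (so that C1 is dimensionless and ML × AMO behaves like an arc length) is ours and is not stated in the abstract.

```python
# Evaluate the reported regression MO = C1 * ML * AMO + C2 (C1 = 0.53, C2 = 25.2 mm).
# Assumption (ours): AMO is expressed in radians, so ML * AMO approximates an arc length.
import math

def predicted_mouth_opening(mandibular_length_mm: float, amo_degrees: float) -> float:
    C1, C2 = 0.53, 25.2          # coefficients quoted in the abstract
    amo_radians = math.radians(amo_degrees)
    return C1 * mandibular_length_mm * amo_radians + C2

# Illustrative input: a 100 mm mandible opening through 35 degrees -> roughly 58 mm
print(f"{predicted_mouth_opening(100.0, 35.0):.1f} mm")
```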

  3. Tactile length contraction as Bayesian inference.

    PubMed

    Tong, Jonathan; Ngo, Vy; Goldreich, Daniel

    2016-08-01

    To perceive, the brain must interpret stimulus-evoked neural activity. This is challenging: The stochastic nature of the neural response renders its interpretation inherently uncertain. Perception would be optimized if the brain used Bayesian inference to interpret inputs in light of expectations derived from experience. Bayesian inference would improve perception on average but cause illusions when stimuli violate expectation. Intriguingly, tactile, auditory, and visual perception are all prone to length contraction illusions, characterized by the dramatic underestimation of the distance between punctate stimuli delivered in rapid succession; the origin of these illusions has been mysterious. We previously proposed that length contraction illusions occur because the brain interprets punctate stimulus sequences using Bayesian inference with a low-velocity expectation. A novel prediction of our Bayesian observer model is that length contraction should intensify if stimuli are made more difficult to localize. Here we report a tactile psychophysical study that tested this prediction. Twenty humans compared two distances on the forearm: a fixed reference distance defined by two taps with 1-s temporal separation and an adjustable comparison distance defined by two taps with temporal separation t ≤ 1 s. We observed significant length contraction: As t was decreased, participants perceived the two distances as equal only when the comparison distance was made progressively greater than the reference distance. Furthermore, the use of weaker taps significantly enhanced participants' length contraction. These findings confirm the model's predictions, supporting the view that the spatiotemporal percept is a best estimate resulting from a Bayesian inference process.

  4. Chromosome-length polymorphism in fungi.

    PubMed Central

    Zolan, M E

    1995-01-01

    The examination of fungal chromosomes by pulsed-field gel electrophoresis has revealed that length polymorphism is widespread in both sexual and asexual species. This review summarizes characteristics of fungal chromosome-length polymorphism and possible mitotic and meiotic mechanisms of chromosome length change. Most fungal chromosome-length polymorphisms are currently uncharacterized with respect to content and origin. However, it is clear that long tandem repeats, such as tracts of rRNA genes, are frequently variable in length and that other chromosomal rearrangements are suppressed during normal mitotic growth. Dispensable chromosomes and dispensable chromosome regions, which have been well documented for some fungi, also contribute to the variability of the fungal karyotype. For sexual species, meiotic recombination increases the overall karyotypic variability in a population while suppressing genetic translocations. The range of karyotypes observed in fungi indicates that many karyotypic changes may be genetically neutral, at least under some conditions. In addition, new linkage combinations of genes may also be advantageous in allowing adaptation of fungi to new environments. PMID:8531892

  5. The evolution mechanism of intron length.

    PubMed

    Zhang, Qiang; Li, Hong; Zhao, Xiao-Qing; Xue, Hui; Zheng, Yan; Meng, Hu; Jia, Yun; Bo, Su-Ling

    2016-08-01

    Within two years of their discovery in 1977, introns were found to have a positive effect on gene expression. Our results show that introns can influence gene expression and regulation through interaction with corresponding mRNA sequences. Based on the Smith-Waterman method, local alignment was used to obtain the optimal matched segments between intron sequences and mRNA sequences. By studying the distribution of the optimal matching regions on intron sequences of ribosomal protein genes from about 27 species, we find that intron length evolution proceeds from the 5' end to the 3' end, increasing by one structural unit at a time, which suggests a possible mechanism for intron length evolution. The structural units of the intron are conserved at a length of about 60 bp, but the length of the linker sequence between structural units varies considerably. Interestingly, the distributions of the length and matching rate of the optimal matched segments are consistent with the sequence features of miRNA and siRNA. These results indicate that the interaction between intron sequences and mRNA sequences is a kind of functional RNA-RNA interaction, and that the two kinds of sequences have co-evolved and interact to perform their functions. PMID:27449197

  6. Functional scoliosis caused by leg length discrepancy

    PubMed Central

    Daniszewska, Barbara; Zolynski, Krystian

    2010-01-01

    Introduction Leg length discrepancy (LLD) causes pelvic obliquity in the frontal plane and lumbar scoliosis with convexity towards the shorter extremity. Leg length discrepancy is observed in 3-15% of the population. An unequalized lower limb length discrepancy leads to posture deformation, gait asymmetry, low back pain and discopathy. Material and methods In the years 1998-2006, 369 children, aged 5 to 17 years (209 girls, 160 boys) with LLD-related functional scoliosis were treated. An external or internal shoe lift was applied. Results Among 369 children the discrepancy of 0.5 cm was observed in 27, 1 cm in 329, 1.5 cm in 9 and 2 cm in 4 children. During the first follow-up examination, within 2 weeks, the adjustment of the spine to new static conditions was noted, with correction of the curve in 316 of the examined children (83.7%). In 53 children (14.7%) the correction was observed later and was accompanied by slight low back pain. The time needed for real equalization of the limbs ranged from 3 to 24 months, with a mean of 11.3 months. Conclusions Leg length discrepancy equalization results in elimination of scoliosis. Leg length discrepancy < 2 cm is a static disorder; that is why measurements should be performed in a standing position using blocks of adequate thickness and the position of the posterior superior iliac spine should be estimated. PMID:22371777

  7. Altered Maxwell equations in the length gauge

    NASA Astrophysics Data System (ADS)

    Reiss, H. R.

    2013-09-01

    The length gauge uses a scalar potential to describe a laser field, thus treating it as a longitudinal field rather than as a transverse field. This distinction is manifested by the fact that the Maxwell equations that relate to the length gauge are not the same as those for transverse fields. In particular, a source term is necessary in the length-gauge Maxwell equations, whereas the Coulomb-gauge description of plane waves possesses the basic property of transverse fields that they propagate with no source terms at all. This difference is shown to be importantly consequential in some previously unremarked circumstances; and it explains why the Göppert-Mayer gauge transformation does not provide the security that might be expected of full gauge equivalence.

  8. Polymers for gene delivery across length scales

    NASA Astrophysics Data System (ADS)

    Putnam, David

    2006-06-01

    A number of human diseases stem from defective genes. One approach to treating such diseases is to replace, or override, the defective genes with normal genes, an approach called 'gene therapy'. However, the introduction of correctly functioning DNA into cells is a non-trivial matter, and cells must be coaxed to internalize, and then use, the DNA in the desired manner. A number of polymer-based synthetic systems, or 'vectors', have been developed to entice cells to use exogenous DNA. These systems work across the nano, micro and macro length scales, and have been under continuous development for two decades, with varying degrees of success. The design criteria for the construction of more-effective delivery vectors at each length scale are continually evolving. This review focuses on the most recent developments in polymer-based vector design at each length scale.

  9. Particle Swarm Optimization with Dynamic Step Length

    NASA Astrophysics Data System (ADS)

    Cui, Zhihua; Cai, Xingjuan; Zeng, Jianchao; Sun, Guoji

    Particle swarm optimization (PSO) is a robust swarm-intelligence technique inspired by bird flocking and fish schooling. Though many effective improvements have been proposed, premature convergence is still its main problem. Because each particle's movement is a continuous process and can be modelled with systems of differential equations, a new variant, particle swarm optimization with dynamic step length (PSO-DSL), with an additional control coefficient, the step length, is introduced. Absolute stability theory is then used to analyze the stability of the standard PSO; the theoretical result indicates that PSO with a constant step length cannot always be stable, which may be one of the reasons for premature convergence. Simulation results show that PSO-DSL is effective.
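
    The abstract does not give the PSO-DSL update rule itself, so the sketch below only illustrates where a step-length coefficient enters a standard PSO position update; the decaying schedule and all parameter values are illustrative assumptions, not the algorithm of the paper.

```python
# Generic PSO with an explicit step-length factor scaling the position update.
# Illustrative sketch only; this is not the PSO-DSL algorithm described above.
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    for k in range(iters):
        step = 1.0 / (1.0 + k / iters)   # example step-length schedule (assumed)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + step * v                 # the step length scales the position update
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

best_x, best_f = pso_minimize(lambda z: float(np.sum(z * z)), dim=5)
print(best_f)   # should approach 0 for the sphere test function
```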

  10. Interpolating and Extrapolating Contaminant Concentrations from Monitor Wells to Model Grids for Fate-and-Transport Calculations

    SciTech Connect

    Ward, D. B.; Clement, P.; Bostick, K.

    2002-02-26

    Geostatistical interpolation of groundwater characterization data to visualize contaminant distributions in three dimensions is often hindered by the sparse distribution of samples relative to the size of the plume and scale of heterogeneities. Typically, placement of expensive monitoring wells is guided by the conceptualized plume rather than geostatistical considerations, focusing on contaminated areas rather than thoroughly gridding the plume boundary. The resulting data sets require careful analysis in order to produce plausible plume shells. A purely geostatistical approach is usually impractical; kriging parameters based on the observed data structure can extrapolate contamination far beyond the demonstrated extent of the plume. When more appropriate kriging parameters are selected, holes often occur in the interpolated distribution because realistic kriging ranges may not bridge large gaps between data points. Such artifacts obscure the probable location of the plume boundary and distort the contaminant distribution, obstructing quantitative modeling of remedial strategies. Two methods of constraining kriging can successfully eliminate these geostatistical artifacts. Laterally, the plume boundary may be controlled using a manually constructed mask that delineates the plan-view extent of the plume. After kriging, the mask is used to set all grid cells outside of the plume to a concentration of zero. Use of non-zero control points is a more refined but laborious approach that also bridges data gaps within the body of a plume and permits use of tighter kriging parameters. These can be obtained by manual linear interpolation between measured samples, or derived from historical data migrated along flow paths while accounting for all attenuative processes. Masking and use of non-zero control points result in a plume shell that reflects the intuition and professional judgment of the hydrologist, and can be interpolated automatically to any desired grid, providing
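
    A minimal sketch of the masking step described above: interpolate sparse well data to a grid, then zero every cell outside a hand-drawn plume outline. SciPy's griddata stands in here for a kriging interpolator, and all coordinates and concentrations are made-up placeholders.

```python
# Interpolate sparse monitor-well data to a grid, then apply a plan-view plume mask.
# griddata stands in for a kriging interpolator; all values below are placeholders.
import numpy as np
from scipy.interpolate import griddata
from matplotlib.path import Path

wells_xy = np.array([[10.0, 20.0], [40.0, 25.0], [70.0, 30.0], [55.0, 50.0]])
conc = np.array([120.0, 300.0, 15.0, 40.0])                       # measured concentrations
plume_outline = np.array([[5, 5], [85, 10], [80, 55], [10, 50]])  # manually drawn mask

gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 80, 81))
grid_conc = griddata(wells_xy, conc, (gx, gy), method="linear", fill_value=0.0)

inside = Path(plume_outline).contains_points(np.column_stack([gx.ravel(), gy.ravel()]))
grid_conc = np.where(inside.reshape(gx.shape), grid_conc, 0.0)    # zero cells outside the plume
```

    Non-zero control points, as described in the abstract, would simply be appended to wells_xy and conc before interpolation so that data gaps inside the plume are bridged with plausible values rather than left to the interpolator.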

  11. Using dietary exposure and physiologically based pharmacokinetic/pharmacodynamic modeling in human risk extrapolations for acrylamide toxicity.

    PubMed

    Doerge, Daniel R; Young, John F; Chen, James J; Dinovi, Michael J; Henry, Sara H

    2008-08-13

    The discovery of acrylamide (AA) in many common cooked starchy foods has presented significant challenges to toxicologists, food scientists, and national regulatory and public health organizations because of the potential for producing neurotoxicity and cancer. This paper reviews some of the underlying experimental bases for AA toxicity and earlier risk assessments. Then, dietary exposure modeling is used to estimate probable AA intake in the U.S. population, and physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) modeling is used to integrate the findings of rodent neurotoxicity and cancer into estimates of risks from human AA exposure through the diet. The goal of these modeling techniques is to reduce the uncertainty inherent in extrapolating toxicological findings across species and dose by comparing common exposure biomarkers. PBPK/PD modeling estimated population-based lifetime excess cancer risks from average AA consumption in the diet in the range of 1-4 x 10^-4; however, modeling did not support a link between dietary AA exposure and human neurotoxicity because marginal exposure ratios were 50- to 300-fold lower than in rodents. In addition, dietary exposure modeling suggests that because AA is found in so many common foods, even big changes in concentration for single foods or groups of foods would probably have a small impact on overall population-based intake and risk. These results suggest that a more holistic analysis of dietary cancer risks may be appropriate, by which potential risks from AA should be considered in conjunction with other risks and benefits from foods. PMID:18624435

  12. Cross-Species Extrapolation of Prediction Model for Lead Transfer from Soil to Corn Grain under Stress of Exogenous Lead

    PubMed Central

    Li, Zhaojun; Yang, Hua; Li, Yupeng; Long, Jian; Liang, Yongchao

    2014-01-01

    There has been increasing concern in recent years regarding lead (Pb) transfer in the soil-plant system. In this study the transfer of Pb (exogenous salts) from a wide range of Chinese soils to corn grain (Zhengdan 958) was investigated. Prediction models were developed by relating the Pb bioconcentration factor (BCF) of Zhengdan 958 to soil pH, organic matter (OM) content, and cation exchange capacity (CEC) through multiple stepwise regression. These prediction models derived from Zhengdan 958 were then applied to other non-model corn species through a cross-species extrapolation approach. The results showed that soil pH and OM were the major factors controlling Pb transfer from soil to corn grain: lower pH and lower OM increased the bioaccumulation of Pb in corn grain. No significant differences were found between the two prediction models derived from the different exogenous Pb contents. When the prediction models were applied to other non-model corn species, the ratios between predicted and measured BCF values fell within a 2-fold interval, close to the 1:1 line. Moreover, the prediction model for the high-Pb treatment, log[BCF] = −0.098 pH − 0.150 log[OM] − 1.894, effectively reduced the measured intra-species BCF variability for all non-model corn species. This suggests that the model derived from the high Pb content is better suited for predicting Pb bioconcentration in the grain of other non-model corn species and for assessing the ecological risk of Pb in different agricultural soils. PMID:24416440
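
    A small sketch applying the quoted high-Pb regression; base-10 logarithms and the study's OM units are assumed, and the example soil values are hypothetical.

        # Sketch of the high-Pb prediction model quoted above
        # (log10 assumed; OM in the units used by the study).
        import math

        def predict_bcf(soil_pH, organic_matter):
            """Predicted Pb bioconcentration factor for corn grain (Zhengdan 958)."""
            log_bcf = -0.098 * soil_pH - 0.150 * math.log10(organic_matter) - 1.894
            return 10 ** log_bcf

        # Example: lower pH and OM give a higher predicted BCF, as the study reports.
        print(predict_bcf(5.0, 10.0))   # acidic, low-OM soil
        print(predict_bcf(8.0, 30.0))   # alkaline, high-OM soil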

  13. Spectral Irradiance Calibration in the Infrared. Part 7; New Composite Spectra, Comparison with Model Atmospheres, and Far-Infrared Extrapolations

    NASA Technical Reports Server (NTRS)

    Cohen, Martin; Witteborn, Fred C.; Carbon, Duane F.; Davies, John K.; Wooden, Diane H.; Bregman, Jesse D.

    1996-01-01

    We present five new absolutely calibrated continuous stellar spectra constructed as far as possible from spectral fragments observed from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer. These stars (alpha Boo, gamma Dra, alpha Cet, gamma Cru, and mu UMa) augment our six published, absolutely calibrated spectra of K and early-M giants. All spectra have a common calibration pedigree. A revised composite for alpha Boo has been constructed from higher quality spectral fragments than our previously published one. The spectrum of gamma Dra was created in direct response to the needs of instruments aboard the Infrared Space Observatory (ISO); this star's location near the north ecliptic pole renders it highly visible throughout the mission. We compare all our low-resolution composite spectra with Kurucz model atmospheres and find good agreement in shape, with the obvious exception of the SiO fundamental, still lacking in current grids of model atmospheres. The CO fundamental seems slightly too deep in these models, but this could reflect our use of generic models with solar metal abundances rather than models specific to the metallicities of the individual stars. Angular diameters derived from these spectra and models are in excellent agreement with the best observed diameters. The ratio of our adopted Sirius and Vega models is vindicated by spectral observations. We compare IRAS fluxes predicted from our cool stellar spectra with those observed and conclude that, at 12 and 25 microns, flux densities measured by IRAS should be revised downwards by about 4.1% and 5.7%, respectively, for consistency with our absolute calibration. We have provided extrapolated continuum versions of these spectra to 300 microns, in direct support of ISO (PHT and LWS instruments). These spectra are consistent with IRAS flux densities at 60 and 100 microns.

  14. Telomerase Activity and Telomere Length in Daphnia

    PubMed Central

    Schumpert, Charles; Nelson, Jacob; Kim, Eunsuk; Dudycha, Jeffry L.; Patel, Rekha C.

    2015-01-01

    Telomeres, comprised of short repetitive sequences, are essential for genome stability and have been studied in relation to cellular senescence and aging. Telomerase, the enzyme that adds telomeric repeats to chromosome ends, is essential for maintaining the overall telomere length. A lack of telomerase activity in mammalian somatic cells results in progressive shortening of telomeres with each cellular replication event. Mammals exhibit high rates of cell proliferation during embryonic and juvenile stages but very little somatic cell proliferation occurs during adult and senescent stages. The telomere hypothesis of cellular aging states that telomeres serve as an internal mitotic clock and telomere length erosion leads to cellular senescence and eventual cell death. In this report, we have examined telomerase activity, processivity, and telomere length in Daphnia, an organism that grows continuously throughout its life. Similar to insects, Daphnia telomeric repeat sequence was determined to be TTAGG and telomerase products with five-nucleotide periodicity were generated in the telomerase activity assay. We investigated telomerase function and telomere lengths in two closely related ecotypes of Daphnia with divergent lifespans, short-lived D. pulex and long-lived D. pulicaria. Our results indicate that there is no age-dependent decline in telomere length, telomerase activity, or processivity in short-lived D. pulex. On the contrary, a significant age dependent decline in telomere length, telomerase activity and processivity is observed during life span in long-lived D. pulicaria. While providing the first report on characterization of Daphnia telomeres and telomerase activity, our results also indicate that mechanisms other than telomere shortening may be responsible for the strikingly short life span of D. pulex. PMID:25962144

  15. Telomerase activity and telomere length in Daphnia.

    PubMed

    Schumpert, Charles; Nelson, Jacob; Kim, Eunsuk; Dudycha, Jeffry L; Patel, Rekha C

    2015-01-01

    Telomeres, comprised of short repetitive sequences, are essential for genome stability and have been studied in relation to cellular senescence and aging. Telomerase, the enzyme that adds telomeric repeats to chromosome ends, is essential for maintaining the overall telomere length. A lack of telomerase activity in mammalian somatic cells results in progressive shortening of telomeres with each cellular replication event. Mammals exhibit high rates of cell proliferation during embryonic and juvenile stages but very little somatic cell proliferation occurs during adult and senescent stages. The telomere hypothesis of cellular aging states that telomeres serve as an internal mitotic clock and telomere length erosion leads to cellular senescence and eventual cell death. In this report, we have examined telomerase activity, processivity, and telomere length in Daphnia, an organism that grows continuously throughout its life. Similar to insects, Daphnia telomeric repeat sequence was determined to be TTAGG and telomerase products with five-nucleotide periodicity were generated in the telomerase activity assay. We investigated telomerase function and telomere lengths in two closely related ecotypes of Daphnia with divergent lifespans, short-lived D. pulex and long-lived D. pulicaria. Our results indicate that there is no age-dependent decline in telomere length, telomerase activity, or processivity in short-lived D. pulex. On the contrary, a significant age dependent decline in telomere length, telomerase activity and processivity is observed during life span in long-lived D. pulicaria. While providing the first report on characterization of Daphnia telomeres and telomerase activity, our results also indicate that mechanisms other than telomere shortening may be responsible for the strikingly short life span of D. pulex.

  16. Localization length fluctuation in randomly layered media

    NASA Astrophysics Data System (ADS)

    Yuan, Haiming; Huang, Feng; Jiang, Xiangqian; Sun, Xiudong

    2016-10-01

    Localization properties of the two-component randomly layered media (RLM) are studied in detail both analytically and numerically. The localization length is found to fluctuate around the analytical result obtained in the high-frequency limit. The fluctuation amplitude approaches zero with increasing disorder, which is characterized by the distribution width of the random thickness. It is also found that the localization length, normalized by the mean thickness, varies periodically with the distribution center of the random thickness. For multi-component RLM structures, the arrangement of the materials must also be considered.

  17. The minimal length and quantum partition functions

    NASA Astrophysics Data System (ADS)

    Abbasiyan-Motlaq, M.; Pedram, P.

    2014-08-01

    We study the thermodynamics of various physical systems in the framework of the generalized uncertainty principle that implies a minimal length uncertainty proportional to the Planck length. We present a general scheme to analytically calculate the quantum partition function of the physical systems to first order of the deformation parameter based on the behavior of the modified energy spectrum and compare our results with the classical approach. Also, we find the modified internal energy and heat capacity of the systems for the anti-Snyder framework.

  18. Environmental correlates of food chain length.

    PubMed

    Briand, F; Cohen, J E

    1987-11-13

    In 113 community food webs from natural communities, the average and maximal lengths of food chains are independent of primary productivity, contrary to the hypothesis that longer food chains should arise when more energy is available at their base. Environmental variability alone also does not appear to constrain average or maximal chain length. Environments that are three dimensional or solid, however, such as a forest canopy or the water column of the open ocean, have distinctly longer food chains than environments that are two dimensional or flat, such as a grassland or lake bottom.

  19. How Cells Measure Length on Subcellular Scales.

    PubMed

    Marshall, Wallace F

    2015-12-01

    Cells are not just amorphous bags of enzymes, but precise and complex machines. With any machine, it is important that the parts be of the right size, yet our understanding of the mechanisms that control size of cellular structures remains at a rudimentary level in most cases. One problem with studying size control is that many cellular organelles have complex 3D structures that make their size hard to measure. Here we focus on linear structures within cells, for which the problem of size control reduces to the problem of length control. We compare and contrast potential mechanisms for length control to understand how cells solve simple geometry problems. PMID:26437596

  20. Crystal diffraction lens with variable focal length

    DOEpatents

    Smither, Robert K.

    1991-01-01

    A method and apparatus for altering the focal length of a focusing element to one of a plurality of pre-determined focal lengths by changing heat transfer within selected portions of the element by controlled quantities. Control over heat transfer is accomplished by manipulating one or more of a number of variables, including: the amount of heat or cold applied to surfaces; type of fluids pumped through channels for heating and cooling; temperatures, directions of flow and rates of flow of fluids; and placement of channels.

  1. Crystal diffraction lens with variable focal length

    DOEpatents

    Smither, R.K.

    1991-04-02

    A method and apparatus for altering the focal length of a focusing element to one of a plurality of pre-determined focal lengths by changing heat transfer within selected portions of the element by controlled quantities is disclosed. Control over heat transfer is accomplished by manipulating one or more of a number of variables, including: the amount of heat or cold applied to surfaces; type of fluids pumped through channels for heating and cooling; temperatures, directions of flow and rates of flow of fluids; and placement of channels. 19 figures.

  2. Apparatus for fabricating continuous lengths of superconductor

    DOEpatents

    Kroeger, Donald M.; List, III, Frederick A.

    2002-01-01

    A process and apparatus for manufacturing a superconductor. The process is accomplished by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon, overlaying a continuous length of a second substrate ribbon on said first substrate ribbon, and applying sufficient pressure to form a bound layered superconductor comprising a layer of said superconducting precursor powder between said first substrate ribbon and said second substrate ribbon. The layered superconductor is then heat treated to establish the superconducting phase of said superconductor precursor powder.

  3. Apparatus for fabricating continuous lengths of superconductor

    DOEpatents

    Kroeger, Donald M.; List, III, Frederick A.

    2001-01-01

    A process and apparatus for manufacturing a superconductor. The process is accomplished by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon, overlaying a continuous length of a second substrate ribbon on said first substrate ribbon, and applying sufficient pressure to form a bound layered superconductor comprising a layer of said superconducting precursor powder between said first substrate ribbon and said second substrate ribbon. The layered superconductor is then heat treated to establish the superconducting phase of said superconductor precursor powder.

  4. Sighting optics including an optical element having a first focal length and a second focal length

    SciTech Connect

    Crandall, David Lynn

    2011-08-01

    One embodiment of sighting optics according to the teachings provided herein may include a front sight and a rear sight positioned in spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus, for a user, images of the front sight and the target.

  5. Study of tooth length and working length of first permanent molar in Bangladeshi people.

    PubMed

    Alam, M S; Aziz-us-salam; Prajapati, K; Rai, Pujan; Molla, A A

    2004-04-01

    A study of 428 endodontically treated first permanent molar teeth of both jaws was performed to identify the variation in tooth length of Bangladeshi people. A radiographic approach (Ingle's method) was used together with the mathematical calculation proposed by Messing to measure the length of each individual canal. The method involves measuring the length on a preoperative radiograph, followed by clinical evaluation with a diagnostic radiograph; the working length of each canal was then calculated by comparing the pre-operative and diagnostic radiographs. The study revealed that the average length of the upper first molar is 20.62 mm and of the lower first molar is 20.28 mm; the length range for the upper first molar is 17.16 mm - 25.33 mm and for the lower 16 mm - 24 mm. The study also found that tooth length does not differ significantly between the sexes within the same population. To verify the results, statistical tests were applied to a randomly selected sample of 100 patients, and these tests support the findings of the study. The results also indicate that the tooth length of Bangladeshi people is shorter than that of their Caucasian counterparts; previous studies by different researchers, reported in various endodontic textbooks, give longer tooth lengths for Caucasian people than those found here. PMID:15376468

  6. Relationship of gestation length to stillbirth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Stillbirth (SB) genetic evaluations recently instituted reflect increased interest in broadening the array of traits considered in assessing overall genetic merit. Gestation length (GL) is not yet evaluated in the United States, but has economic and managemental impacts. The relationship of SB with ...

  7. Hydrodynamic slip length as a surface property.

    PubMed

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G P

    2016-02-01

    Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems. PMID:26986407

  8. Bunch length measurements using synchrotron light monitor

    SciTech Connect

    Ahmad, Mahmoud; Tiefenback, Michael G.

    2015-09-01

    The bunch length is measured at CEBAF using an invasive technique. The technique depends on applying an energy chirp to the electron bunch and imaging it through a dispersive region. The measurements are taken through Arc1 and Arc2 at CEBAF. The fundamental equations, procedure, and the latest results are given.

  9. Exploring Segment Lengths on the Geoboard

    ERIC Educational Resources Information Center

    Ellis, Mark W.; Pagni, David

    2008-01-01

    Given a 5-peg by 5-peg geoboard, how many different lengths can be made by stretching a rubber band to form an oblique segment between any two pegs? This investigation requires students to make connections to the Pythagorean theorem, congruence, and combinations. With its use of visual representation and a range of mathematical ideas that can be…

  10. Coding Ropes For Length And Speed Measurements

    NASA Technical Reports Server (NTRS)

    Rupp, Charles C.; Tiesenhausen, Georg Von

    1988-01-01

    Ferromagnetic staples serve as markers. Like crude magnetic-tape-playback head, sensor detects ferromagnetic staples as rope is unwound or wound. Pulses from staples analyzed electronically; numbers of pulses and intervals between them interpreted in terms of velocity of rope and length paid out. Adaptable to laying submarine cables and construction of suspension bridges.

  11. Hydrodynamic slip length as a surface property

    NASA Astrophysics Data System (ADS)

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.

    2016-02-01

    Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems.

  12. The persistence length of adsorbed dendronized polymers.

    PubMed

    Grebikova, Lucie; Kozhuharov, Svilen; Maroni, Plinio; Mikhaylov, Andrey; Dietler, Giovanni; Schlüter, A Dieter; Ullner, Magnus; Borkovec, Michal

    2016-07-21

    The persistence length of cationic dendronized polymers adsorbed onto oppositely charged substrates was studied by atomic force microscopy (AFM) and quantitative image analysis. One can find that a decrease in the ionic strength leads to an increase of the persistence length, but the nature of the substrate and of the generation of the side dendrons influence the persistence length substantially. The strongest effects as the ionic strength is being changed are observed for the fourth generation polymer adsorbed on mica, which is a hydrophilic and highly charged substrate. However, the observed dependence on the ionic strength is much weaker than the one predicted by the Odijk, Skolnik, and Fixman (OSF) theory for semi-flexible chains. Low-generation polymers show a variation with the ionic strength that resembles the one observed for simple and flexible polyelectrolytes in solution. For high-generation polymers, this dependence is weaker. Similar dependencies are found for silica and gold substrates. The observed behavior is probably caused by different extents of screening of the charged groups, which is modified by the polymer generation, and to a lesser extent, the nature of the substrate. For highly ordered pyrolytic graphite (HOPG), which is a hydrophobic and weakly charged substrate, the electrostatic contribution to the persistence length is much smaller. In the latter case, we suspect that specific interactions between the polymer and the substrate also play an important role. PMID:27353115

  13. Report of the magnet length workshop

    SciTech Connect

    1985-12-31

    A meeting was held at the Central Design Group (CDG), to discuss magnet length and to recommend a length for the planned Conceptual Design Report (CDR) as well as for magnet R and D. This report is a summary of the findings. Included is the letter from C. Taylor, CDG, convening the meeting, the proposed agenda, a summary of the results, and an appendix containing information presented at the meeting. The discussion mainly centered around 4, 5, and 6 dipoles per (100 m) half-cell. The magnetic lengths are approximately 16.6 m per dipole (the ROS length as well as that of the first R and D magnet now under construction), 20.75 m for four dipoles per half-cell, and 13.8 m for six dipoles per half-cell. Cost estimates are given. The apparent cost advantage of the longer units could be partially offset if the aperture can be adjusted to take advantage of a more uniform average magnetic field that could be realized by sorting. This sorting can be more effective with 20% more (shorter) magnets in the machine.

  14. Bunch Length Measurements at JLab FEL

    SciTech Connect

    P. Evtushenko; J. L. Coleman; K. Jordan; J. M. Klopf; G. Neil; G. P. Williams

    2006-09-01

    The JLab FEL is routinely operated with sub-picosecond bunches. The short bunch length is important for high gain of the FEL. Coherent transition radiation has been used for the bunch length measurements for many years. This diagnostic can be used only in the pulsed beam mode. It is our goal to run the FEL with CW beam at a 74.85 MHz micropulse repetition rate. Hence it is very desirable to be able to measure the bunch length when running CW beam at any micropulse frequency. We use a Fourier transform infrared interferometer, which is essentially a Michelson interferometer, to measure the spectrum of the coherent synchrotron radiation generated in the last dipole of the magnetic bunch compressor upstream of the FEL wiggler. This noninvasive diagnostic provides the bunch length measurements for CW beam operation at any micropulse frequency. We also compare the measurements made with the help of the FTIR interferometer with the data obtained by the Martin-Puplett interferometer. Results of the two diagnostics usually agree within 15%. Here we present a description of the experimental setup, the data evaluation procedure, and results of the beam measurements.

  15. Hydrodynamic slip length as a surface property.

    PubMed

    Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G P

    2016-02-01

    Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems.

  16. Fall Colors, Temperature, and Day Length

    ERIC Educational Resources Information Center

    Burton, Stephen; Miller, Heather; Roossinck, Carrie

    2007-01-01

    Along with the bright hues of orange, red, and yellow, the season of fall represents significant changes, such as day length and temperature. These changes provide excellent opportunities for students to use science process skills to examine how abiotic factors such as weather and temperature impact organisms. In this article, the authors describe…

  17. Quark screening lengths in finite temperature QCD

    SciTech Connect

    Gocksch, A. (California Univ., Santa Barbara, CA. Inst. for Theoretical Physics)

    1990-11-01

    We have computed Landau gauge quark propagators in both the confined and deconfined phase of QCD. I discuss the magnitude of the resulting screening lengths as well as aspects of chiral symmetry relevant to the quark propagator. 12 refs., 1 fig., 1 tab.

  18. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
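
    As an illustration of the kind of option codes such an adaptive coder selects among, the sketch below encodes and decodes a nonnegative integer with a generic Golomb-Rice code of parameter k; it is a textbook construction, not the coder analyzed in the report.

        def rice_encode(n, k):
            """Golomb-Rice codeword for nonnegative integer n with parameter k."""
            q, r = n >> k, n & ((1 << k) - 1)
            unary = "1" * q + "0"                  # quotient in unary, '0'-terminated
            return (unary + format(r, "0{}b".format(k))) if k > 0 else unary

        def rice_decode(bits, k):
            q = bits.index("0")                    # length of the unary prefix
            r = int(bits[q + 1:q + 1 + k], 2) if k > 0 else 0
            return (q << k) | r

        for n in (0, 3, 9, 20):
            codeword = rice_encode(n, k=2)
            assert rice_decode(codeword, k=2) == n
            print(n, codeword)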

  19. Short day lengths delay reproductive aging.

    PubMed

    Place, Ned J; Tuthill, Christiana R; Schoomer, Elanor E; Tramontin, Anthony D; Zucker, Irving

    2004-09-01

    Caloric restriction and hormone treatment delay reproductive senescence in female mammals, but a natural model of decelerated reproductive aging does not presently exist. In addition to describing such a model, this study shows that an abiotic signal (photoperiod) can induce physiological changes that slow senescence. Relative to animals born in April, rodents born in September delay their first reproductive effort by up to 7 mo, at which age reduced fertility is expected. We tested the hypothesis that the shorter day lengths experienced by late-born Siberian hamsters ameliorate the reproductive decline associated with advancing age. Short-day females (10L:14D) achieved puberty at a much later age than long-day animals (14L:10D) and had twice as many ovarian primordial follicles. At 10 mo of age, 86% of females previously maintained in short day lengths produced litters, compared with 58% of their long day counterparts. Changes in pineal gland production of melatonin appear to mediate the effects of day length on reproductive aging; only 30% of pinealectomized females housed in short days produced litters. Exposure to short days induces substantial decreases in voluntary food intake and body mass, reduced ovarian estradiol secretion, and enhanced production of melatonin. One or more of these changes may account for the protective effect of short day lengths on female reproduction. In delaying reproductive senescence, the decrease in day length after the summer solstice is of presumed adaptive significance for offspring born late in the breeding season that first breed at an advanced chronological age.

  20. It's about Time! Increasing the Length of Student Classroom Writing without Setting Length Constraints.

    ERIC Educational Resources Information Center

    Passman, Roger

    This paper grew out of the collaborative relationship that emerged from in-class modeling of student-centered writing approaches as participating teachers and a consultant/researcher began to explore ways to increase the length of fourth-grade writing. The paper reports on a small study in fourth-grade writing aimed at increasing the length of…

  1. Individual sarcomere lengths in whole muscle fibers and optimal fiber length computation.

    PubMed

    Infantolino, Benjamin W; Ellis, Michael J; Challis, John H

    2010-11-01

    Estimation of muscle fiber optimum length is typically accomplished using either laser diffraction or by counting the number of sarcomeres in a portion of the muscle fiber, measuring the distance that encompasses those sarcomeres and dividing by the number of sarcomeres to obtain an average sarcomere length. If the sarcomeres are not uniformly distributed, either of these techniques could produce errors when estimating optimum lengths. The purposes of this study were: to describe new software that automatically analyzes digital images of skeletal muscle fibers to measure individual sarcomere lengths; and to use this software to measure individual sarcomere lengths along complete muscle fibers to examine the influence of computing whole muscle fiber properties from portions of the fiber. Six complete muscle fibers were imaged using a digital camera attached to a microscope. The images were then processed to achieve the best resolution possible, individual sarcomeres along the image were detected, and each individual sarcomere length was measured. The software accuracy was compared with that of manual measurement and was found to be as accurate. In addition, the time to measure individual sarcomere lengths was greatly reduced using the software compared with manual measurement. The arrangement of individual sarcomere lengths demonstrated long-range correlations, which indicates problems in assuming only a portion of a fiber can be used to determine whole fiber properties. This study has provided evidence on the number of sarcomeres which must be analyzed to infer the properties of whole muscles.
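
    A hedged sketch of the measurement the software automates: given a one-dimensional intensity profile sampled along a fiber image and the pixel size, individual sarcomere lengths can be taken as the spacings between successive striation peaks. This is not the authors' software; the peak-detection call and parameter values are assumptions.

        # Hedged sketch (not the authors' software): individual sarcomere lengths
        # from a 1-D intensity profile taken along a fiber image, via striation
        # peak detection. The pixel size and the profile itself are assumed inputs.
        import numpy as np
        from scipy.signal import find_peaks

        def sarcomere_lengths(intensity_profile, um_per_pixel, min_separation_px=5):
            """Return every individual sarcomere length (in micrometers)."""
            peaks, _ = find_peaks(intensity_profile, distance=min_separation_px)
            return np.diff(peaks) * um_per_pixel

        # Example with a synthetic striation pattern of ~2.5 um spacing.
        x = np.arange(2000)
        profile = np.sin(2 * np.pi * x / 25.0)        # 25-pixel period
        lengths = sarcomere_lengths(profile, um_per_pixel=0.1)
        print(lengths.mean(), lengths.std())          # ~2.5 um, near-zero spread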

  2. On Sources of the Word Length Effect in Young Readers

    ERIC Educational Resources Information Center

    Gagl, Benjamin; Hawelka, Stefan; Wimmer, Heinz

    2015-01-01

    We investigated how letter length, phoneme length, and consonant clusters contribute to the word length effect in 2nd- and 4th-grade children. They read words from three different conditions: In one condition, letter length increased but phoneme length did not due to multiletter graphemes (H"aus"-B"auch"-S"chach"). In…

  3. Employing genome-wide SNP discovery and genotyping strategy to extrapolate the natural allelic diversity and domestication patterns in chickpea

    PubMed Central

    Kujur, Alice; Bajaj, Deepak; Upadhyaya, Hari D.; Das, Shouvik; Ranjan, Rajeev; Shree, Tanima; Saxena, Maneesha S.; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C. L. L.; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K.; Parida, Swarup K.

    2015-01-01

    The genome-wide discovery and high-throughput genotyping of SNPs in chickpea natural germplasm lines is indispensable to extrapolate their natural allelic diversity, domestication, and linkage disequilibrium (LD) patterns leading to the genetic enhancement of this vital legume crop. We discovered 44,844 high-quality SNPs by sequencing of 93 diverse cultivated desi, kabuli, and wild chickpea accessions using reference genome- and de novo-based GBS (genotyping-by-sequencing) assays that were physically mapped across eight chromosomes of desi and kabuli. Of these, 22,542 SNPs were structurally annotated in different coding and non-coding sequence components of genes. Genes with 3296 non-synonymous and 269 regulatory SNPs could functionally differentiate accessions based on their contrasting agronomic traits. A high experimental validation success rate (92%) and reproducibility (100%) along with strong sensitivity (93–96%) and specificity (99%) of GBS-based SNPs were observed. This attests to the robustness of GBS as a high-throughput assay for rapid large-scale mining and genotyping of genome-wide SNPs in chickpea with sub-optimal use of resources. With 23,798 genome-wide SNPs, a relatively high intra-specific polymorphic potential (49.5%) and broader molecular diversity (13–89%)/functional allelic diversity (18–77%) was apparent among 93 chickpea accessions, suggesting their tremendous applicability in rapid selection of desirable diverse accessions/inter-specific hybrids in chickpea crossbred varietal improvement programs. The genome-wide SNPs revealed a complex admixed domestication pattern, extensive LD estimates (0.54–0.68) and extended LD decay (400–500 kb) in a structured population inclusive of 93 accessions. These findings reflect the utility of our identified SNPs for subsequent genome-wide association study (GWAS) and selective sweep-based domestication trait dissection analysis to identify potential genomic loci (gene-associated targets) specifically

  4. Employing genome-wide SNP discovery and genotyping strategy to extrapolate the natural allelic diversity and domestication patterns in chickpea.

    PubMed

    Kujur, Alice; Bajaj, Deepak; Upadhyaya, Hari D; Das, Shouvik; Ranjan, Rajeev; Shree, Tanima; Saxena, Maneesha S; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C L L; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K; Parida, Swarup K

    2015-01-01

    The genome-wide discovery and high-throughput genotyping of SNPs in chickpea natural germplasm lines is indispensable to extrapolate their natural allelic diversity, domestication, and linkage disequilibrium (LD) patterns leading to the genetic enhancement of this vital legume crop. We discovered 44,844 high-quality SNPs by sequencing of 93 diverse cultivated desi, kabuli, and wild chickpea accessions using reference genome- and de novo-based GBS (genotyping-by-sequencing) assays that were physically mapped across eight chromosomes of desi and kabuli. Of these, 22,542 SNPs were structurally annotated in different coding and non-coding sequence components of genes. Genes with 3296 non-synonymous and 269 regulatory SNPs could functionally differentiate accessions based on their contrasting agronomic traits. A high experimental validation success rate (92%) and reproducibility (100%) along with strong sensitivity (93-96%) and specificity (99%) of GBS-based SNPs were observed. This attests to the robustness of GBS as a high-throughput assay for rapid large-scale mining and genotyping of genome-wide SNPs in chickpea with sub-optimal use of resources. With 23,798 genome-wide SNPs, a relatively high intra-specific polymorphic potential (49.5%) and broader molecular diversity (13-89%)/functional allelic diversity (18-77%) was apparent among 93 chickpea accessions, suggesting their tremendous applicability in rapid selection of desirable diverse accessions/inter-specific hybrids in chickpea crossbred varietal improvement programs. The genome-wide SNPs revealed a complex admixed domestication pattern, extensive LD estimates (0.54-0.68) and extended LD decay (400-500 kb) in a structured population inclusive of 93 accessions. These findings reflect the utility of our identified SNPs for subsequent genome-wide association study (GWAS) and selective sweep-based domestication trait dissection analysis to identify potential genomic loci (gene-associated targets) specifically regulating

  5. Uncertainties in Modelling Glacier Melt and Mass Balances: the Role of Air Temperature Extrapolation and Type of Melt Models

    NASA Astrophysics Data System (ADS)

    Pellicciotti, F.; Ragettli, S.; Carenzo, M.; Ayala, A.; McPhee, J. P.; Stoffel, M.

    2014-12-01

    While glacier responses to climate are understood in general terms and in their main trends, model-based projections are affected by the type of model used and by uncertainties in the meteorological input data, among other factors. Recent works have attempted to improve glacio-hydrological models by including neglected processes and investigating uncertainties in their outputs. In this work, we select two knowledge gaps in current modelling practices and illustrate their importance through modelling with a fully distributed mass balance model that includes some of the state-of-the-art approaches for calculating glacier ablation, accumulation, and glacier geometry changes. We use an advanced mass balance model applied to glaciers in the Andes of Chile, the Swiss Alps, and the Nepalese Himalaya to investigate two issues that are important for a sound assessment of glacier changes: 1) the use of physically based models of glacier ablation (energy balance) versus more empirical models (enhanced temperature-index approaches); 2) the importance of the correct extrapolation of air temperature forcing on glaciers and the large uncertainty in model outputs associated with it. The ablation models are calibrated with a large amount of data from in-situ campaigns, and distributed observations of air temperature are used to calculate lapse rates and calibrate a thermodynamic model of temperature distribution. We show that no final assessment can be made of which type of melt model is more appropriate or accurate for simulation of glacier ablation at the glacier scale, not even for relatively well studied glaciers. Both models perform in a similar manner at low elevations, but important differences are evident at high elevations, where lack of data prevents a final statement on which model better represents the actual ablation amounts. Accurate characterization of air temperature is important for correct simulations of glacier mass balance and volume changes. Substantial differences are
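
    To make the two modelling choices concrete, the sketch below pairs a linear lapse-rate extrapolation of air temperature with an enhanced temperature-index melt formulation; the parameter values are typical literature magnitudes used only for illustration, not the calibrated values of this study.

        # Illustrative sketch of the two ingredients discussed above: lapse-rate
        # extrapolation of air temperature to a glacier cell, and an enhanced
        # temperature-index melt model. Parameter values are hypothetical.
        def extrapolate_temperature(t_station, z_station, z_cell, lapse_rate=-0.0065):
            """Air temperature (deg C) at a glacier cell, linear lapse rate (K/m)."""
            return t_station + lapse_rate * (z_cell - z_station)

        def hourly_melt(t_air, swin, albedo, tf=0.04, srf=0.0094):
            """Enhanced temperature-index melt (mm w.e. per hour).

            tf: temperature factor (mm h-1 K-1); srf: shortwave radiation factor
            (mm h-1 per W m-2); melt occurs only for positive air temperatures.
            """
            if t_air <= 0.0:
                return 0.0
            return tf * t_air + srf * (1.0 - albedo) * swin

        t_cell = extrapolate_temperature(t_station=8.0, z_station=2500.0, z_cell=3200.0)
        print(t_cell, hourly_melt(t_cell, swin=600.0, albedo=0.35))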

  6. Measuring sperm whales from their clicks: Stability of interpulse intervals and validation that they indicate whale length

    NASA Astrophysics Data System (ADS)

    Rhinelander, Marcus Q.; Dawson, Stephen M.

    2004-04-01

    Multiple pulses can often be distinguished in the clicks of sperm whales (Physeter macrocephalus). Norris and Harvey [in Animal Orientation and Navigation, NASA SP-262 (1972), pp. 397-417] proposed that this results from reflections within the head, and thus that interpulse interval (IPI) is an indicator of head length, and by extrapolation, total length. For this idea to hold, IPIs must be stable within individuals, but differ systematically among individuals of different size. IPI stability was examined in photographically identified individuals recorded repeatedly over different dives, days, and years. IPI variation among dives in a single day and days in a single year was statistically significant, although small in magnitude (it would change total length estimates by <3%). As expected, IPIs varied significantly among individuals. Most individuals showed significant increases in IPIs over several years, suggesting growth. Mean total lengths calculated from published IPI regressions were 13.1 to 16.1 m, longer than photogrammetric estimates of the same whales (12.3 to 15.3 m). These discrepancies probably arise from the paucity of large (12-16 m) whales in data used in published regressions. A new regression is offered for this size range.
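
    A heavily hedged sketch of the measurement idea: the IPI can be estimated from the autocorrelation of a recorded click and converted to total length with a linear regression. The regression coefficients and the synthetic click below are placeholders, not the published values referred to in the abstract.

        import numpy as np

        def estimate_ipi(click, fs, min_ms=2.0, max_ms=10.0):
            """Lag (ms) of the strongest autocorrelation peak in [min_ms, max_ms]."""
            ac = np.correlate(click, click, mode="full")[len(click) - 1:]
            lo, hi = int(min_ms * 1e-3 * fs), int(max_ms * 1e-3 * fs)
            return (lo + np.argmax(ac[lo:hi])) / fs * 1e3

        def total_length_m(ipi_ms, a=4.0, b=1.5):
            """Hypothetical linear IPI-to-length regression: length = a + b * IPI."""
            return a + b * ipi_ms

        # Synthetic click with a 5 ms delayed, attenuated internal reflection, 48 kHz.
        fs = 48_000
        pulse = np.hanning(48)
        click = np.zeros(fs // 50)
        click[:48] += pulse
        click[240:288] += 0.5 * pulse
        ipi = estimate_ipi(click, fs)
        print(ipi, total_length_m(ipi))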

  7. Measuring sperm whales from their clicks: stability of interpulse intervals and validation that they indicate whale length.

    PubMed

    Rhinelander, Marcus Q; Dawson, Stephen M

    2004-04-01

    Multiple pulses can often be distinguished in the clicks of sperm whales (Physeter macrocephalus). Norris and Harvey [in Animal Orientation and Navigation, NASA SP-262 (1972), pp. 397-417] proposed that this results from reflections within the head, and thus that interpulse interval (IPI) is an indicator of head length, and by extrapolation, total length. For this idea to hold, IPIs must be stable within individuals, but differ systematically among individuals of different size. IPI stability was examined in photographically identified individuals recorded repeatedly over different dives, days, and years. IPI variation among dives in a single day and days in a single year was statistically significant, although small in magnitude (it would change total length estimates by <3%). As expected, IPIs varied significantly among individuals. Most individuals showed significant increases in IPIs over several years, suggesting growth. Mean total lengths calculated from published IPI regressions were 13.1 to 16.1 m, longer than photogrammetric estimates of the same whales (12.3 to 15.3 m). These discrepancies probably arise from the paucity of large (12-16 m) whales in data used in published regressions. A new regression is offered for this size range.

  8. The seasonal exchange of carbon dioxide between the atmosphere and the terrestrial biosphere: Extrapolation from site-specific models to regional models

    SciTech Connect

    King, A.W.; DeAngelis, D.L.; Post, W.M.

    1987-12-01

    Ecological models of the seasonal exchange of carbon dioxide (CO2) between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO2 concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. The primary disadvantage of this approach is the problem of extrapolating the site-specific models across large regions having considerable biotic, climatic, and edaphic heterogeneity. Two methods of extrapolation were tested. 142 refs., 59 figs., 47 tabs

  9. High order eigenvalues for the Helmholtz equation in complicated non-tensor domains through Richardson extrapolation of second order finite differences

    NASA Astrophysics Data System (ADS)

    Amore, Paolo; Boyd, John P.; Fernández, Francisco M.; Rösler, Boris

    2016-05-01

    We apply second order finite differences to calculate the lowest eigenvalues of the Helmholtz equation, for complicated non-tensor domains in the plane, using different grids which sample exactly the border of the domain. We show that the results obtained applying Richardson and Padé-Richardson extrapolations to a set of finite difference eigenvalues corresponding to different grids allow us to obtain extremely precise values. When possible we have assessed the precision of our extrapolations comparing them with the highly precise results obtained using the method of particular solutions. Our empirical findings suggest an asymptotic nature of the FD series. In all the cases studied, we are able to report numerical results which are more precise than those available in the literature.
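
    A minimal sketch of the extrapolation step, assuming the finite-difference eigenvalue estimates follow an even-power error expansion E(h) = E + c1*h^2 + c2*h^4 + ... on grids refined by a constant ratio; it is the standard Richardson construction, not the authors' code.

        def richardson(estimates, order=2, ratio=2.0):
            """Repeatedly eliminate the leading error term from eigenvalue estimates
            computed on grids refined by `ratio` (coarsest grid first, finest last),
            assuming an even-power error expansion starting at h**order."""
            table = list(estimates)
            p = order
            while len(table) > 1:
                factor = ratio ** p
                table = [(factor * fine - coarse) / (factor - 1.0)
                         for coarse, fine in zip(table, table[1:])]
                p += 2                       # next term in the even-power expansion
            return table[0]

        # Example: synthetic second-order data converging to 1.0.
        data = [1.0 + 0.3 * h**2 + 0.05 * h**4 for h in (0.2, 0.1, 0.05)]
        print(data, richardson(data))        # extrapolated value is ~1.0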

  10. Length and Dimensional Measurements at NIST

    PubMed Central

    Swyt, Dennis A.

    2001-01-01

    This paper discusses the past, present, and future of length and dimensional measurements at NIST. It covers the evolution of the SI unit of length through its three definitions and the evolution of NBS-NIST dimensional measurement from early linescales and gage blocks to a future of atom-based dimensional standards. Current capabilities include dimensional measurements over a range of fourteen orders of magnitude. Uncertainties of measurements on different types of material artifacts range down to 7×10(-8) m at 1 m and 8 picometers (pm) at 300 pm. Current work deals with a broad range of areas of dimensional metrology. These include: large-scale coordinate systems; complex form; microform; surface finish; two-dimensional grids; optical, scanning-electron, atomic-force, and scanning-tunneling microscopies; atomic-scale displacement; and atom-based artifacts. PMID:27500015

  11. Concession length and investment timing flexibility

    NASA Astrophysics Data System (ADS)

    D'Alpaos, Chiara; Dosi, Cesare; Moretto, Michele

    2006-02-01

    When assigning a concession contract, the regulator faces the issue of setting the concession length. Another key issue is whether or not the concessionaire should be allowed to set the timing of new investments. In this paper we investigate the impact of concession length and investment timing flexibility on the "concession value." It is generally argued that long-term contracts are privately valuable as they enable a concessionaire to increase its overall discounted returns. Moreover, the real option theory suggests that investment flexibility has an intrinsic value, as it allows concessionaires to avoid costly errors. By combining these two conventional wisdoms one may argue that long-term contracts, which allow for investment timing flexibility, should always result in higher concession values. Our result suggests that this is not always the case; that is, investment flexibility and long-term contracts do not necessarily increase the concession value.

  12. Random Test Run Length and Effectiveness

    NASA Technical Reports Server (NTRS)

    Andrews, James H.; Groce, Alex; Weston, Melissa; Xu, Ru-Gang

    2008-01-01

    A poorly understood but important factor in many applications of random testing is the selection of a maximum length for test runs. Given a limited time for testing, it is seldom clear whether executing a small number of long runs or a large number of short runs maximizes utility. It is generally expected that longer runs are more likely to expose failures -- which is certainly true with respect to runs shorter than the shortest failing trace. However, longer runs produce longer failing traces, requiring more effort from humans in debugging or more resources for automated minimization. In testing with feedback, increasing ranges for parameters may also cause the probability of failure to decrease in longer runs. We show that the choice of test length dramatically impacts the effectiveness of random testing, and that the patterns observed in simple models and predicted by analysis are useful in understanding effects observed.
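
    A toy calculation, not the paper's model: with a fixed budget of test steps and a constant, independent per-step failure probability, splitting the budget into runs of different lengths leaves the overall chance of seeing a failure unchanged; that baseline is what the effects described above (feedback, failing-trace length and minimization cost) perturb.

        def prob_at_least_one_failure(run_length, total_steps, p_step=0.0001):
            """Chance of observing at least one failing run under a fixed step budget."""
            runs = total_steps // run_length
            p_run_fails = 1.0 - (1.0 - p_step) ** run_length
            return 1.0 - (1.0 - p_run_fails) ** runs

        for run_length in (10, 100, 1000, 10000):
            print(run_length, round(prob_at_least_one_failure(run_length, 10000), 4))
        # Every split prints the same value, 1 - (1 - p_step)**total_steps; the
        # differences the paper studies appear when the per-step failure probability
        # changes along a run or when longer failing traces cost more to debug.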

  13. Distance and Cable Length Measurement System

    PubMed Central

    Hernández, Sergio Elias; Acosta, Leopoldo; Toledo, Jonay

    2009-01-01

    A simple, economic and successful design for distance and cable length detection is presented. The measurement system is based on the continuous repetition of a pulse that endlessly travels along the distance to be detected. There is a pulse repeater at both ends of the distance or cable to be measured. The endless repetition of the pulse generates a frequency that varies almost inversely with the distance to be measured. The resolution and distance or cable length range could be adjusted by varying the repetition time delay introduced at both ends and the measurement time. With this design a distance can be measured with centimeter resolution using electronic system with microsecond resolution, simplifying classical time of flight designs which require electronics with picosecond resolution. This design was also applied to position measurement. PMID:22303169
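
    An idealized sketch of the principle, assuming the pulse crosses the span once per leg and each repeater adds a fixed delay, so the repetition period is T = 2d/v + 2*t_repeater; the signal speed and delay values below are hypothetical.

        def distance_from_frequency(freq_hz, t_repeater_s, v_mps):
            """Distance (m) recovered from the measured pulse repetition frequency."""
            period = 1.0 / freq_hz
            return v_mps * (period - 2.0 * t_repeater_s) / 2.0

        # Example: copper cable (signal speed ~2e8 m/s), 1 us delay per repeater.
        v = 2.0e8
        t_rep = 1.0e-6
        true_d = 150.0                                   # metres
        f = 1.0 / (2.0 * true_d / v + 2.0 * t_rep)       # what the counter would read
        print(f, distance_from_frequency(f, t_rep, v))   # recovers ~150 m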

  14. How Heavy Is an Illusory Length?

    PubMed Central

    de Brouwer, Anouk J.; Smeets, Jeroen B. J.

    2016-01-01

    The perception of object properties, such as size and weight, can be subject to illusions. Could a visual size illusion influence perceived weight? Here, we tested whether the size-weight illusion occurs when lifting two physically identical but perceptually different objects, by using an illusion of size. Participants judged the weight and length of 11 to 17 cm brass bars with equal density to which cardboard arrowheads were attached to create a Müller–Lyer illusion. We found that these stimuli induced an illusion in which the bar that was visually perceived as being shorter was also perceived as feeling heavier. In fact, a 5-mm increase in illusory length corresponded to a decrease in illusory weight of 15 g. PMID:27708753

  15. Distribution of bubble lengths in DNA.

    PubMed

    Ares, S; Kalosakas, G

    2007-02-01

    The distribution of bubble lengths in double-stranded DNA is presented for segments of varying guanine-cytosine (GC) content, obtained with Monte Carlo simulations using the Peyrard-Bishop-Dauxois model at 310 K. An analytical description of the obtained distribution in the whole regime investigated, i.e., up to bubble widths of the order of tens of nanometers, is available. We find that the decay lengths and characteristic exponents of this distribution show two distinct regimes as a function of GC content. The observed distribution is attributed to the anharmonic interactions within base pairs. The results are discussed in the framework of the Poland-Scheraga and the Peyrard-Bishop (with linear instead of nonlinear stacking interaction) models.

  16. Plasmodium vivax genetic diversity: microsatellite length matters.

    PubMed

    Russell, Bruce; Suwanarusk, Rossarin; Lek-Uthai, Usa

    2006-09-01

    The Plasmodium vivax genome is very diverse but has a relatively low abundance of microsatellites. Leclerc et al. had shown that these di-nucleotide repeats have a low level of polymorphism, suggesting a recent bottleneck event in the evolutionary history of P. vivax. By contrast, in a recent paper, Imwong et al. show that there is a very high level of microsatellite diversity. The difference in these results is probably due to the set array lengths chosen by each group. Longer arrays are more diverse than are shorter ones because slippage mutations become exponentially more common with an increase in array length. These studies highlight the need to consider carefully the application and design of studies involving microsatellites.

  17. The persistence length of adsorbed dendronized polymers

    NASA Astrophysics Data System (ADS)

    Grebikova, Lucie; Kozhuharov, Svilen; Maroni, Plinio; Mikhaylov, Andrey; Dietler, Giovanni; Schlüter, A. Dieter; Ullner, Magnus; Borkovec, Michal

    2016-07-01

    The persistence length of cationic dendronized polymers adsorbed onto oppositely charged substrates was studied by atomic force microscopy (AFM) and quantitative image analysis. One can find that a decrease in the ionic strength leads to an increase of the persistence length, but the nature of the substrate and of the generation of the side dendrons influence the persistence length substantially. The strongest effects as the ionic strength is being changed are observed for the fourth generation polymer adsorbed on mica, which is a hydrophilic and highly charged substrate. However, the observed dependence on the ionic strength is much weaker than the one predicted by the Odijk, Skolnik, and Fixman (OSF) theory for semi-flexible chains. Low-generation polymers show a variation with the ionic strength that resembles the one observed for simple and flexible polyelectrolytes in solution. For high-generation polymers, this dependence is weaker. Similar dependencies are found for silica and gold substrates. The observed behavior is probably caused by different extents of screening of the charged groups, which is modified by the polymer generation, and to a lesser extent, the nature of the substrate. For highly ordered pyrolytic graphite (HOPG), which is a hydrophobic and weakly charged substrate, the electrostatic contribution to the persistence length is much smaller. In the latter case, we suspect that specific interactions between the polymer and the substrate also play an important role.

  18. Identification of the viscoelastic properties of soft materials at low frequency: performance, ill-conditioning and extrapolation capabilities of fractional and exponential models.

    PubMed

    Ciambella, J; Paolone, A; Vidoli, S

    2014-09-01

    We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials that are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior at higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are then rated according to the accuracy of the identification and to their robustness to extrapolation. It is shown that all kernels whose decay rate is too fast lead to poor fitting and large errors when the material behavior is extrapolated to broader frequency ranges.

  19. New technique of azimuthal ambiguity resolution and non-linear force-free extrapolation applicable to near-limb magnetic regions

    NASA Astrophysics Data System (ADS)

    Rudenko, George; Myshyakov, Ivan; Anfinogentov, Sergey

    We show that the azimuthal ambiguity in the transverse field of vector magnetograms can be satisfactorily removed, and the magnetic field extrapolated, regardless of the region's position on the solar disk. An exact correspondence is demonstrated between the calculated field and the nonpotential loop structure in a near-limb region. The new technique for removing the azimuthal ambiguity consists of the following steps: translation of the data, in the form of artificial Stokes parameters, into a working "quasi-spherical" coordinate system, with subsequent smoothing to reduce the noise component of the transverse field and an inverse transformation back to vector form; FFT extrapolation of the boundary potential field with a constant direction of the oblique derivative corresponding to the observed line-of-sight component in the "quasi-spherical" coordinate system; and modification of the Metropolis minimum-energy method to spherical geometry, with no requirement of data grid uniformity. Based on a version of the optimization method of Rudenko and Myshyakov (2009, Solar Phys. 257, 28), we use magnetograms corrected with the modified Metropolis method as boundary conditions for magnetic field extrapolation in the nonlinear force-free approximation.

  20. Asymptotic safety, emergence and minimal length

    NASA Astrophysics Data System (ADS)

    Percacci, Roberto; Vacca, Gian Paolo

    2010-12-01

    There seems to be a common prejudice that asymptotic safety is either incompatible with, or at best unrelated to, the other topics in the title. This is not the case. In fact, we show that (1) the existence of a fixed point with suitable properties is a promising way of deriving emergent properties of gravity, and (2) there is a sense in which asymptotic safety implies a minimal length. In doing so we also discuss possible signatures of asymptotic safety in scattering experiments.

  1. A NOTE ON PERPENDICULAR SCATTERING LENGTHS

    SciTech Connect

    Tautz, R. C.

    2009-10-01

    The problem of cosmic ray diffusion in magnetostatic slab turbulence is revisited. It is known that, for large timescales, the perpendicular diffusion coefficient is subdiffusive. Although, for small timescales, the field line random walk limit should apply, it is shown that the perpendicular motion is dominated by the Larmor orbit, and that no constant scattering length can be seen. It is therefore concluded that, in magnetostatic slab turbulence, perpendicular transport is completely suppressed.

  2. Slip length crossover on a graphene surface

    SciTech Connect

    Liang, Zhi; Keblinski, Pawel

    2015-04-07

    Using equilibrium and non-equilibrium molecular dynamics simulations, we study the flow of argon fluid above the critical temperature in a planar nanochannel delimited by graphene walls. We observe that, as a function of pressure, the slip length first decreases due to the decreasing mean free path of gas molecules, reaches the minimum value when the pressure is close to the critical pressure, and then increases with further increase in pressure. We demonstrate that the slip length increase at high pressures is due to the fact that the viscosity of fluid increases much faster with pressure than the friction coefficient between the fluid and the graphene. This behavior is clearly exhibited in the case of graphene due to a very smooth potential landscape originating from a very high atomic density of graphene planes. By contrast, on surfaces with lower atomic density, such as an (100) Au surface, the slip length for high fluid pressures is essentially zero, regardless of the nature of interaction between fluid and the solid wall.
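    The mechanism described above can be summarized by the standard Navier slip relation, in which the slip length is the ratio of the fluid viscosity to the fluid-wall friction coefficient; the symbols below are generic rather than taken from the paper.

    ```latex
    % Navier slip length: b increases when the viscosity \mu grows faster with
    % pressure than the interfacial friction coefficient \xi
    b = \frac{\mu}{\xi}
    ```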

  3. Genomic sorting with length-weighted reversals.

    PubMed

    Pinter, Ron Y; Skiena, Steven

    2002-01-01

    Current algorithmic studies of genome rearrangement ignore the length of reversals (or inversions); rather, they only count their number. We introduce a new cost model in which the lengths of the reversed sequences play a role, allowing more flexibility in accounting for mutation phenomena. Our focus is on sorting unsigned (unoriented) permutations by reversals; since this problem remains difficult (NP-hard) in our new model, the best we can hope for are approximation results. We propose an efficient, novel algorithm that takes (a monotonic function f of) length into account as an optimization criterion and study its properties. Our results include an upper bound of O(fn lg2n) for any additive cost measure f on the cost of sorting any n-element permutation, and a guaranteed approximation ratio of O(lg2n) times optimal for sorting a given permutation. Our work poses some interesting questions to both biologists and computer scientists and suggests some new bioinformatic insights that are currently being studied. PMID:14571379
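    As a concrete illustration of the cost model (ours, not the authors' algorithm), the sketch below applies a sequence of reversals to an unsigned permutation and charges each reversal a cost f(length), where f is any monotonic function such as f(l) = l.

    ```python
    from typing import Callable, List, Tuple

    def apply_reversals(perm: List[int],
                        reversals: List[Tuple[int, int]],
                        f: Callable[[int], float]) -> Tuple[List[int], float]:
        """Apply reversals (i, j) (inclusive indices) to perm and return the
        resulting permutation together with the length-weighted total cost,
        i.e. the sum of f(j - i + 1) over all reversals."""
        p = list(perm)
        cost = 0.0
        for i, j in reversals:
            p[i:j + 1] = reversed(p[i:j + 1])   # reverse the segment
            cost += f(j - i + 1)                # charge a function of its length
        return p, cost

    # Example: sort [3, 1, 2] with two reversals under the linear cost f(l) = l
    sorted_perm, total_cost = apply_reversals([3, 1, 2], [(0, 2), (0, 1)], f=lambda l: l)
    # sorted_perm == [1, 2, 3], total_cost == 5.0
    ```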

  4. Measuring scattering lengths of gaseous samples

    NASA Astrophysics Data System (ADS)

    Huber, M. G.; Black, T. C.; Haun, R.; Pushin, D. A.; Shahi, C. B.; Weitfeldt, F. E.

    2016-03-01

    Neutron interferometry represents one of the most precise techniques for measuring the coherent scattering lengths (bc) of particular nuclear isotopes. Currently, bc for helium-4 is known only to 1% relative uncertainty, a factor of ten larger than that achieved in precision measurements of other light isotopes. Scattering lengths are measured using a neutron interferometer by comparing the phase shift a neutron acquires as it passes through a gaseous sample relative to that of a neutron passing through vacuum. The density of the gas is determined by continuous monitoring of the sample's temperature and pressure. Challenges for these types of experiments include achieving the necessary long-term phase stability and accurately determining the phase shift caused by the aluminum cell used to hold the gas, a phase shift many times greater than that of the sample. The present status of the effort to measure the n-4He scattering length at the NIST Center for Neutron Research will be given. Financial support provided by the NSERC `Create' and `Discovery' programs, CERC, NIST and NSF Grant PHY-1205342.
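    As background (a standard relation rather than a result quoted from this abstract), the phase shift a neutron acquires in traversing a gas sample of path length D, number density N, and coherent scattering length b_c at wavelength λ is commonly written as:

    ```latex
    \Delta\phi = -\,\lambda\, N\, b_c\, D
    ```

    which is why a precise determination of b_c requires continuous monitoring of the gas density (via temperature and pressure) and careful subtraction of the cell's own phase shift.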

  5. Topographical length scales of hierarchical superhydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Dhillon, P. K.; Brown, P. S.; Bain, C. D.; Badyal, J. P. S.; Sarkar, S.

    2014-10-01

    The morphology of hydrophobic CF4 plasma fluorinated polybutadiene surfaces has been characterised using atomic force microscopy (AFM). Judicious choice of the plasma power and exposure duration leads to formation of three different surface morphologies (Micro, Nano, and Micro + Nano). Scaling theory analysis shows that for all three surface topographies, there is an initial increase in roughness with length scale followed by a levelling-off to a saturation level. At length scales around 500 nm, it is found that the roughness is very similar for all three types of surfaces, and the saturation roughness value for the Micro + Nano morphology is found to be intermediate between those for the Micro and Nano surfaces. Fast Fourier Transform (FFT) analysis has shown that the Micro + Nano topography comprises a hierarchical superposition of Micro and Nano morphologies. Furthermore, the Micro + Nano surfaces display the highest local roughness (roughness exponent α = 0.42 for length scales shorter than ∼500 nm), which helps to explain their superhydrophobic behaviour (large water contact angle (>170°) and low hysteresis (<1°)).

  6. Step length estimation using handheld inertial sensors.

    PubMed

    Renaudin, Valérie; Susi, Melania; Lachapelle, Gérard

    2012-01-01

    In this paper, a novel step length model using a handheld Micro Electrical Mechanical System (MEMS) sensor is presented. It combines the user's step frequency and height with a set of three parameters for estimating step length. The model has been developed and trained using 12 different subjects: six men and six women. For reliable estimation of the step frequency with a handheld device, the frequency content of the handheld sensor's signal is extracted by applying the Short Time Fourier Transform (STFT) independently from the step detection process. The relationship between step and hand frequencies is analyzed for different hand motions and sensor carrying modes. For this purpose, the frequency content of synchronized signals collected with two sensors placed in the hand and on the foot of a pedestrian has been extracted. Performance of the proposed step length model is assessed with several field tests involving 10 test subjects different from the above 12. The percentages of error over the travelled distance using universal parameters and a set of parameters calibrated for each subject are compared. The fitted solutions show an error between 2.5 and 5% of the travelled distance, which is comparable with that achieved by models proposed in the literature for body-fixed sensors only.
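    The abstract does not reproduce the functional form of the model, so the sketch below only illustrates the general idea of a three-parameter step-length estimate built from step frequency and user height; the linear form and the parameter names a, b, c are illustrative assumptions, not the published model.

    ```python
    def step_length_m(step_freq_hz: float, height_m: float,
                      a: float, b: float, c: float) -> float:
        """Hypothetical three-parameter step-length model combining the user's
        step frequency (e.g. from an STFT of handheld inertial data) and height.
        The linear-in-frequency form is an assumption made for illustration."""
        return height_m * (a * step_freq_hz + b) + c

    # "Universal" versus per-user calibrated operation simply corresponds to
    # different (a, b, c) triples passed to the same function.
    estimate = step_length_m(step_freq_hz=1.8, height_m=1.75, a=0.25, b=0.05, c=0.10)
    ```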

  7. Flaw length distribution measurement in brittle materials

    NASA Astrophysics Data System (ADS)

    Rabinovitch, A.; Zlotnikov, R.; Bahat, D.

    2000-06-01

    A technique was developed in an experiment on fractured soda-lime glass which yielded a wide mist zone. This technique enabled us to measure the lengths of a large number (≈ 12 000) of secondary cracks (SC) in this zone. Statistical analysis of these lengths across the mist as a function of distance from the fracture origin revealed some unexpected results. (a) The total number of SC remains constant for ≈ 40% of the width of the mist zone and subsequently decreases monotonically down to ≈ 50% of the initial number towards the end of this zone. (b) The length distribution of SC can be interpreted as consisting of two populations: (1) A nucleated SC population which remains unchanged across the whole mist. This distribution consists of two subpopulations: (i) half Gaussian (around 84%), and (ii) decaying exponential (around 16%). (2) A rather small portion of the nucleated population of SC (≈ 9% throughout) which grows monotonically by a factor of up to 40 with distance across the mist. (c) This factor agrees well with our theoretical calculation of the growth mode, and yields a reasonable estimate of the radius of the initial critical flaw (1.85 mm).

  8. [Cytoskeletal control of cell length regulation].

    PubMed

    Kharitonova, M A; Levina, C M; Rovenskii, I A

    2002-01-01

    It was shown that mouse embryo fibroblasts and human foreskin diploid fibroblasts of the AGO 1523 line cultivated on specially prepared substrates with narrow (15 +/- 3 microns) linear adhesive strips were elongated and oriented along the strips, but the mean lengths of the fibroblasts of each type on the strips differed from those on the standard culture substrates. In contrast to the normal fibroblasts, the lengths of mouse embryonic fibroblasts with the inactivated suppressor gene Rb responsible for negative control of cell proliferation (MEF Rb-/-), ras-transformed mouse embryonic fibroblasts (MEF Rb-/-ras), or normal rat epitheliocytes of the IAR2 line significantly exceeded those of the same cells on the standard culture substrates. The results of experiments with drugs specifically affecting the cytoskeleton (colcemid and cytochalasin D) suggest that the constant mean length of normal fibroblasts is controlled by a dynamic equilibrium between two forces: the centripetal tension of contractile actin-myosin microfilaments and the centrifugal force generated by growing microtubules. This cytoskeletal mechanism is disturbed in MEF Rb-/- or MEF Rb-/-ras, probably because of an impaired actin cytoskeleton, and also in IAR2 epitheliocytes due to the different organization of the actin-myosin system in these cells as compared to that in the fibroblasts. PMID:11862697

  9. Ultrasound velocities for axial eye length measurement.

    PubMed

    Hoffer, K J

    1994-09-01

    Since 1974, I have used individual sound velocities for each eye condition encountered for axial length measurement. The calculation results in 1,555 M/sec for the average phakic eye. A slower speed of 1,549 M/sec was found for an extremely long (30 mm) eye and a higher speed of 1,561 M/sec was noted for an extremely short (20 mm) eye. This inversely proportional velocity change can best be adjusted for by measuring the phakic eye at 1,532 M/sec and correcting the result by dividing the square of the measured axial length (AL1,532)2 by the difference of the measured axial length (AL1,532) minus 0.35 mm. A velocity of 1,534 M/sec was found for all aphakic eyes regardless of their length, and correction is clinically significant. The velocity of an eye containing a poly(methyl methacrylate) intraocular lens is not different from an average phakic eye but it does magnify the effect of axial length change. I recommend measuring the pseudophakic eye at 1,532 M/sec and adding to the result (AL1,532), + 0.04 + 44% of the IOL thickness. The speed for an eye with a silicone IOL was found to be 1,476 M/sec (or AL1,532 + 0.04 - 56% of IOL thickness) and for glass, 1,549 M/sec (or AL1,532 + 0.04 + 75% of IOL thickness). A speed of 1,139 M/sec was found for a phakic eye with silicone oil filling most of the vitreous cavity and 1,052 M/sec for an aphakic eye filled with oil. For varying volumes of oil, each eye should be calculated individually. The speed was 534 M/sec for phakic eyes filled with gas. Eyes containing a silicone IOL or oil or gas will create clinically significant errors (3 to 10 diopters) if the sound velocity is not corrected. PMID:7996413
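    The corrections described above are simple arithmetic on the 1,532 m/s measurement; the sketch below implements them as stated in the abstract (phakic eyes, and pseudophakic eyes with PMMA, silicone, or glass IOLs). The constants come from the text; the function names are ours.

    ```python
    def phakic_axial_length(al_1532_mm: float) -> float:
        """Phakic eye: measure at 1,532 m/s, then correct as
        AL = AL_1532**2 / (AL_1532 - 0.35 mm)."""
        return al_1532_mm ** 2 / (al_1532_mm - 0.35)

    def pseudophakic_axial_length(al_1532_mm: float, iol_thickness_mm: float,
                                  iol_material: str = "pmma") -> float:
        """Pseudophakic eye: AL_1532 + 0.04 mm plus a material-dependent
        fraction of the IOL thickness (+44% PMMA, -56% silicone, +75% glass)."""
        fraction = {"pmma": +0.44, "silicone": -0.56, "glass": +0.75}[iol_material]
        return al_1532_mm + 0.04 + fraction * iol_thickness_mm

    # Example: a 23.5 mm phakic measurement at 1,532 m/s corrects to about 23.86 mm
    corrected = phakic_axial_length(23.5)
    ```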

  10. Transverse Acoustic Measurements of Superfluid Helium-3 at Fixed and Variable Path Lengths

    NASA Astrophysics Data System (ADS)

    Collett, Charles Alward

    This thesis describes experiments using transverse zero sound in pure superfluid 3He to probe excitations with energies below the superfluid gap. One main focus is on a collective mode of the order parameter, the imaginary squashing mode. The splitting of this mode in a magnetic field causes acoustic birefringence, which rotates the polarization axis of the transverse sound wave. We have made precise measurements of this rotation in magnetic fields up to 0.11 T and observed the onset of nonlinear field dependence. Our measurements of the linear field dependence disagree with theoretical predictions, which led us to discover that the theory only applies when the sound frequency is close to the mode frequency, a condition not satisfied in our experiments. We extrapolated our data to the region of validity of the theory, and measured attractive sub-dominant f-wave pairing interactions. The other main focus is the construction of an experimental apparatus to enable in situ variation of the acoustic cavity spacing at low temperatures. Recent measurements have indicated a coupling between the transverse sound attenuation and surface Andreev bound states, which are predicted to be Majorana states in the specular scattering limit. A variable path length sample cell would enable measurements of the absolute attenuation of transverse sound, as well as allow for the separation of bulk effects from surface effects. It would also enable experiments looking for transverse zero sound in the normal state of 3He, which is predicted to have a high attenuation length requiring a micron-scale acoustic cavity. We have designed and implemented a diaphragm-based variable path length cell, and discuss our current progress and future prospects.

  11. Atmospheric effects on the performance and threshold extrapolation of multi-temporal Landsat derived dNBR for burn severity assessment

    NASA Astrophysics Data System (ADS)

    Fang, Lei; Yang, Jian

    2014-12-01

    The Landsat derived differenced Normalized Burn Ratio (dNBR) is widely used for burn severity assessments. Studies of regional wildfire trends in response to climate change require consistency in dNBR mapping across multiple image dates, which may vary in atmospheric condition. Conversion of continuous dNBR images into categorical burn severity maps often requires extrapolation of dNBR thresholds from present fires for which field severity measurements such as Composite Burn Index (CBI) data are available, to historical fires for which CBI data are typically unavailable. Although differential atmospheric effects between image collection dates could lead to biased estimates of historical burn severity patterns, little is known concerning the influence of atmospheric effects on dNBR performance and threshold extrapolation. In this study, we compared the performance of dNBR calculated from six atmospheric correction methods using an optimality approach. The six correction methods included one partial (Top of atmosphere reflectance, TOA), two absolute, and three relative methods. We assessed how the correction methods affected the CBI-dNBR correlation and burn severity mapping in a Chinese boreal forest fire which occurred in 2010. The dNBR thresholds of the 2010 fire for each of the correction methods were then extrapolated to classify a historical fire from 2000. Classification accuracies of threshold extrapolations were assessed based on Cohen's Kappa analysis with 73 field-based validation plots. Our study found most correction methods improved mean dNBR optimality of the two fires. The relative correction methods generated 32% higher optimality than both TOA and absolute correction methods. All the correction methods yielded high CBI-dNBR correlations (mean R2 = 0.847) but distinctly different dNBR thresholds for severity classification of 2010 fire. Absolute correction methods could substantially increase optimality score, but were insufficient to provide a
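    For readers unfamiliar with the index, NBR and dNBR are computed from near-infrared and shortwave-infrared reflectances; the formulas below are the standard definitions rather than anything specific to this study (band numbers are omitted because they depend on the Landsat sensor).

    ```latex
    NBR = \frac{\rho_{NIR} - \rho_{SWIR}}{\rho_{NIR} + \rho_{SWIR}},
    \qquad
    dNBR = NBR_{\mathrm{prefire}} - NBR_{\mathrm{postfire}}
    ```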

  12. Extrapolating surface structures to depth in transpressional systems: the role of rheology and convergence angle deduced from analogue experiments

    NASA Astrophysics Data System (ADS)

    Hsieh, Shang Yu; Neubauer, Franz; Cloetingh, Sierd; Willingshofer, Ernst; Sokoutis, Dimitrios

    2014-05-01

    The internal structure of major strike-slip faults is still poorly understood, particularly how the deep structure could be inferred from its surface expression (Molnar and Dayem, 2011 and references therein). Previous analogue experiments suggest that the convergence angle is the most influential factor (Leever et al., 2011). Further analogue modeling may allow a better understanding of how to extrapolate surface structures to the subsurface geometry of strike-slip faults. Various scenarios of analogue experiments were designed to represent strike-slip faults from different geological settings in nature. Key parameters investigated in this study include: (a) the angle of convergence, (b) the thickness of the brittle layer, (c) the influence of a rheologically weak layer within the crust, and (d) the influence of a thick and rheologically weak layer at the base of the crust. The latter is aimed at simulating the effect of a hot metamorphic core complex or an alignment of uprising plutons bordered by a transtensional/transpressional strike-slip fault. The experiments aim to explain first-order structures along major transcurrent strike-slip faults such as the Altyn, Kunlun, San Andreas and Greendale (Darfield earthquake 2010) faults. The preliminary results show that the convergence angle significantly influences the overall geometry of the transpressive system, with greater convergence angles resulting in wider fault zones and higher elevation. Different positions, densities and viscosities of weak rheological layers not only have different surface expressions but also affect the fault geometry in the subsurface. For instance, rheologically weak material in the bottom layer results in stretching when the experiment reaches a certain displacement and in the buildup of a less segmented, wide positive flower structure. At the surface, a wide fault valley in the middle of the fault zone is the reflection of stretching along the velocity discontinuity at depth. In models with a

  13. Neutron chain length distributions in subcritical systems

    SciTech Connect

    Nolen, S.D.; Spriggs, G.

    1999-09-27

    In this paper, the authors present the results of the chain-length distribution as a function of k in subcritical systems. These results were obtained from a point Monte Carlo code and a three-dimensional Monte Carlo code, MC++. Based on these results, they then attempt to explain why several of the common neutron noise techniques, such as the Rossi-α and Feynman's variance-to-mean techniques, are difficult to perform in highly subcritical systems using low-efficiency detectors.

  14. Slip length measurement of gas flow.

    PubMed

    Maali, Abdelhamid; Colin, Stéphane; Bhushan, Bharat

    2016-09-16

    In this paper, we present a review of the most important techniques used to measure the slip length of gas flow on isothermal surfaces. First, we present the famous Millikan experiment and then the rotating cylinder and spinning rotor gauge methods. Then, we describe the gas flow rate experiment, which is the most widely used technique to probe a confined gas and measure the slip. Finally, we present a promising technique using an atomic force microscope introduced recently to study the behavior of nanoscale confined gas. PMID:27505860

  15. Long Length Contaminated Equipment Maintenance Plan

    SciTech Connect

    ESVELT, C.A.

    2000-02-01

    The purpose of this document is to provide the maintenance requirements of the Long Length Contaminated Equipment (LLCE) trailers and provide a basis for the maintenance frequencies selected. This document is applicable to the LLCE Receiver trailer and Transport trailer assembled by Mobilized Systems Inc. (MSI). Equipment used in conjunction with, or in support of, these trailers is not included. This document does not provide the maintenance requirements for checkout and startup of the equipment following the extended lay-up status which began in the mid 1990s. These requirements will be specified in other documentation.

  16. Slip length measurement of gas flow

    NASA Astrophysics Data System (ADS)

    Maali, Abdelhamid; Colin, Stéphane; Bhushan, Bharat

    2016-09-01

    In this paper, we present a review of the most important techniques used to measure the slip length of gas flow on isothermal surfaces. First, we present the famous Millikan experiment and then the rotating cylinder and spinning rotor gauge methods. Then, we describe the gas flow rate experiment, which is the most widely used technique to probe a confined gas and measure the slip. Finally, we present a promising technique using an atomic force microscope introduced recently to study the behavior of nanoscale confined gas.

  17. Slip length measurement of gas flow.

    PubMed

    Maali, Abdelhamid; Colin, Stéphane; Bhushan, Bharat

    2016-09-16

    In this paper, we present a review of the most important techniques used to measure the slip length of gas flow on isothermal surfaces. First, we present the famous Millikan experiment and then the rotating cylinder and spinning rotor gauge methods. Then, we describe the gas flow rate experiment, which is the most widely used technique to probe a confined gas and measure the slip. Finally, we present a promising technique using an atomic force microscope introduced recently to study the behavior of nanoscale confined gas.

  18. Length Scales in Bayesian Automatic Adaptive Quadrature

    NASA Astrophysics Data System (ADS)

    Adam, Gh.; Adam, S.

    2016-02-01

    Two conceptual developments in the Bayesian automatic adaptive quadrature approach to the numerical solution of one-dimensional Riemann integrals [Gh. Adam, S. Adam, Springer LNCS 7125, 1-16 (2012)] are reported. First, it is shown that the numerical quadrature which avoids the overcomputing and minimizes the hidden floating point loss of precision asks for the consideration of three classes of integration domain lengths endowed with specific quadrature sums: microscopic (trapezoidal rule), mesoscopic (Simpson rule), and macroscopic (quadrature sums of high algebraic degrees of precision). Second, sensitive diagnostic tools for the Bayesian inference on macroscopic ranges, coming from the use of Clenshaw-Curtis quadrature, are derived.
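    A minimal sketch of the length-class idea (ours, with illustrative thresholds): the quadrature sum is chosen according to the length of the integration domain, from the trapezoidal rule on microscopic ranges up to a high-degree rule on macroscopic ranges.

    ```python
    import numpy as np

    def quadrature_by_length(f, a: float, b: float,
                             micro: float = 1e-6, meso: float = 1e-2) -> float:
        """Select the quadrature sum by domain length: trapezoidal rule on
        'microscopic' ranges, Simpson's rule on 'mesoscopic' ranges, and a
        high-degree Gauss-Legendre sum on 'macroscopic' ranges. The thresholds
        `micro` and `meso` are placeholders, not the paper's actual cutoffs."""
        length = b - a
        if length <= micro:                      # microscopic: trapezoidal rule
            return 0.5 * length * (f(a) + f(b))
        if length <= meso:                       # mesoscopic: Simpson rule
            return length / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))
        # macroscopic: quadrature sum of high algebraic degree of precision
        nodes, weights = np.polynomial.legendre.leggauss(10)
        x = 0.5 * (b - a) * nodes + 0.5 * (a + b)
        return 0.5 * (b - a) * float(np.sum(weights * f(x)))

    # Example (f must accept numpy arrays for the macroscopic branch)
    value = quadrature_by_length(np.sin, 0.0, np.pi)   # close to 2.0
    ```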

  19. Directly Addressable Variable-Length Codes

    NASA Astrophysics Data System (ADS)

    Brisaboa, Nieves R.; Ladra, Susana; Navarro, Gonzalo

    We introduce a symbol reordering technique that implicitly synchronizes variable-length codes, such that it is possible to directly access the i-th codeword without need of any sampling method. The technique is practical and has many applications to the representation of ordered sets, sparse bitmaps, partial sums, and compressed data structures for suffix trees, arrays, and inverted indexes, to name just a few. We show experimentally that the technique offers a competitive alternative to other data structures that handle this problem.
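    A toy sketch (ours, not the authors' implementation) of the reordering idea behind directly addressable codes: each value is split into fixed-width chunks, the chunks are regrouped by level, and a per-level continuation bitmap lets access(i) decode the i-th codeword directly, without sampling. Real implementations use compressed bitmaps with o(n) rank structures; plain Python lists with precomputed ranks are used here for clarity.

    ```python
    from typing import List

    class DirectAccessCodes:
        def __init__(self, values: List[int], chunk_bits: int = 2):
            self.b = chunk_bits
            self.levels = []        # per level: (chunks, continuation bitmap)
            self.ranks = []         # per level: prefix counts of 1-bits
            pending = list(values)
            while pending:
                chunks, bits, nxt = [], [], []
                for v in pending:
                    chunks.append(v & ((1 << self.b) - 1))   # lowest chunk
                    rest = v >> self.b
                    bits.append(1 if rest else 0)            # more chunks follow?
                    if rest:
                        nxt.append(rest)
                self.levels.append((chunks, bits))
                prefix, acc = [0], 0
                for bit in bits:
                    acc += bit
                    prefix.append(acc)
                self.ranks.append(prefix)                    # rank1(bits, pos)
                pending = nxt

        def access(self, i: int) -> int:
            """Decode the i-th value by walking the levels via rank."""
            value, shift, pos = 0, 0, i
            for level, (chunks, bits) in enumerate(self.levels):
                value |= chunks[pos] << shift
                shift += self.b
                if not bits[pos]:
                    return value
                pos = self.ranks[level][pos]   # position in the next level
            return value

    # Example: direct access to the 3rd codeword without decoding the others
    dac = DirectAccessCodes([5, 1, 12, 3], chunk_bits=2)
    assert dac.access(2) == 12
    ```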

  20. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  1. 7 CFR 51.577 - Average midrib length.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Average midrib length. 51.577 Section 51.577... STANDARDS) United States Standards for Celery Definitions § 51.577 Average midrib length. Average midrib length means the average length of all the branches in the outer whorl measured from the point...

  2. The length distribution of frangible biofilaments

    NASA Astrophysics Data System (ADS)

    Michaels, Thomas C. T.; Yde, Pernille; Willis, Julian C. W.; Jensen, Mogens H.; Otzen, Daniel; Dobson, Christopher M.; Buell, Alexander K.; Knowles, Tuomas P. J.

    2015-10-01

    A number of different proteins possess the ability to polymerize into filamentous structures. Certain classes of such assemblies can have key functional roles in the cell, such as providing the structural basis for the cytoskeleton in the case of actin and tubulin, while others are implicated in the development of many pathological conditions, including Alzheimer's and Parkinson's diseases. In general, the fragmentation of such structures changes the total number of filament ends, which act as growth sites, and hence is a key feature of the dynamics of filamentous growth phenomena. In this paper, we present an analytical study of the master equation of breakable filament assembly and derive closed-form expressions for the time evolution of the filament length distribution for both open and closed systems with infinite and finite monomer supply, respectively. We use this theoretical framework to analyse experimental data for length distributions of insulin amyloid fibrils and show that our theory allows insights into the microscopic mechanisms of biofilament assembly to be obtained beyond those available from the conventional analysis of filament mass only.
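    For context, a frequently used form of the master equation for breakable (frangible) filament assembly, with elongation at both filament ends at rate constant k_+ and fragmentation at any of the j − 1 internal bonds at rate constant k_−, is reproduced below. This is the generic textbook form, written in our notation, not a formula copied from the paper; f(t, j) is the concentration of filaments of length j and m(t) the free monomer concentration.

    ```latex
    \frac{\partial f(t,j)}{\partial t}
      = 2 k_{+}\, m(t)\,\bigl[f(t,j-1) - f(t,j)\bigr]
      \;-\; k_{-}\,(j-1)\, f(t,j)
      \;+\; 2 k_{-} \sum_{i=j+1}^{\infty} f(t,i)
    ```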

  3. The length-scaling properties of topography

    NASA Technical Reports Server (NTRS)

    Weissel, Jeffrey K.; Pratson, Lincoln F.; Malinverno, Alberto

    1994-01-01

    The scaling properties of synthetic topographic surfaces and digital elevation models (DEMs) of topography are examined by analyzing their 'structure functions,' i.e., the qth-order moments of the absolute elevation differences: $\delta h_q(l) = E\left(|h(x + l) - h(x)|^q\right)$. We find that the relation $\delta h_1(l) \approx c\,l^H$ describes well the scaling behavior of natural topographic surfaces, as represented by DEMs gridded at 3 arc sec. Average values of the scaling exponent H between approximately 0.5 and 0.7 characterize DEMs from Ethiopia, Saudi Arabia, and Somalia over a 3-orders-of-magnitude range in length scale l (approximately 0.1-150 km). Differences in apparent topographic roughness among the three areas most likely reflect differences in the amplitude factor c. Separate determination of scaling properties in the x and y coordinate directions allows us to assess whether scaling exponents are azimuthally dependent (anisotropic) or whether they are isotropic while the surface itself is anisotropic over a restricted range of length scale. We explore ways to determine whether topographic surfaces are characterized by simple or multiscaling properties.
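    A small sketch of the structure-function computation for a one-dimensional elevation profile, matching the definition above; the lag handling and the log-log fit used to estimate H are our own choices.

    ```python
    import numpy as np

    def structure_function(h: np.ndarray, lags: np.ndarray, q: float = 1.0) -> np.ndarray:
        """delta_h_q(l) = mean(|h(x + l) - h(x)|**q) for a 1-D profile,
        with the lags l given in grid points."""
        return np.array([np.mean(np.abs(h[lag:] - h[:-lag]) ** q) for lag in lags])

    def scaling_exponent(h: np.ndarray, lags: np.ndarray) -> float:
        """Estimate H in delta_h_1(l) ~ c * l**H by a log-log least-squares fit."""
        s1 = structure_function(h, lags, q=1.0)
        slope, _ = np.polyfit(np.log(lags), np.log(s1), 1)
        return slope

    # Example on synthetic data: a random-walk profile has H close to 0.5
    profile = np.cumsum(np.random.default_rng(0).normal(size=4096))
    H_est = scaling_exponent(profile, np.arange(1, 200))
    ```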

  4. Cellular Mechanisms of Ciliary Length Control.

    PubMed

    Keeling, Jacob; Tsiokas, Leonidas; Maskey, Dipak

    2016-01-01

    Cilia and flagella are evolutionarily conserved, membrane-bound, microtubule-based organelles on the surface of most eukaryotic cells. They play important roles in coordinating a variety of signaling pathways during growth, development, cell mobility, and tissue homeostasis. Defects in ciliary structure or function are associated with multiple human disorders called ciliopathies. These diseases affect diverse tissues, including, but not limited to, the eyes, kidneys, brain, and lungs. Many processes must be coordinated simultaneously in order to initiate ciliogenesis. These include cell cycle progression, vesicular trafficking, and axonemal extension. Centrioles play a central role in both cell cycle progression and ciliogenesis, making the transition between basal bodies and mitotic spindle organizers integral to both processes. The maturation of centrioles involves a functional shift from cell division toward cilium nucleation, which takes place concurrently with their migration to and fusion with the plasma membrane. Several proteinaceous structures of the distal appendages in mother centrioles are required for this docking process. Ciliary assembly and maintenance require a precise balance between two indispensable processes, assembly and disassembly. The interplay between them determines the length of the resulting cilia. These processes require a highly conserved transport system to deliver the necessary substances to the tips of the cilia and to recycle ciliary turnover products to the base, using a microtubule-based intraflagellar transport (IFT) system. In this review, we discuss the stages of ciliogenesis as well as the mechanisms controlling the lengths of assembled cilia. PMID:26840332

  5. Cellular Mechanisms of Ciliary Length Control

    PubMed Central

    Keeling, Jacob; Tsiokas, Leonidas; Maskey, Dipak

    2016-01-01

    Cilia and flagella are evolutionarily conserved, membrane-bound, microtubule-based organelles on the surface of most eukaryotic cells. They play important roles in coordinating a variety of signaling pathways during growth, development, cell mobility, and tissue homeostasis. Defects in ciliary structure or function are associated with multiple human disorders called ciliopathies. These diseases affect diverse tissues, including, but not limited to, the eyes, kidneys, brain, and lungs. Many processes must be coordinated simultaneously in order to initiate ciliogenesis. These include cell cycle progression, vesicular trafficking, and axonemal extension. Centrioles play a central role in both cell cycle progression and ciliogenesis, making the transition between basal bodies and mitotic spindle organizers integral to both processes. The maturation of centrioles involves a functional shift from cell division toward cilium nucleation, which takes place concurrently with their migration to and fusion with the plasma membrane. Several proteinaceous structures of the distal appendages in mother centrioles are required for this docking process. Ciliary assembly and maintenance require a precise balance between two indispensable processes, assembly and disassembly. The interplay between them determines the length of the resulting cilia. These processes require a highly conserved transport system to deliver the necessary substances to the tips of the cilia and to recycle ciliary turnover products to the base, using a microtubule-based intraflagellar transport (IFT) system. In this review, we discuss the stages of ciliogenesis as well as the mechanisms controlling the lengths of assembled cilia. PMID:26840332

  6. The length distribution of frangible biofilaments.

    PubMed

    Michaels, Thomas C T; Yde, Pernille; Willis, Julian C W; Jensen, Mogens H; Otzen, Daniel; Dobson, Christopher M; Buell, Alexander K; Knowles, Tuomas P J

    2015-10-28

    A number of different proteins possess the ability to polymerize into filamentous structures. Certain classes of such assemblies can have key functional roles in the cell, such as providing the structural basis for the cytoskeleton in the case of actin and tubulin, while others are implicated in the development of many pathological conditions, including Alzheimer's and Parkinson's diseases. In general, the fragmentation of such structures changes the total number of filament ends, which act as growth sites, and hence is a key feature of the dynamics of filamentous growth phenomena. In this paper, we present an analytical study of the master equation of breakable filament assembly and derive closed-form expressions for the time evolution of the filament length distribution for both open and closed systems with infinite and finite monomer supply, respectively. We use this theoretical framework to analyse experimental data for length distributions of insulin amyloid fibrils and show that our theory allows insights into the microscopic mechanisms of biofilament assembly to be obtained beyond those available from the conventional analysis of filament mass only.

  7. Entrance Length and Turbulence Transition in Microflows

    NASA Astrophysics Data System (ADS)

    Wereley, Steve; Lee, Sangyoup; Gui, Lichuan

    2002-11-01

    Since microfabrication techniques are typically planar processes, microchannel flows typically exhibit significant flow predevelopment because the upstream reservoir has the same height as the microchannel. The main concerns of the current study are to determine the effects of typical microchannel geometry on the velocity entrance length in the laminar flow regime and to establish the turbulence transitional Reynolds number range using the details of the velocity profile rather than global measurements of pressure drop. A rectangular micro-channel of aspect ratio 2.65 and hydraulic diameter 380 um was used in this study. Micro particle image velocimetry measurements were performed to measure the velocity profiles. The entrance length is found to be reduced by about 45%, and the transition to turbulence occurs at Reynolds numbers between 2100 and 2900, comparable to macroscale observations. Finally, a new technique is proposed to measure the turbulence intensity of a flow directly from the PIV correlation function peak width. This new technique provides results comparable to traditional means of calculating turbulence intensity but is particularly useful in measuring microflows where the seeding density can be very low.

  8. Venus Length-of-Day Variations

    NASA Astrophysics Data System (ADS)

    Margot, Jean-Luc; Campbell, D. B.; Peale, S. J.; Ghigo, F. D.

    2012-10-01

    Since 2004 we have been monitoring the instantaneous spin state of Venus with the goals of measuring the precession of the rotation axis and of quantifying daily, seasonal, and secular changes in length-of-day. We use the Goldstone and Green Bank Telescopes for these observations. The spin period of Venus is thought to be set by a delicate balance between solid-body tides and atmospheric torques that must vary as insolation and orbital parameters change [Bills 2005]. Our measurements to date reveal length-of-day (LOD) variations of 50 ppm. None of the models can be reconciled with the Magellan 500-day-average spin period of 243.0185 +/- 0.0001 days [Davies et al 1992], nor with a 16-year-average estimate of 243.023 +/- 0.002 days [Mueller et al 2012], nor with any other constant spin period. With our nominal solution we can rule out a constant spin period with over 99.9% confidence. When allowances are made for uncertainties in spin axis orientation and instantaneous spin measurement epochs, the confidence is reduced but remains higher than 99%. We attribute the LOD variations primarily to angular momentum exchange between the atmosphere and solid planet. Because there are so few constraints on the internal dynamical structure of the Venusian atmosphere, a time history of atmospheric angular momentum changes can be used to address questions related to the dynamics of the atmosphere, including its super-rotation, and climatic variations.

  9. The probabilistic distribution of metal whisker lengths

    SciTech Connect

    Niraula, D.; Karpov, V. G.

    2015-11-28

    Significant reliability concerns in multiple industries are related to metal whiskers, which are random high-aspect-ratio filaments growing on metal surfaces and causing shorts in electronic packages. We derive a closed-form expression for the probabilistic distribution of metal whisker lengths. Our consideration is based on the electrostatic theory of metal whiskers, according to which whisker growth is interrupted when its tip enters a random local “dead region” of weak electric field. Here, we use the approximation neglecting the possibility of thermally activated escapes from the “dead regions,” which is later justified. We predict a one-parameter distribution with a peak at a length that depends on the metal surface charge density and surface tension. In the intermediate range, it fits well the log-normal distribution used in experimental studies, although it decays more rapidly in the range of very long whiskers. In addition, our theory quantitatively explains why the typical whisker concentration is much lower than that of surface grains. Finally, it predicts the stop-and-go phenomenon for the growth of some whiskers.

  10. Residential NOx exposure in a 35-year cohort study. Changes of exposure, and comparison with back extrapolation for historical exposure assessment

    NASA Astrophysics Data System (ADS)

    Molnár, Peter; Stockfelt, Leo; Barregard, Lars; Sallsten, Gerd

    2015-08-01

    In this study we aimed to investigate historical NOx estimates with respect to time trends, spatial distributions, exposure contrasts, the effects of relocation patterns, and the performance of back extrapolation. Historical levels of nitrogen oxides (NOx) from 1975 to 2009 were modeled with high resolution in Gothenburg, Sweden, using historical emission databases and Gaussian models. Yearly historical addresses were collected and geocoded from a population-based cohort of Swedish men from 1973 to 2007, with a total of 160,568 address years. Of these addresses, 146,675 (91%) were within our modeled area and assigned a NOx level. NOx levels decreased substantially from a maximum median level of 43.9 μg m-3 in 1983 to 16.6 μg m-3 in 2007, mainly due to lower emissions per vehicle km. There was considerable variability in concentrations within the cohort, with a ratio of 3.5 between the means in the highest and lowest quartiles. About 50% of the participants changed residential address during the study, but the mean NOx exposure was not affected. About half of these moves resulted in a positive or negative change in NOx exposure of >10 μg m-3, and thus changed the exposure substantially. Back extrapolation of NOx levels using the time trend of a background monitoring station worked well for 5-7 years back in time, but extrapolation more than ten years back in time resulted in substantial scattering compared to the "true" dispersion models for the corresponding years. These findings are important to take into account since accurate exposure estimates are essential in long-term epidemiological studies of health effects of air pollution.
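    The back-extrapolation approach referred to above (scaling a dispersion-model estimate for a reference year by the time trend observed at a background monitoring station) can be written, in our own notation, as:

    ```latex
    \widehat{NO_x}(address,\, t_{hist}) \;\approx\;
    NO_x^{model}(address,\, t_{ref}) \times
    \frac{C_{background}(t_{hist})}{C_{background}(t_{ref})}
    ```

    The finding above is that this ratio scaling tracks the "true" dispersion-model results for roughly 5-7 years, but degrades when extrapolated more than a decade back.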

  11. Molecular target sequence similarity as a basis for species extrapolation to assess the ecological risk of chemicals with known modes of action.

    PubMed

    Lalone, Carlie A; Villeneuve, Daniel L; Burgoon, Lyle D; Russom, Christine L; Helgen, Henry W; Berninger, Jason P; Tietge, Joseph E; Severson, Megan N; Cavallin, Jenna E; Ankley, Gerald T

    2013-11-15

    It is not feasible to conduct toxicity tests with all species that may be impacted by chemical exposures. Therefore, cross-species extrapolation is fundamental to environmental risk assessment. Recognition of the impracticality of generating empirical, whole organism, toxicity data for the extensive universe of chemicals in commerce has been an impetus driving the field of predictive toxicology. We describe a strategy that leverages expanding databases of molecular sequence information together with identification of specific molecular chemical targets whose perturbation can lead to adverse outcomes to support predictive species extrapolation. This approach can be used to predict which species may be more (or less) susceptible to effects following exposure to chemicals with known modes of action (e.g., pharmaceuticals, pesticides). Primary amino acid sequence alignments are combined with more detailed analyses of conserved functional domains to derive the predictions. This methodology employs bioinformatic approaches to automate, collate, and calculate quantitative metrics associated with cross-species sequence similarity of key molecular initiating events (MIEs). Case examples focused on the actions of (a) 17α-ethinyl estradiol on the human (Homo sapiens) estrogen receptor; (b) permethrin on the mosquito (Aedes aegypti) voltage-gated para-like sodium channel; and (c) 17β-trenbolone on the bovine (Bos taurus) androgen receptor are presented to demonstrate the potential predictive utility of this species extrapolation strategy. The examples compare empirical toxicity data to cross-species predictions of intrinsic susceptibility based on analyses of sequence similarity relevant to the MIEs of defined adverse outcome pathways. Through further refinement, and definition of appropriate domains of applicability, we envision practical and routine utility for the molecular target similarity-based predictive method in chemical risk assessment, particularly where testing

  12. Effects of inelastic radiative processes on the determination of water-leaving spectral radiance from extrapolation of underwater near-surface measurements.

    PubMed

    Li, Linhai; Stramski, Dariusz; Reynolds, Rick A

    2016-09-01

    Extrapolation of near-surface underwater measurements is the most common method to estimate the water-leaving spectral radiance, Lw(λ) (where λ is the light wavelength in vacuum), and remote-sensing reflectance, Rrs(λ), for validation and vicarious calibration of satellite sensors, as well as for ocean color algorithm development. However, uncertainties in Lw(λ) arising from the extrapolation process have not been investigated in detail with regards to the potential influence of inelastic radiative processes, such as Raman scattering by water molecules and fluorescence by colored dissolved organic matter and chlorophyll-a. Using radiative transfer simulations, we examine high-depth resolution vertical profiles of the upwelling radiance, Lu(λ), and its diffuse attenuation coefficient, KLu (λ), within the top 10 m of the ocean surface layer and assess the uncertainties in extrapolated values of Lw(λ). The inelastic processes generally increase Lu and decrease KLu in the red and near-infrared (NIR) portion of the spectrum. Unlike KLu in the blue and green spectral bands, KLu in the red and NIR is strongly variable within the near-surface layer even in a perfectly homogeneous water column. The assumption of a constant KLu with depth that is typically employed in the extrapolation method can lead to significant errors in the estimate of Lw. These errors approach ∼100% at 900 nm, and the desired threshold of 5% accuracy or less cannot be achieved at wavelengths greater than 650 nm for underwater radiometric systems that typically take measurements at depths below 1 m. These errors can be reduced by measuring Lu within a much shallower surface layer of tens of centimeters thick or even less at near-infrared wavelengths longer than 800 nm, which suggests a
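    The extrapolation protocol being assessed follows the usual scheme: fit a (nominally constant) K_Lu over the near-surface layer, extrapolate Lu to just below the surface, and propagate it across the air-sea interface. In generic form, with t the radiance transmittance of the interface and n the refractive index of seawater:

    ```latex
    L_u(0^-,\lambda) = L_u(z,\lambda)\, e^{\,K_{Lu}(\lambda)\, z},
    \qquad
    L_w(\lambda) = \frac{t}{n^{2}}\, L_u(0^-,\lambda)
    ```

    The errors quantified above arise because inelastic processes make K_Lu depth-dependent in the red and near-infrared, so the constant-K_Lu assumption fails for measurements taken below about 1 m.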

  13. Critical length scale controls adhesive wear mechanisms.

    PubMed

    Aghababaei, Ramin; Warner, Derek H; Molinari, Jean-Francois

    2016-01-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270

  14. Optimal Length of Low Reynolds Number Nanopropellers.

    PubMed

    Walker, D; Kübler, M; Morozov, K I; Fischer, P; Leshansky, A M

    2015-07-01

    Locomotion in fluids at the nanoscale is dominated by viscous drag. One efficient propulsion scheme is to use a weak rotating magnetic field that drives a chiral object. From bacterial flagella to artificial drills, the corkscrew is a universally useful chiral shape for propulsion in viscous environments. Externally powered magnetic micro- and nanomotors have been developed recently that allow for precise, fuel-free propulsion in complex media. Here, we combine analytical and numerical theory with experiments on nanostructured screw-propellers to show that the optimal length is surprisingly short: only about one helical turn, which is shorter than most of the structures in use to date. The results have important implications for the design of artificial actuated nano- and micropropellers and can dramatically reduce fabrication times, while ensuring optimal performance.

  15. Tolerance at arm's length: the Dutch experience.

    PubMed

    Schuijer, J

    1990-01-01

    With respect to pedophilia and the age of consent, the Netherlands warrants special attention. Although pedophilia is not as widely accepted in the Netherlands as sometimes is supposed, developments in the judicial practice showed a growing reservedness. These developments are a spin-off of related developments in Dutch society. The tolerance in the Dutch society has roots that go far back in history and is also a consequence of the way this society is structured. The social changes of the sixties and seventies resulted in a "tolerance at arm's length" for pedophiles, which proved to be deceptive when the Dutch government proposed to lower the age of consent in 1985. It resulted in a vehement public outcry. The prevailing sex laws have been the prime target of protagonists of pedophile emancipation. Around 1960, organized as a group, they started to undertake several activities. In the course of their existence, they came to redefine the issue of pedophilia as one of youth emancipation.

  16. Quantum criticality with two length scales.

    PubMed

    Shao, Hui; Guo, Wenan; Sandvik, Anders W

    2016-04-01

    The theory of deconfined quantum critical (DQC) points describes phase transitions at absolute temperature T = 0 outside the standard paradigm, predicting continuous transformations between certain ordered states where conventional theory would require discontinuities. Numerous computer simulations have offered no proof of such transitions, instead finding deviations from expected scaling relations that neither were predicted by the DQC theory nor conform to standard scenarios. Here we show that this enigma can be resolved by introducing a critical scaling form with two divergent length scales. Simulations of a quantum magnet with antiferromagnetic and dimerized ground states confirm the form, proving a continuous transition with deconfined excitations and also explaining anomalous scaling at T > 0. Our findings revise prevailing paradigms for quantum criticality, with potential implications for many strongly correlated materials. PMID:26989196

  17. Quantum criticality with two length scales

    NASA Astrophysics Data System (ADS)

    Shao, Hui; Guo, Wenan; Sandvik, Anders W.

    2016-04-01

    The theory of deconfined quantum critical (DQC) points describes phase transitions at absolute temperature T = 0 outside the standard paradigm, predicting continuous transformations between certain ordered states where conventional theory would require discontinuities. Numerous computer simulations have offered no proof of such transitions, instead finding deviations from expected scaling relations that neither were predicted by the DQC theory nor conform to standard scenarios. Here we show that this enigma can be resolved by introducing a critical scaling form with two divergent length scales. Simulations of a quantum magnet with antiferromagnetic and dimerized ground states confirm the form, proving a continuous transition with deconfined excitations and also explaining anomalous scaling at T > 0. Our findings revise prevailing paradigms for quantum criticality, with potential implications for many strongly correlated materials.

  18. Quark ensembles with the infinite correlation length

    SciTech Connect

    Zinov’ev, G. M.; Molodtsov, S. V.

    2015-01-15

    A number of exactly integrable (quark) models of quantum field theory with the infinite correlation length have been considered. It has been shown that the standard vacuum quark ensemble—Dirac sea (in the case of the space-time dimension higher than three)—is unstable because of the strong degeneracy of a state, which is due to the character of the energy distribution. When the momentum cutoff parameter tends to infinity, the distribution becomes infinitely narrow, leading to large (unlimited) fluctuations. Various vacuum ensembles—Dirac sea, neutral ensemble, color superconductor, and BCS state—have been compared. In the case of the color interaction between quarks, the BCS state has been certainly chosen as the ground state of the quark ensemble.

  19. Critical length scale controls adhesive wear mechanisms

    PubMed Central

    Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois

    2016-01-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients. PMID:27264270

  20. Critical length scale controls adhesive wear mechanisms

    NASA Astrophysics Data System (ADS)

    Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois

    2016-06-01

    The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients.

  1. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  2. Box codes of lengths 48 and 72

    NASA Astrophysics Data System (ADS)

    Solomon, G.; Jin, Y.

    1993-11-01

    A self-dual code length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose Chaudhuri Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  3. Telomerase and telomere length in pulmonary fibrosis.

    PubMed

    Liu, Tianju; Ullenbruch, Matthew; Young Choi, Yoon; Yu, Hongfeng; Ding, Lin; Xaubet, Antoni; Pereda, Javier; Feghali-Bostwick, Carol A; Bitterman, Peter B; Henke, Craig A; Pardo, Annie; Selman, Moises; Phan, Sem H

    2013-08-01

    In addition to its expression in stem cells and many cancers, telomerase activity is transiently induced in murine bleomycin (BLM)-induced pulmonary fibrosis with increased levels of telomerase transcriptase (TERT) expression, which is essential for fibrosis. To extend these observations to human chronic fibrotic lung disease, we investigated the expression of telomerase activity in lung fibroblasts from patients with interstitial lung diseases (ILDs), including idiopathic pulmonary fibrosis (IPF). The results showed that telomerase activity was induced in more than 66% of IPF lung fibroblast samples, in comparison with less than 29% from control samples, some of which were obtained from lung cancer resections. Less than 4% of the human IPF lung fibroblast samples exhibited shortened telomeres, whereas less than 6% of peripheral blood leukocyte samples from patients with IPF or hypersensitivity pneumonitis demonstrated shortened telomeres. Moreover, shortened telomeres in late-generation telomerase RNA component knockout mice did not exert a significant effect on BLM-induced pulmonary fibrosis. In contrast, TERT knockout mice exhibited deficient fibrosis that was independent of telomere length. Finally, TERT expression was up-regulated by a histone deacetylase inhibitor, while the induction of TERT in lung fibroblasts was associated with the binding of acetylated histone H3K9 to the TERT promoter region. These findings indicate that significant telomerase induction was evident in fibroblasts from fibrotic murine lungs and a majority of IPF lung samples, whereas telomere shortening was not a common finding in the human blood and lung fibroblast samples. Notably, the animal studies indicated that the pathogenesis of pulmonary fibrosis was independent of telomere length.

  4. Meson-Baryon Scattering Lengths from Mixed-Action Lattice QCD

    SciTech Connect

    Beane, S; Detmold, W; Luu, T; Orginos, K; Parreno, A; Torok, A; Walker-Loud, A

    2009-06-30

    The π⁺Σ⁺, π⁺Ξ⁰, K⁺p, K⁺n, K̄⁰Σ⁺, and K̄⁰Ξ⁰ scattering lengths are calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks is used to perform the chiral extrapolations. We find no convergence for the kaon-baryon processes in the three-flavor chiral expansion. Using the two-flavor chiral expansion, we find a(π⁺Σ⁺) = -0.197 ± 0.017 fm and a(π⁺Ξ⁰) = -0.098 ± 0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.

  5. Internode length in Pisum: do the internode length genes effect growth in dark-grown plants?

    PubMed

    Reid, J B

    1983-07-01

    Internode length in light-grown peas (Pisum sativum L.) is controlled by the interaction of genes occupying at least five major loci, Le, La, Cry, Na, and Lm. The present work shows that the genes at all of the loci examined (Le, Cry, and Na) also exert an effect on internode length in plants grown in complete darkness. Preliminary results using pure lines were verified using either segregating progenies or near isogenic lines. The differences were due mainly to a change in the number of cells per internode rather than to an alteration of cell length. Since the genes occupying at least two of these loci, Le and Na, have been shown to be directly involved with gibberellin metabolism, it appears that gibberellins are not only essential for elongation in the dark but are limiting for elongation in the nana (extremely short, na), dwarf (Na le), and tall (Na Le) phenotypes. These results are supported by the large inhibitory effects of AMO 1618 treatments on stem elongation in dwarf and tall lines grown in the dark and the fact that applied gibberellic acid could overcome this inhibition and greatly promote elongation in a gibberellin-deficient na line. It is clear that the internode length genes, and in particular the alleles at the Le locus, are not acting by simply controlling the sensitivity of the plant to light. PMID:16663081

  6. Numerical analysis of scalar dissipation length-scales and their scaling properties

    NASA Astrophysics Data System (ADS)

    Vaishnavi, Pankaj; Kronenburg, Andreas

    2006-11-01

    Scalar dissipation rate, χ, is fundamental to the description of scalar-mixing in turbulent non-premixed combustion. Most contributions to the statistics for χ come from the finest turbulent mixing-scales and thus its adequate characterisation requires good resolution. Reliable χ-measurement is complicated by the trade-off between higher resolution and greater signal-to-noise ratio. Thus, the present numerical study utilises the error-free mixture fraction, Z, and fluid mechanical data from the turbulent reacting jet DNS of Pantano (2004). The aim is to quantify the resolution requirements for χ-measurement in terms of easily measurable properties of the flow like the integral-scale Reynolds number, Reδ, using spectral and spatial-filtering [cf. Barlow and Karpetis (2005)] analyses. Analysis of the 1-D cross-stream dissipation spectra enables the estimation of the dissipation length scales. It is shown that these spectrally-computed scales follow the expected Kolmogorov scaling with Reδ^-0.75. The work also involves local smoothing of the instantaneous χ-field over a non-overlapping spatial-interval (filter-width, wf), to study the smoothed χ-value as a function of wf, as wf is extrapolated to the smallest scale of interest. The dissipation length-scales thus captured show a stringent Reδ^-1 scaling, compared to the usual Kolmogorov-type. This concurs with the criterion of 'resolution adequacy' of the DNS, as set out by Sreenivasan (2004) using the theory of multi-fractals.
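    For context (the two scalings quoted above restated as equations, added here for clarity), the Kolmogorov-type estimate of the finest mixing scale and the stricter resolution-adequacy scaling are

        \frac{\eta}{\delta} \sim Re_\delta^{-3/4}, \qquad \frac{\ell_{\mathrm{res}}}{\delta} \sim Re_\delta^{-1},

    so the filter-width needed to converge the smoothed χ-statistics shrinks faster with Reynolds number than the Kolmogorov estimate alone would suggest.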

  7. Effects of anisosmotic stress on cardiac muscle cell length, diameter, area, and sarcomere length

    NASA Technical Reports Server (NTRS)

    Tanaka, R.; Barnes, M. A.; Cooper, G. 4th; Zile, M. R.

    1996-01-01

    The purpose of this study was to examine the effects of anisosmotic stress on adult mammalian cardiac muscle cell (cardiocyte) size. Cardiocyte size and sarcomere length were measured in cardiocytes isolated from 10 normal rats and 10 normal cats. Superfusate osmolarity was decreased from 300 +/- 6 to 130 +/- 5 mosM and increased to 630 +/- 8 mosM. Cardiocyte size and sarcomere length increased progressively when osmolarity was decreased, and there were no significant differences between cat and rat cardiocytes with respect to percent change in cardiocyte area or diameter; however, there were significant differences in cardiocyte length (2.8 +/- 0.3% in cat vs. 6.1 +/- 0.3% in rat, P < 0.05) and sarcomere length (3.3 +/- 0.3% in cat vs. 6.1 +/- 0.3% in rat, P < 0.05). To determine whether these species-dependent differences in length were related to diastolic interaction of the contractile elements or differences in relative passive stiffness, cardiocytes were subjected to the osmolarity gradient 1) during treatment with 7 mM 2,3-butanedione monoxime (BDM), which inhibits cross-bridge interaction, or 2) after pretreatment with 1 mM ethylene glycol-bis(beta-aminoethyl ether)-N, N,N',N'-tetraacetic acid (EGTA), a bivalent Ca2+ chelator. Treatment with EGTA or BDM abolished the differences between cat and rat cardiocytes. Species-dependent differences therefore appeared to be related to the degree of diastolic cross-bridge association and not differences in relative passive stiffness. In conclusion, the osmolarity vs. cell size relation is useful in assessing the cardiocyte response to anisosmotic stress and may in future studies be useful in assessing changes in relative passive cardiocyte stiffness produced by pathological processes.

  8. Calculation of extrapolation curves in the 4π(LS)β-γ coincidence technique with the Monte Carlo code Geant4.

    PubMed

    Bobin, C; Thiam, C; Bouchard, J

    2016-03-01

    At LNE-LNHB, a liquid scintillation (LS) detection setup designed for Triple to Double Coincidence Ratio (TDCR) measurements is also used in the β-channel of a 4π(LS)β-γ coincidence system. This LS counter based on 3 photomultipliers was first modeled using the Monte Carlo code Geant4 to enable the simulation of optical photons produced by scintillation and Cerenkov effects. This stochastic modeling was especially designed for the calculation of double and triple coincidences between photomultipliers in TDCR measurements. In the present paper, this TDCR-Geant4 model is extended to 4π(LS)β-γ coincidence counting to enable the simulation of the efficiency-extrapolation technique by the addition of a γ-channel. This simulation tool aims at the prediction of systematic biases in activity determination due to possible non-linearity of efficiency-extrapolation curves. First results are described in the case of the standardization of 59Fe. The variation of the γ-efficiency in the β-channel due to the Cerenkov emission is investigated in the case of the activity measurements of 54Mn. The problem of the non-linearity between β-efficiencies is featured in the case of the efficiency tracing technique for the activity measurements of 14C using 60Co as a tracer.
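    As background for the efficiency-extrapolation technique (the standard idealized coincidence-counting relations, not specific to this paper's Geant4 model): for a source of activity N_0 measured with β-efficiency ε_β and γ-efficiency ε_γ,

        N_\beta = N_0\,\varepsilon_\beta, \qquad N_\gamma = N_0\,\varepsilon_\gamma, \qquad N_c = N_0\,\varepsilon_\beta\varepsilon_\gamma \;\Rightarrow\; \frac{N_\beta N_\gamma}{N_c} = N_0 .

    For real decay schemes this holds only approximately, so N_βN_γ/N_c is plotted against (1 - ε_β)/ε_β (with ε_β ≈ N_c/N_γ) and extrapolated to ε_β → 1; the simulation described above is intended to quantify the bias introduced when that extrapolation curve is not linear.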

  9. Measured Copper Toxicity to Cnesterodon decemmaculatus (Pisces: Poeciliidae) and Predicted by Biotic Ligand Model in Pilcomayo River Water: A Step for a Cross-Fish-Species Extrapolation

    PubMed Central

    Casares, María Victoria; de Cabo, Laura I.; Seoane, Rafael S.; Natale, Oscar E.; Castro Ríos, Milagros; Weigandt, Cristian; de Iorio, Alicia F.

    2012-01-01

    In order to determine copper toxicity (LC50) to a local species (Cnesterodon decemmaculatus) in the South American Pilcomayo River water and evaluate a cross-fish-species extrapolation of Biotic Ligand Model, a 96 h acute copper toxicity test was performed. The dissolved copper concentrations tested were 0.05, 0.19, 0.39, 0.61, 0.73, 1.01, and 1.42 mg Cu L−1. The 96 h Cu LC50 calculated was 0.655 mg L−1 (0.823 − 0.488). 96-h Cu LC50 predicted by BLM for Pimephales promelas was 0.722 mg L−1. Analysis of the inter-seasonal variation of the main water quality parameters indicates that a higher protective effect of calcium, magnesium, sodium, sulphate, and chloride is expected during the dry season. The very high load of total suspended solids in this river might be a key factor in determining copper distribution between solid and solution phases. A cross-fish-species extrapolation of copper BLM is valid within the water quality parameters and experimental conditions of this toxicity test. PMID:22523491
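    A minimal sketch of how such a 96-h LC50 can be estimated by fitting a two-parameter log-logistic dose-response curve (only the tested concentrations are taken from the record; the mortality fractions below are hypothetical placeholders):

        # Sketch: estimate an LC50 by fitting a two-parameter log-logistic
        # dose-response model to concentration-mortality data.
        import numpy as np
        from scipy.optimize import curve_fit

        conc = np.array([0.05, 0.19, 0.39, 0.61, 0.73, 1.01, 1.42])  # mg Cu/L (from the record)
        mort = np.array([0.00, 0.05, 0.20, 0.45, 0.60, 0.85, 1.00])  # hypothetical mortality fractions

        def log_logistic(c, lc50, slope):
            # Fraction of fish dead at dissolved copper concentration c.
            return 1.0 / (1.0 + (lc50 / c) ** slope)

        (lc50, slope), cov = curve_fit(log_logistic, conc, mort, p0=[0.6, 2.0])
        print(f"estimated 96-h LC50 = {lc50:.3f} mg/L, slope = {slope:.2f}")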

  10. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    SciTech Connect

    Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
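    The core reweighting step can be illustrated with a short sketch (the general Boltzmann reweighting identity for canonical averages, not the authors' full chain-reconstruction scheme; the stored energies below are random placeholders standing in for a pre-computed Markov chain):

        # Sketch: reweight canonical-ensemble samples stored at inverse
        # temperature beta to a neighboring inverse temperature beta_new:
        #   <A>_{beta'} = sum_i A_i w_i / sum_i w_i,  w_i = exp(-(beta' - beta) U_i)
        import numpy as np

        rng = np.random.default_rng(0)
        U = rng.normal(loc=-500.0, scale=20.0, size=10_000)  # stored potential energies (placeholder)
        A = U                                                # observable: the potential energy itself

        def reweighted_average(A, U, beta, beta_new):
            logw = -(beta_new - beta) * U
            logw -= logw.max()                # guard against overflow
            w = np.exp(logw)
            return np.sum(w * A) / np.sum(w)  # reliable only for small |beta_new - beta|

        beta = 1.0 / 1.5                      # reduced temperature T* = 1.5
        for T_new in (1.45, 1.50, 1.55):
            print(f"T* = {T_new:.2f}: <U> ~ {reweighted_average(A, U, beta, 1.0 / T_new):.1f}")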

  11. Human urine and plasma concentrations of bisphenol A extrapolated from pharmacokinetics established in in vivo experiments with chimeric mice with humanized liver and semi-physiological pharmacokinetic modeling.

    PubMed

    Miyaguchi, Takamori; Suemizu, Hiroshi; Shimizu, Makiko; Shida, Satomi; Nishiyama, Sayako; Takano, Ryohji; Murayama, Norie; Yamazaki, Hiroshi

    2015-06-01

    The aim of this study was to extrapolate to humans the pharmacokinetics of estrogen analog bisphenol A determined in chimeric mice transplanted with human hepatocytes. Higher plasma concentrations and urinary excretions of bisphenol A glucuronide (a primary metabolite of bisphenol A) were observed in chimeric mice than in control mice after oral administrations, presumably because of enterohepatic circulation of bisphenol A glucuronide in control mice. Bisphenol A glucuronidation was faster in mouse liver microsomes than in human liver microsomes. These findings suggest a predominantly urinary excretion route of bisphenol A glucuronide in chimeric mice with humanized liver. Reported human plasma and urine data for bisphenol A glucuronide after single oral administration of 0.1 mg/kg bisphenol A were reasonably estimated using the current semi-physiological pharmacokinetic model extrapolated from humanized mouse data using allometric scaling. The reported geometric mean urinary bisphenol A concentration in the U.S. population of 2.64 μg/L underwent reverse dosimetry modeling with the current human semi-physiological pharmacokinetic model. This yielded an estimated exposure of 0.024 μg/kg/day, which was less than the daily tolerable intake of bisphenol A (50 μg/kg/day), implying little risk to humans. Semi-physiological pharmacokinetic modeling will likely prove useful for determining the species-dependent toxicological risk of bisphenol A.
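    The reverse-dosimetry step can be summarized, in its simplest mass-balance form (a simplified relation added for orientation only; the study itself uses the full semi-physiological pharmacokinetic model rather than this shortcut), as

        \text{daily intake} \;\approx\; \frac{C_{\mathrm{urine}} \times V_{\mathrm{urine}}}{f_{\mathrm{UE}} \times BW},

    where C_urine is the urinary concentration of bisphenol A (as glucuronide equivalents), V_urine the daily urine volume, f_UE the fraction of the ingested dose excreted in urine, and BW the body weight.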

  12. Electrical impedance measurements of root canal length.

    PubMed

    Meredith, N; Gulabivala, K

    1997-06-01

    Electronic methods are now widely used during endodontic treatment for the assessment of root canal length. These commonly measure the electrical resistance or impedance between the root canal and the buccal mucosa. A number of studies have been undertaken to determine the accuracy of commercially available instruments. The aims of this investigation were to determine the electrical impedance characteristics of the root canal and periapical tissues in vivo, measure the changes relative to the distance of an endodontic instrument from the apical constriction and propose an equivalent circuit modelling the periapical tissues. The lengths of the root canals of 20 previously untreated teeth were determined using radiographic and electronic methods. Minimal canal preparation was carried out and measurements were made with a size 10 K-Flex file. A microprocessor-controlled LCR analyser was used to measure the electrical impedance characteristics of each root canal. The instrument measured the series and parallel resistive (RS, RP) and capacitance (CS, CP) components of the tissues at two test frequencies, 100 Hz and 1 kHz. Measurements were made for each root canal when the diagnostic file was placed at the apical constriction and repeated when the file was withdrawn to -0.5, -1.0, -1.5, -2.0 and -5.0 mm from the foramen. Readings were taken for each canal after the canal had been dried with paper points, and flooded first with deionised water and then with sodium hypochlorite. The root canals were then prepared, cleaned and obturated using standard endodontic procedures. The LCR analyser selected the series resistance component as the major measurement parameter. There was a clear increase in series resistance (RS) with increasing distance from the radiographic apex for dry canals and those containing deionised water and sodium hypochlorite. The mean resistance for dry canals was markedly higher than for those containing fluid, ranging from 22.19 kΩ to 92.07 kΩ
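    For reference (standard LCR-analyser relations, not specific to this study), the series and parallel element values reported by the instrument describe the same measured complex impedance at angular frequency ω = 2πf:

        Z(\omega) \;=\; R_S + \frac{1}{j\omega C_S} \;=\; \frac{R_P}{1 + j\omega R_P C_P},

    so tracking the series resistance R_S, as done above, follows the real part of the canal impedance as the file moves relative to the apical constriction.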

  13. Constrained length minimum inductance gradient coil design.

    PubMed

    Chronik, B A; Rutt, B K

    1998-02-01

    A gradient coil design algorithm capable of controlling the position of the homogeneous region of interest (ROI) with respect to the current-carrying wires is required for many advanced imaging and spectroscopy applications. A modified minimum inductance target field method that allows the placement of a set of constraints on the final current density is presented. This constrained current minimum inductance method is derived in the context of previous target field methods. Complete details are shown and all equations required for implementation of the algorithm are given. The method has been implemented on computer and applied to the design of both a 1:1 aspect ratio (length:diameter) central ROI and a 2:1 aspect ratio edge ROI gradient coil. The 1:1 design demonstrates that a general analytic method can be used to easily obtain very short gradient coil designs for use with specialized magnet systems. The edge gradient design demonstrates that designs that allow imaging of the neck region with a head sized gradient coil can be obtained, as well as other applications requiring edge-of-cylinder regions of uniformity.
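    Schematically (a generic target-field formulation added for orientation, with the constraint handling simplified relative to the paper), the design problem is to minimize the coil inductance subject to field and current-density conditions:

        U[\mathbf{j}] \;=\; \tfrac{1}{2}\,L[\mathbf{j}] \;-\; \sum_i \lambda_i \left( B_z(\mathbf{r}_i;\mathbf{j}) - B_i^{\mathrm{target}} \right),

    where the λ_i are Lagrange multipliers enforcing the target field at chosen points in the ROI, and additional multipliers (or hard limits) constrain the current density in prescribed regions; setting the functional derivative with respect to j to zero yields the continuous current density, which is then discretized into wire positions.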

  14. Full length prototype SSC dipole test results

    SciTech Connect

    Strait, J.; Brown, B.C.; Carson, J.; Engler, N.; Fisk, H.E.; Hanft, R.; Koepke, K.; Kuchnir, M.; Larson, E.; Lundy, R.

    1987-04-24

    Results are presented from tests of the first full length prototype SSC dipole magnet. The cryogenic behavior of the magnet during a slow cooldown to 4.5 K and a slow warmup to room temperature has been measured. Magnetic field quality was measured at currents up to 2000 A. Averaged over the body field, all harmonics with the exception of b2 and b8 are at or within the tolerances specified by the SSC Central Design Group. (The values of b2 and b8 result from known design and construction defects which will be corrected in later magnets.) Using an NMR probe, the average body field strength is measured to be 10.283 G/A with point to point variations on the order of one part in 1000. Data are presented on quench behavior of the magnet up to 3500 A (approximately 55% of full field) including longitudinal and transverse velocities for the first 250 msec of the quench.

  15. Reconstruction of femoral length from fragmentary femora

    PubMed Central

    Offei, Eric Bekoe; Osabutey, Casmiel Kwabena

    2016-01-01

    The reconstruction of femoral length (FL) from fragmentary femora is an essential step in estimating stature from fragmentary skeletal remains in forensic investigations. While regression formulae for doing this have been suggested for several populations, such formulae have not been established for Ghanaian skeletal remains. This study, therefore, seeks to derive regression formulae for reconstruction of FL from fragmentary femora of skeletal samples obtained from Ghana. Six measurements (vertical head diameter, transverse head diameter, bicondylar breadth, epicondylar breadth, sub-trochanteric anterior-posterior diameter, and sub-trochanteric transverse diameter) were acquired from different anatomical portions of the femur and the relationship between each acquired measurement and FL was analyzed using linear regression. The results indicated significant moderate-to-high correlations (r=0.580–0.818) between FL and each acquired measurement. The error estimates of the regression formulae were relatively low (i.e., standard error of estimate, 13.66–19.28 mm), suggesting that the discrepancies between actual and estimated stature were relatively low. Compared with other measurements, sub-trochanteric transverse diameter was the best estimate of FL. In the absence of a complete femur, the regression formulae based on the assessed measurements may be used to infer FL, from which stature can be estimated in forensic investigations. PMID:27722014
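    A minimal sketch of the type of regression used (the paired measurements below are hypothetical placeholders; the published coefficients for the Ghanaian sample are not reproduced here):

        # Sketch: derive a formula FL = a + b * STD (sub-trochanteric transverse
        # diameter) and its standard error of estimate (SEE) from paired data.
        import numpy as np

        std_mm = np.array([27.1, 28.4, 29.0, 30.2, 31.5, 32.3, 33.8, 35.0])         # predictor, mm (hypothetical)
        fl_mm = np.array([412.0, 420.5, 431.0, 437.8, 449.2, 455.0, 468.4, 479.1])  # femoral length, mm (hypothetical)

        b, a = np.polyfit(std_mm, fl_mm, deg=1)               # slope, intercept
        resid = fl_mm - (a + b * std_mm)
        see = np.sqrt(np.sum(resid ** 2) / (len(fl_mm) - 2))  # standard error of estimate
        r = np.corrcoef(std_mm, fl_mm)[0, 1]
        print(f"FL = {a:.1f} + {b:.2f} * STD   (r = {r:.3f}, SEE = {see:.1f} mm)")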

  16. Titan's Length-of-Day Variations

    NASA Astrophysics Data System (ADS)

    Folonier, Hugo Alberto; Ferraz-Mello, Sylvio

    2015-11-01

    Cassini radar observations of Titan over several years show that the rotation is slightly faster than synchronous (Lorenz et al. 2008; Stiles et al. 2008 and 2011; Meriggiola 2012). The seasonal variation in the mean and zonal wind speed and direction in Titan’s lower troposphere causes the exchange of a substantial amount of angular momentum between the surface and the atmosphere (Tokano and Neubauer, 2005; Richard et al. 2014). The rotation variation is affected by the influence of the atmosphere when we assume that Titan is a differentiated body and the atmosphere interacts only with the outer layer. In this work, we calculate variations of Titan’s length-of-day assuming the body is formed by two independently rotating parts, with friction occurring at the interface between them. The tides are considered using the extension of two different theories -- the Darwin tide theory and Ferraz-Mello’s creep tide theory -- to the case of one body formed by two homogeneous parts. The results are compared and their differences are discussed.
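    The basic balance behind such length-of-day variations (a generic statement of angular-momentum exchange, not the two-layer tidal model developed by the authors) is

        C_s\,\Delta\omega_s \;=\; -\,\Delta L_{\mathrm{atm}} \quad\Rightarrow\quad \frac{\Delta \mathrm{LOD}}{\mathrm{LOD}} \;=\; -\frac{\Delta\omega_s}{\omega_s} \;=\; \frac{\Delta L_{\mathrm{atm}}}{C_s\,\omega_s},

    where C_s is the moment of inertia of the outer shell and ΔL_atm the seasonal change in atmospheric angular momentum: when the atmosphere spins up, the surface slows and the length of day increases.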

  17. Length-Limited Data Transformation and Compression

    SciTech Connect

    Senecal, Joshua G.

    2005-09-01

    Scientific computation is used for the simulation of increasingly complex phenomena, and generates data sets of ever increasing size, often on the order of terabytes. All of this data creates difficulties. Several problems that have been identified are (1) the inability to effectively handle the massive amounts of data created, (2) the inability to get the data off the computer and into storage fast enough, and (3) the inability of a remote user to easily obtain a rendered image of the data resulting from a simulation run. This dissertation presents several techniques that were developed to address these issues. The first is a prototype bin coder based on variable-to-variable length codes. The codes utilized are created through a process of parse tree leaf merging, rather than the common practice of leaf extension. This coder is very fast and its compression efficiency is comparable to other state-of-the-art coders. The second contribution is the Piecewise-Linear Haar (PLHaar) transform, a reversible n-bit to n-bit wavelet-like transform. PLHaar is simple to implement, ideal for environments where transform coefficients must be kept the same size as the original data, and is the only n-bit to n-bit transform suitable for both lossy and lossless coding.
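    For contrast with PLHaar's n-bit to n-bit property, the sketch below is the conventional integer (lifting) Haar, or S, transform: it is exactly reversible, but its difference channel needs n+1 bits for n-bit input, which is the limitation PLHaar was designed to remove (this is a standard construction, not code from the dissertation).

        # Sketch: conventional integer Haar (S) transform on a pair of samples.
        # Reversible, but the detail value d can exceed the n-bit input range.
        def s_transform_pair(a: int, b: int) -> tuple[int, int]:
            s = (a + b) >> 1          # floor average (low-pass), stays within n bits
            d = a - b                 # difference (high-pass), may need n + 1 bits
            return s, d

        def inverse_s_transform_pair(s: int, d: int) -> tuple[int, int]:
            a = s + ((d + 1) >> 1)    # exact reconstruction from floor average + difference
            return a, a - d

        for a, b in [(0, 255), (255, 0), (200, 201), (17, 92)]:
            s, d = s_transform_pair(a, b)
            assert inverse_s_transform_pair(s, d) == (a, b)
            print(f"({a:3d},{b:3d}) -> s={s:3d}, d={d:+4d}")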

  18. Gap Filling as Exact Path Length Problem.

    PubMed

    Salmela, Leena; Sahlin, Kristoffer; Mäkinen, Veli; Tomescu, Alexandru I

    2016-05-01

    One of the last steps in a genome assembly project is filling the gaps between consecutive contigs in the scaffolds. This problem can be naturally stated as finding an s-t path in a directed graph whose sum of arc costs belongs to a given range (the estimate on the gap length). Here s and t are any two contigs flanking a gap. This problem is known to be NP-hard in general. Here we derive a simpler dynamic programming solution than already known, pseudo-polynomial in the maximum value of the input range. We implemented various practical optimizations to it, and compared our exact gap-filling solution experimentally to popular gap-filling tools. Summing over all the bacterial assemblies considered in our experiments, we can in total fill 76% more gaps than the best previous tool, and the gaps filled by our method span 136% more sequence. Furthermore, the error level of the newly introduced sequence is comparable to that of the previous tools. The experiments also show that our exact approach does not easily scale to larger genomes, where the problem is in general difficult for all tools. PMID:26959081
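    A minimal sketch of the pseudo-polynomial idea described above (a generic search over (vertex, accumulated length) states; function and variable names are illustrative and this is not the authors' optimized implementation): with non-negative integer arc costs, an s-t path whose total cost falls in a range [lo, hi] can be found by exploring at most |V| * (hi + 1) states.

        # Sketch: decide whether an s-t path exists whose total arc cost lies in
        # [lo, hi]; pseudo-polynomial in hi, assumes non-negative integer costs.
        from collections import deque

        def path_in_range(adj, s, t, lo, hi):
            # adj: dict vertex -> list of (neighbor, arc cost)
            seen = {(s, 0)}
            queue = deque([(s, 0)])
            while queue:
                v, c = queue.popleft()
                if v == t and lo <= c <= hi:
                    return c                      # a feasible gap length
                for u, w in adj.get(v, []):
                    if c + w <= hi and (u, c + w) not in seen:
                        seen.add((u, c + w))
                        queue.append((u, c + w))
            return None

        # Toy example: the self-loop on "a" mimics a repeat that may be traversed.
        adj = {"s": [("a", 3), ("b", 5)], "a": [("t", 4), ("a", 2)], "b": [("t", 1)]}
        print(path_in_range(adj, "s", "t", lo=8, hi=9))   # -> 9 (path s-a-a-t)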

  19. Controlling the optical path length in turbid media using differential path-length spectroscopy: fiber diameter dependence.

    PubMed

    Kaspers, O P; Sterenborg, H J C M; Amelink, A

    2008-01-20

    We have characterized the path length for the differential path-length spectroscopy (DPS) fiber optic geometry for a wide range of optical properties and for fiber diameters ranging from 200 μm to 1000 μm. Phantom measurements show that the path length is nearly constant for scattering coefficients in the range 5 mm^-1 < μs < 50 mm^-1 for all fiber diameters and that the path length is proportional to the fiber diameter. The path length decreases with increasing absorption for all fiber diameters, and this effect is more pronounced for larger fiber diameters. An empirical model is formulated that relates the DPS path length to total absorption for all fiber diameters simultaneously.

  20. Optical Arc-Length Sensor For TIG Welding

    NASA Technical Reports Server (NTRS)

    Smith, Matthew A.

    1990-01-01

    A proposed subsystem of a tungsten/inert-gas (TIG) welding system measures the length of the welding arc optically. The arc is viewed by a video camera in one of three alternative optical configurations, and the arc length is measured directly rather than inferred from the arc voltage.