Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A
2017-11-07
Ultrasonography is a useful technique to study muscle contractions in vivo, however larger muscles like vastus lateralis may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate fascicle length to regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods to measure VL muscle fascicle length to a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to extrapolate and intercept methods respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single transducer methods may produce similar results for normalised length changes, and repeated measures experimental designs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Method and apparatus for determining minority carrier diffusion length in semiconductors
Goldstein, Bernard; Dresner, Joseph; Szostak, Daniel J.
1983-07-12
Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon which has a significantly small minority carrier diffusion length using the constant-magnitude surface-photovoltage (SPV) method. An unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin method-type probe electrode couples the SPV to a measurement system. The operating optical wavelength of an adjustable monochromator to compensate for the wavelength dependent sensitivity of a photodetector is selected to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux for a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept value on the reciprocal optical coefficient axis of the extrapolated linear plot is the diffusion length of the minority carriers.
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil S.
2006-01-01
A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.
Constructing Current Singularity in a 3D Line-tied Plasma
Zhou, Yao; Huang, Yi-Min; Qin, Hong; ...
2017-12-27
We revisit Parker's conjecture of current singularity formation in 3D line-tied plasmas using a recently developed numerical method, variational integration for ideal magnetohydrodynamics in Lagrangian labeling. With the frozen-in equation built-in, the method is free of artificial reconnection, and hence it is arguably an optimal tool for studying current singularity formation. Using this method, the formation of current singularity has previously been confirmed in the Hahm–Kulsrud–Taylor problem in 2D. In this paper, we extend this problem to 3D line-tied geometry. The linear solution, which is singular in 2D, is found to be smooth for arbitrary system length. However, with finitemore » amplitude, the linear solution can become pathological when the system is sufficiently long. The nonlinear solutions turn out to be smooth for short systems. Nonetheless, the scaling of peak current density versus system length suggests that the nonlinear solution may become singular at finite length. Finally, with the results in hand, we can neither confirm nor rule out this possibility conclusively, since we cannot obtain solutions with system length near the extrapolated critical value.« less
Umari, P; Marzari, Nicola
2009-09-07
We calculate the linear and nonlinear susceptibilities of periodic longitudinal chains of hydrogen dimers with different bond-length alternations using a diffusion quantum Monte Carlo approach. These quantities are derived from the changes in electronic polarization as a function of applied finite electric field--an approach we recently introduced and made possible by the use of a Berry-phase, many-body electric-enthalpy functional. Calculated susceptibilities and hypersusceptibilities are found to be in excellent agreement with the best estimates available from quantum chemistry--usually extrapolations to the infinite-chain limit of calculations for chains of finite length. It is found that while exchange effects dominate the proper description of the susceptibilities, second hypersusceptibilities are greatly affected by electronic correlations. We also assess how different approximations to the nodal surface of the many-body wave function affect the accuracy of the calculated susceptibilities.
Tien, Christopher J; Winslow, James F; Hintenlang, David E
2011-01-31
In helical computed tomography (CT), reconstruction information from volumes adjacent to the clinical volume of interest (VOI) is required for proper reconstruction. Previous studies have relied upon either operator console readings or indirect extrapolation of measurements in order to determine the over-ranging length of a scan. This paper presents a methodology for the direct quantification of over-ranging dose contributions using real-time dosimetry. A Siemens SOMATOM Sensation 16 multislice helical CT scanner is used with a novel real-time "point" fiber-optic dosimeter system with 10 ms temporal resolution to measure over-ranging length, which is also expressed in dose-length-product (DLP). Film was used to benchmark the exact length of over-ranging. Over-ranging length varied from 4.38 cm at pitch of 0.5 to 6.72 cm at a pitch of 1.5, which corresponds to DLP of 131 to 202 mGy-cm. The dose-extrapolation method of Van der Molen et al. yielded results within 3%, while the console reading method of Tzedakis et al. yielded consistently larger over-ranging lengths. From film measurements, it was determined that Tzedakis et al. overestimated over-ranging lengths by one-half of beam collimation width. Over-ranging length measured as a function of reconstruction slice thicknesses produced two linear regions similar to previous publications. Over-ranging is quantified with both absolute length and DLP, which contributes about 60 mGy-cm or about 10% of DLP for a routine abdominal scan. This paper presents a direct physical measurement of over-ranging length within 10% of previous methodologies. Current uncertainties are less than 1%, in comparison with 5% in other methodologies. Clinical implantation can be increased by using only one dosimeter if codependence with console readings is acceptable, with an uncertainty of 1.1% This methodology will be applied to different vendors, models, and postprocessing methods--which have been shown to produce over-ranging lengths differing by 125%.
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
NLT and extrapolated DLT:3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that when possible one should use the DLT with a control object, sufficiently large as to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
Infrared length scale and extrapolations for the no-core shell model
Wendt, K. A.; Forssén, C.; Papenbrock, T.; ...
2015-06-03
In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound statesmore » of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.« less
Semiempirical Theories of the Affinities of Negative Atomic Ions
NASA Technical Reports Server (NTRS)
Edie, John W.
1961-01-01
The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences, The most recent of these extrapolations Is studied by extending the method to Include one more member of the isoelectronic sequence, When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z, repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative Ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p-electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated. slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method Is preferable, since it requires only experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Kosek, Wiesław
2008-02-01
This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ 3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ 3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ 3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ 3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ 3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ 3 reveal the best accuracy for LS + AR.
Softening of the stiffness of bottle-brush polymers by mutual interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bolisetty, S.; Airaud, C.; Rosenfeldt, S.
2007-04-15
We study bottle-brush macromolecules in a good solvent by small-angle neutron scattering (SANS), static light scattering (SLS), and dynamic light scattering (DLS). These polymers consist of a linear backbone to which long side chains are chemically grafted. The backbone contains about 1600 monomer units (weight average) and every second monomer unit carries side chains with approximately 60 monomer units. The SLS and SANS data extrapolated to infinite dilution lead to the form factor of the polymer that can be described in terms of a wormlike chain with a contour length of 380 nm and a persistence length of 17.5 nm.more » An analysis of the DLS data confirms these model parameters. The scattering intensities taken at finite concentration can be modeled using the polymer reference interaction site model. It reveals a softening of the bottle-brush polymers caused by their mutual interaction. We demonstrate that the persistence decreases from 17.5 nm down to 5 nm upon increasing the concentration from dilute solution to the highest concentration (40.59 g/l) under consideration. The observed softening of the chains is comparable to the theoretically predicted decrease of the electrostatic persistence length of linear polyelectrolyte chains at finite concentrations.« less
How fast does water flow in carbon nanotubes?
Kannam, Sridhar Kumar; Todd, B D; Hansen, J S; Daivis, Peter J
2013-03-07
The purpose of this paper is threefold. First, we review the existing literature on flow rates of water in carbon nanotubes. Data for the slip length which characterizes the flow rate are scattered over 5 orders of magnitude for nanotubes of diameter 0.81-10 nm. Second, we precisely compute the slip length using equilibrium molecular dynamics (EMD) simulations, from which the interfacial friction between water and carbon nanotubes can be found, and also via external field driven non-equilibrium molecular dynamics simulations (NEMD). We discuss some of the issues in simulation studies which may be reasons for the large disagreements reported. By using the EMD method friction coefficient to determine the slip length, we overcome the limitations of NEMD simulations. In NEMD simulations, for each tube we apply a range of external fields to check the linear response of the fluid to the field and reliably extrapolate the results for the slip length to values of the field corresponding to experimentally accessible pressure gradients. Finally, we comment on several issues concerning water flow rates in carbon nanotubes which may lead to some future research directions in this area.
High-order Newton-penalty algorithms
NASA Astrophysics Data System (ADS)
Dussault, Jean-Pierre
2005-10-01
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould who obtained a two-steps quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step.We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
Cathode fall measurement in a dielectric barrier discharge in helium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Yanpeng; Zheng, Bin; Liu, Yaoge
2013-11-15
A method based on the “zero-length voltage” extrapolation is proposed to measure cathode fall in a dielectric barrier discharge. Starting, stable, and discharge-maintaining voltages were measured to obtain the extrapolation zero-length voltage. Under our experimental conditions, the “zero-length voltage” gave a cathode fall of about 185 V. Based on the known thickness of the cathode fall region, the spatial distribution of the electric field strength in dielectric barrier discharge in atmospheric helium is determined. The strong cathode fall with a maximum field value of approximately 9.25 kV/cm was typical for the glow mode of the discharge.
Linear prediction data extrapolation superresolution radar imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing
1993-05-01
Range resolution and cross-range resolution of range-doppler imaging radars are related to the effective bandwidth of transmitted signal and the angle through which the object rotates relatively to the radar line of sight (RLOS) during the coherent processing time, respectively. In this paper, linear prediction data extrapolation discrete Fourier transform (LPDEDFT) superresolution imaging method is investigated for the purpose of surpassing the limitation imposed by the conventional FFT range-doppler processing and improving the resolution capability of range-doppler imaging radar. The LPDEDFT superresolution imaging method, which is conceptually simple, consists of extrapolating observed data beyond the observation windows by means of linear prediction, and then performing the conventional IDFT of the extrapolated data. The live data of a metalized scale model B-52 aircraft mounted on a rotating platform in a microwave anechoic chamber and a flying Boeing-727 aircraft were processed. It is concluded that, compared to the conventional Fourier method, either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle of the object or equal-quality images from smaller bandwidth and total angle may be obtained by LPDEDFT.
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional methods of linear based imaging limits the viewer to a single fixed-point perspective. By means of a single lens multiple perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror into a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction is made by computer processing of the image obtained by camera; the image present up to five distinguishable different viewpoints that a computer can extrapolate pseudo- perspective data from. Geometric and depth for field can be extrapolated via comparison and isolation of objects within a virtual scene post processed by the computer. Combining data with scene rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination into the workings of the mirror relay system is provided, including possible image extrapolation and correctional methods. Generation of data and virtual interpolated and constructed data is also mentioned.
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car’s length are presented in this paper, as well as the results achieved by using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors, which were placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold based method, a threshold method based on moving average and standard deviation, a two-extreme-peak detection method and a method based on the amplitude and time normalization using linear extrapolation (or interpolation). The results were achieved by analyzing changes in the magnitude and in the absolute z-component of the magnetic field as well. The tests, which were performed in four different Earth directions, show differences in the values of estimated lengths. The magnitude-based results in the case when cars drove from the South to the North direction were even up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results were summarized in tables and the errors of estimated lengths were presented. The maximal errors, related to real lengths, were up to 22%. PMID:28771171
NASA Astrophysics Data System (ADS)
Knight, Kevin S.; Marshall, William G.; Hawkins, Philip M.
2014-06-01
The fluoroperovskite phase RbCaF3 has been investigated using high-pressure neutron powder diffraction in the pressure range ~0-7.9 GPa at room temperature. It has been found to undergo a first-order high-pressure structural phase transition at ~2.8 GPa from the cubic aristotype phase to a hettotype phase in the tetragonal space group I4/ mcm. This transition, which also occurs at ~200 K at ambient pressure, is characterised by a linear phase boundary and a Clapeyron slope of 2.96 × 10-5 GPa K-1, which is in excellent agreement with earlier, low-pressure EPR investigations. The bulk modulus of the high-pressure phase (49.1 GPa) is very close to that determined for the low-pressure phase (50.0 GPa), and both are comparable with those determined for the aristotype phases of CsCdF3, TlCdF3, RbCdF3, and KCaF3. The evolution of the order parameter with pressure is consistent with recent modifications to Landau theory and, in conjunction with polynomial approximations to the pressure dependence of the lattice parameters, permits the pressure variation of the bond lengths and angles to be predicted. On entering the high-pressure phase, the Rb-F bond lengths decrease from their extrapolated values based on a third-order Birch-Murnaghan fit to the aristotype equation of state. By contrast, the Ca-F bond lengths behave atypically by exhibiting an increase from their extrapolated magnitudes, resulting in the volume and the effective bulk modulus of the CaF6 octahedron being larger than the cubic phase. The bulk moduli for the two component polyhedra in the tetragonal phase are comparable with those determined for the constituent binary fluorides, RbF and CaF2.
2016-04-01
incorporated with nonlinear elements to produce a continuous, quasi -nonlinear simulation model. Extrapolation methods within the model stitching architecture...Simulation Model, Quasi -Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability...incorporated with nonlinear elements to produce a continuous, quasi -nonlinear simulation model. Extrapolation methods within the model stitching
Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo
NASA Astrophysics Data System (ADS)
Umari, Paolo
2006-03-01
We present a novel approach that allows to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization for the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation for the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average on an iterative sequence. The polarization is sampled through forward-walking. This approach has been validated for the case of the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We then calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains whith different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.P. Umari, A.J. Williamson, G. Galli, and N. MarzariPhys. Rev. Lett. 95, 207602 (2005).
Hall, E J
2001-01-01
The possible risk of induced malignancies in astronauts, as a consequence of the radiation environment in space, is a factor of concern for long term missions. Cancer risk estimates for high doses of low LET radiation are available from the epidemiological studies of the A-bomb survivors. Cancer risks at lower doses cannot be detected in epidemiological studies and must be inferred by extrapolation from the high dose risks. The standard setting bodies, such as the ICRP recommend a linear, no-threshold extrapolation of risks from high to low doses, but this is controversial. A study of mechanisms of carcinogenesis may shed some light on the validity of a linear extrapolation. The multi-step nature of carcinogenesis suggests that the role of radiation may be to induce a mutation leading to a mutator phenotype. High energy Fe ions, such as those encountered in space are highly effective in inducing genomic instability. Experiments involving the single particle microbeam have demonstrated a "bystander effect", ie a biological effect in cells not themselves hit, but in close proximity to those that are, as well as the induction of mutations in cells where only the cytoplasm, and not the nucleus, have been traversed by a charged particle. These recent experiments cast doubt on the validity of a simple linear extrapolation, but the data are so far fragmentary and conflicting. More studies are necessary. While mechanistic studies cannot replace epidemiology as a source of quantitative risk estimates, they may shed some light on the shape of the dose response relationship and therefore on the limitations of a linear extrapolation to low doses.
NASA Technical Reports Server (NTRS)
Hall, E. J.
2001-01-01
The possible risk of induced malignancies in astronauts, as a consequence of the radiation environment in space, is a factor of concern for long term missions. Cancer risk estimates for high doses of low LET radiation are available from the epidemiological studies of the A-bomb survivors. Cancer risks at lower doses cannot be detected in epidemiological studies and must be inferred by extrapolation from the high dose risks. The standard setting bodies, such as the ICRP recommend a linear, no-threshold extrapolation of risks from high to low doses, but this is controversial. A study of mechanisms of carcinogenesis may shed some light on the validity of a linear extrapolation. The multi-step nature of carcinogenesis suggests that the role of radiation may be to induce a mutation leading to a mutator phenotype. High energy Fe ions, such as those encountered in space are highly effective in inducing genomic instability. Experiments involving the single particle microbeam have demonstrated a "bystander effect", ie a biological effect in cells not themselves hit, but in close proximity to those that are, as well as the induction of mutations in cells where only the cytoplasm, and not the nucleus, have been traversed by a charged particle. These recent experiments cast doubt on the validity of a simple linear extrapolation, but the data are so far fragmentary and conflicting. More studies are necessary. While mechanistic studies cannot replace epidemiology as a source of quantitative risk estimates, they may shed some light on the shape of the dose response relationship and therefore on the limitations of a linear extrapolation to low doses.
Extrapolating bound state data of anions into the metastable domain
NASA Astrophysics Data System (ADS)
Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.
2004-10-01
Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum chemistry based L2 methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound states energies into the metastable region using increased nuclear charges has been suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method that allows us to follow a bound state into the continuum. Using the 2Πg resonance of N2- and the 2Πu resonance of CO2- as examples, we found that the extrapolation works suprisingly well. The second extrapolation method involves increasing of bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy and not the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.
Scaling behavior of ground-state energy cluster expansion for linear polyenes
NASA Astrophysics Data System (ADS)
Griffin, L. L.; Wu, Jian; Klein, D. J.; Schmalz, T. G.; Bytautas, L.
Ground-state energies for linear-chain polyenes are additively expanded in a sequence of terms for chemically relevant conjugated substructures of increasing size. The asymptotic behavior of the large-substructure limit (i.e., high-polymer limit) is investigated as a means of characterizing the rapidity of convergence and consequent utility of this energy cluster expansion. Consideration is directed to computations via: simple Hückel theory, a refined Hückel scheme with geometry optimization, restricted Hartree-Fock self-consistent field (RHF-SCF) solutions of fixed bond-length Parisier-Parr-Pople (PPP)/Hubbard models, and ab initio SCF approaches with and without geometry optimization. The cluster expansion in what might be described as the more "refined" approaches appears to lead to qualitatively more rapid convergence: exponentially fast as opposed to an inverse power at the simple Hückel or SCF-Hubbard levels. The substructural energy cluster expansion then seems to merit special attention. Its possible utility in making accurate extrapolations from finite systems to extended polymers is noted.
NASA Technical Reports Server (NTRS)
Siclari, M. J.
1992-01-01
A CFD analysis of the near-field sonic boom environment of several low boom High Speed Civilian Transport (HSCT) concepts is presented. The CFD method utilizes a multi-block Euler marching code within the context of an innovative mesh topology that allows for the resolution of shock waves several body lengths from the aircraft. Three-dimensional pressure footprints at one body length below three-different low boom aircraft concepts are presented. Models of two concepts designed by NASA to cruise at Mach 2 and Mach 3 were built and tested in the wind tunnel. The third concept was designed by Boeing to cruise at Mach 1.7. Centerline and sideline samples of these footprints are then extrapolated to the ground using a linear waveform parameter method to estimate the ground signatures or sonic boom ground overpressure levels. The Mach 2 concept achieved its centerline design signature but indicated higher sideline booms due to the outboard wing crank of the configuration. Nacelles are also included on two of NASA's low boom concepts. Computations are carried out for both flow-through nacelles and nacelles with engine exhaust simulation. The flow-through nacelles with the assumption of zero spillage and zero inlet lip radius showed very little effect on the sonic boom signatures. On the other hand, it was shown that the engine exhaust plumes can have an effect on the levels of overpressure reaching the ground depending on the engine operating conditions. The results of this study indicate that engine integration into a low boom design should be given some attention.
Extrapolation of operators acting into quasi-Banach spaces
NASA Astrophysics Data System (ADS)
Lykov, K. V.
2016-01-01
Linear and sublinear operators acting from the scale of L_p spaces to a certain fixed quasinormed space are considered. It is shown how the extrapolation construction proposed by Jawerth and Milman at the end of 1980s can be used to extend a bounded action of an operator from the L_p scale to wider spaces. Theorems are proved which generalize Yano's extrapolation theorem to the case of a quasinormed target space. More precise results are obtained under additional conditions on the quasinorm. Bibliography: 35 titles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Torok, Aaron
The {pi}{sup +}{Sigma}{sup +} and {pi}{sup +}{Xi}{sup 0} scattering lengths were calculated in mixed-action Lattice QCD with domain-wall valence quarks on the asqtad-improved coarse MILC configurations at four light-quark masses, and at two light-quark masses on the fine MILC configurations. Heavy Baryon Chiral Perturbation Theory with two and three flavors of light quarks was used to perform the chiral extrapolations. To NNLO in the three-flavor chiral expansion, the kaon-baryon processes that were investigated show no signs of convergence. Using the two-flavor chiral expansion for extrapolation, the pion-hyperon scattering lengths are found to be a{sub {pi}}{sup +}{sub {Sigma}}{sup +} = -0.197{+-}0.017more » fm, and a{sub {pi}}{sup +}{sub {Xi}}{sup 0} = -0.098{+-}0.017 fm, where the comprehensive error includes statistical and systematic uncertainties.« less
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Verevkin, Sergey P; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Yermalayeu, Andrei V; Schick, Christoph; Liu, Hongjun; Maginn, Edward J; Bulut, Safak; Krossing, Ingo; Kalb, Roland
2013-05-30
Vaporization enthalpy of an ionic liquid (IL) is a key physical property for applications of ILs as thermofluids and also is useful in developing liquid state theories and validating intermolecular potential functions used in molecular modeling of these liquids. Compilation of the data for a homologous series of 1-alkyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([C(n)mim][NTf2]) ILs has revealed an embarrassing disarray of literature results. New experimental data, based on the concurring results from quartz crystal microbalance, thermogravimetric analyses, and molecular dynamics simulation have revealed a clear linear dependence of IL vaporization enthalpies on the chain length of the alkyl group on the cation. Ambiguity of the procedure for extrapolation of vaporization enthalpies to the reference temperature 298 K was found to be a major source of the discrepancies among previous data sets. Two simple methods for temperature adjustment of vaporization enthalpies have been suggested. Resulting vaporization enthalpies obey group additivity, although the values of the additivity parameters for ILs are different from those for molecular compounds.
Area, length and thickness conservation: Dogma or reality?
NASA Astrophysics Data System (ADS)
Moretti, Isabelle; Callot, Jean Paul
2012-08-01
The basic assumption of quantitative structural geology is the preservation of material during deformation. However the hypothesis of volume conservation alone does not help to predict past or future geometries and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be on average valid. However, neither the length nor the thicknesses are preserved. We propose that in real cases, the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, beside interpolation, were the determination of extrapolation performance of ANN model, which was developed for the prediction of DO content in the Danube River, and the assessment of relationship between the significance of inputs and prediction error in the presence of values which were of out of the range of training. The applied ANN is a polynomial neural network (PNN) which performs embedded selection of most important inputs during learning, and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of inputs. Available dataset that contained 1912 monitoring records for 17 water quality parameters was split into a "regular" subset that contains normally distributed and low variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R 2 =0.82), but it was not robust in extrapolation (R 2 =0.63). The analysis of extrapolation results has shown that the prediction errors are correlated with the significance of inputs. Namely, the out-of-training range values of the inputs with low importance do not affect significantly the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.
Mode 1 crack surface displacements for a round compact specimen subject to a couple and force
NASA Technical Reports Server (NTRS)
Gross, B.
1979-01-01
Mode I displacement coefficients along the crack surface are presented for a radially cracked round compact specimen, treated as a plane elastostatic problem, subjected to two types of loading; a uniform tensile stress and a nominal bending stress distribution across the net section. By superposition the resultant displacement coefficient or the corresponding influence coefficient can be obtained for any practical load location. Load line displacements are presented for A/D ratios ranging from 0.40 to 0.95, where A is the crack length measured from the crack mouth to the crack tip and D is the specimen diameter. Through a linear extrapolation procedure crack mouth displacements are also obtained. Experimental evidence shows that the results are valid over the range of A/D ratios analyzed for a practical pin loaded round compact specimen.
An Experimental Investigation of Chemically-Reacting, Gas-Phase Turbulent Jets
1991-04-12
the work is that the flame length , as estimated from the temperature measurements, varies with changes in Reynolds number, suggesting that the mixing...field flame length extrapolated to phi = 0, that increases with increasing Re for Re 20,000 and then decreases with increasing Re for Re = 20,000. The
Derosa, Pedro A
2009-06-01
A computationally cheap approach combining time-independent density functional theory (TIDFT) and semiempirical methods with an appropriate extrapolation procedure is proposed to accurately estimate geometrical and electronic properties of conjugated polymers using just a small set of oligomers. The highest occupied molecular orbital-lowest unoccupied molecular orbital gap (HLG) obtained at a TIDFT level (B3PW91) for two polymers, trans-polyacetylene--the simplest conjugated polymer, and a much larger poly(2-methoxy-5-(2,9-ethyl-hexyloxy)-1,4-phenylenevinylene (MEH-PPV) polymer converge to virtually the same asymptotic value than the excitation energy obtained with time-dependent DFT (TDDFT) calculations using the same functional. For TIDFT geometries, the HLG is found to converge to a value within the experimentally accepted range for the band gap of these polymers, when an exponential extrapolation is used; however if semiempirical geometries are used, a linear fit of the HLG versus 1/n is found to produce the best results. Geometrical parameters are observed to reach a saturation value in good agreement with experimental information, within the length of oligomers calculated here and no extrapolation was considered necessary. Finally, the performance of three different semiempirical methods (AM1, PM3, and MNDO) and for the TIDFT calculations, the performance of 7 different full electron basis sets (6-311+G**, 6-31+ +G**, 6-311+ +G**, 6-31+G**, 6-31G**, 6-31+G*, and 6-31G) is compared and it is determined that the choice of semiempirical method or the basis set does not significantly affect the results. 2008 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Garcia-Fernández, Miquel; Rius, Antonio; Notarpietro, Riccardo; von Engeln, Axel; Olivares-Pulido, Germán.; Aragón-Àngel, Àngela; García-Rigo, Alberto
2017-08-01
The new radio-occultation (RO) instrument on board the future EUMETSAT Polar System-Second Generation (EPS-SG) satellites, flying at a height of 820 km, is primarily focusing on neutral atmospheric profiling. It will also provide an opportunity for RO ionospheric sounding, but only below impact heights of 500 km, in order to guarantee a full data gathering of the neutral part. This will leave a gap of 320 km, which impedes the application of the direct inversion techniques to retrieve the electron density profile. To overcome this challenge, we have looked for new ways (accurate and simple) of extrapolating the electron density (also applicable to other low-Earth orbiting, LEO, missions like CHAMP): a new Vary-Chap Extrapolation Technique (VCET). VCET is based on the scale height behavior, linearly dependent on the altitude above hmF2. This allows extrapolating the electron density profile for impact heights above its peak height (this is the case for EPS-SG), up to the satellite orbital height. VCET has been assessed with more than 3700 complete electron density profiles obtained in four representative scenarios of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) in the United States and the Formosa Satellite Mission 3 (FORMOSAT-3) in Taiwan, in solar maximum and minimum conditions, and geomagnetically disturbed conditions, by applying an updated Improved Abel Transform Inversion technique to dual-frequency GPS measurements. It is shown that VCET performs much better than other classical Chapman models, with 60% of occultations showing relative extrapolation errors below 20%, in contrast with conventional Chapman model extrapolation approaches with 10% or less of the profiles with relative error below 20%.
Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K
2013-12-01
Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern. The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.
Nonlinear cancer response at ultralow dose: a 40800-animal ED(001) tumor and biomarker study.
Bailey, George S; Reddy, Ashok P; Pereira, Clifford B; Harttig, Ulrich; Baird, William; Spitsbergen, Jan M; Hendricks, Jerry D; Orner, Gayle A; Williams, David E; Swenberg, James A
2009-07-01
Assessment of human cancer risk from animal carcinogen studies is severely limited by inadequate experimental data at environmentally relevant exposures and by procedures requiring modeled extrapolations many orders of magnitude below observable data. We used rainbow trout, an animal model well-suited to ultralow-dose carcinogenesis research, to explore dose-response down to a targeted 10 excess liver tumors per 10000 animals (ED(001)). A total of 40800 trout were fed 0-225 ppm dibenzo[a,l]pyrene (DBP) for 4 weeks, sampled for biomarker analyses, and returned to control diet for 9 months prior to gross and histologic examination. Suspect tumors were confirmed by pathology, and resulting incidences were modeled and compared to the default EPA LED(10) linear extrapolation method. The study provided observed incidence data down to two above-background liver tumors per 10000 animals at the lowest dose (that is, an unmodeled ED(0002) measurement). Among nine statistical models explored, three were determined to fit the liver data well-linear probit, quadratic logit, and Ryzin-Rai. None of these fitted models is compatible with the LED(10) default assumption, and all fell increasingly below the default extrapolation with decreasing DBP dose. Low-dose tumor response was also not predictable from hepatic DBP-DNA adduct biomarkers, which accumulated as a power function of dose (adducts = 100 x DBP(1.31)). Two-order extrapolations below the modeled tumor data predicted DBP doses producing one excess cancer per million individuals (ED(10)(-6)) that were 500-1500-fold higher than that predicted by the five-order LED(10) extrapolation. These results are considered specific to the animal model, carcinogen, and protocol used. They provide the first experimental estimation in any model of the degree of conservatism that may exist for the EPA default linear assumption for a genotoxic carcinogen.
Effect of Curved Radial Vane Cavity Arrangements on Predicted Inter-Turbine Burner (ITB) Performance
2007-06-01
on for only short duration and at certain points in the conditions and with flame lengths up to 50% shorter than aircraft’s mission profile to...remaining exit parameters are as finite-rate flame length , combustion heat release, and extrapolated from the interior domain. Mass flow rates ITB exit
2006-07-01
linearity; (4) determination of polarization as a function of radiographic parameters ; and (5) determination of the effect of binding energy on... hydroxyapatite . Type II calcifications are known to be associated with carcinoma, while it is generally accepted that the exclusive finding of type I...concentrate on the extrapolation of the Rh target spectra. The extrapolation was split in two parts. Below 24 keV we used the parameters from Boone’s paper
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
A method for analyzing shock coalescence which includes three dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with and has been combined with a nonlinear sonic boom extrapolation program which is based on the method of characteristics. The extrapolation program, is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.
Mode I crack surface displacements for a round compact specimen subject to a couple and force
NASA Technical Reports Server (NTRS)
Gross, B.
1979-01-01
Mode I displacement coefficients along the crack surface are presented for a radially cracked round compact specimen, treated as a plane elastostatic problem, subjected to two types of loading; a uniform tensile stress and a nominal bending stress distribution across the net section. By superposition the resultant displacement coefficient or the corresponding influence coefficient can be obtained for any practical load location. Load line displacements are presented for A/D ratios ranging from 0.40 to 0.95, where A is the crack length measured from the crack mouth to the crack tip and D is the specimen diameter. Through a linear extrapolation procedure crack mouth displacements are also obtained. Experimental evidence shows that the results of this study are valid over the range of A/D ratios analyzed for a practical pin loaded round compact specimen.
Present constraints on the H-dibaryon at the physical point from Lattice QCD
Beane, S. R.; Chang, E.; Detmold, W.; ...
2011-11-10
The current constraints from Lattice QCD on the existence of the H-dibaryon are discussed. With only two significant Lattice QCD calculations of the H-dibaryon binding energy at approximately the same lattice spacing, the form of the chiral and continuum extrapolations to the physical point are not determined. In this brief report, an extrapolation that is quadratic in the pion mass, motivated by low-energy effective field theory, is considered. An extrapolation that is linear in the pion mass is also considered, a form that has no basis in the effective field theory, but is found to describe the light-quark mass dependencemore » observed in Lattice QCD calculations of the octet baryon masses. In both cases, the extrapolation to the physical pion mass allows for a bound H-dibaryon or a near-threshold scattering state.« less
Non-linearities in Holocene floodplain sediment storage
NASA Astrophysics Data System (ADS)
Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten
2013-04-01
Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, which is hitherto not fully quantified in contrast to other global sediment budget components. Quantification and dating of floodplain sediment storage is data and financially demanding, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we will present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region, show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period on. Contrary to previous assumptions, detailed dating of this morphological change at different shows an important non-linearity in geomorphologic changes of the floodplain, both between and within cross sections. A second example comes from the Pre-Alpine French Valdaine region, where non-linearities and complex system behavior exists between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated image of increasing and decreasing floodplain sediment storage, which hampers the image of increasing sediment accumulation over time. Although fill stages may correspond with large quantities of deposited sediment and traditionally calculated sedimentation rates for such stages are high, they do not necessary correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. Detailed floodplain sediment quantification for this catchments shows that a strong multifractality is present in the scaling relationship between sediment storage and catchment area, depending on geomorphic landscape properties. Extrapolation of data from one spatial scale to another inevitably leads to large errors: when only the data of the upper floodplains are considered, a regression analysis results in an overestimation of total floodplain deposition for the entire catchment of circa 115%. This example demonstrates multifractality and related non-linearity in scaling relationships, which influences extrapolations beyond the initial range of measurements. These different examples indicate how traditional extrapolation techniques and assumptions in sediment budget studies can be challenged by field data, further complicating our understanding of these systems. Although simplifications are often necessary when working on large spatial scale, such non-linearities may form challenges for a better understanding of system behavior.
NASA Astrophysics Data System (ADS)
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, the experiments aimed at analyzing thermally the exhaust valve in an air-cooled internal combustion engine and estimating the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is too short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductance were calculated. The results of linear extrapolation and inverse methods have similar trends, and based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, due to the error analysis, a linear extrapolation method using inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that by increasing the contact pressure the thermal contact conductance increases substantially. In addition, by increasing the engine speed the thermal contact conductance decreases. On the other hand, by boosting the air speed the thermal contact conductance increases, and by raising the heat flux the thermal contact conductance reduces. The average calculated error equals to 12.9 %.
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear if amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes in the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy correspondingly, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
Theory of end-labeled free-solution electrophoresis: is the end effect important?
Chubynsky, Mykyta V; Slater, Gary W
2014-03-01
In the theory of free-solution electrophoresis of a polyelectrolyte (such as the DNA) conjugated with a "drag-tag," the conjugate is divided into segments of equal hydrodynamic friction and its electrophoretic mobility is calculated as a weighted average of the mobilities of individual segments. If all the weights are assumed equal, then for an electrically neutral drag-tag, the elution time t is predicted to depend linearly on the inverse DNA length 1/M. While it is well-known that the equal-weights assumption is approximate and in reality the weights increase toward the ends, this "end effect" has been assumed to be small, since in experiments the t(1/M) dependence seems to be nearly perfectly linear. We challenge this assumption pointing out that some experimental linear fits do not extrapolate to the free (i.e. untagged) DNA elution time in the limit 1/M→0, indicating nonlinearity outside the fitting range. We show that a theory for a flexible polymer taking the end effect into account produces a nonlinear curve that, however, can be fitted with a straight line over a limited range of 1/M typical of experiments, but with a "wrong" intercept, which explains the experimental results without additional assumptions. We also study the influence of the flexibilities of the charged and neutral parts. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Lee, G. H.; Arnold, S. T.; Eaton, J. G.; Sarkas, H. W.; Bowen, K. H.; Ludewigt, C.; Haberland, H.
1991-03-01
The photodetachment spectra of (H2O){/n =2-69/-} and (NH3){/n =41-1100/-} have been recorded, and vertical detachment energies (VDEs) were obtained from the spectra. For both systems, the cluster anion VDEs increase smoothly with increasing sizes and most species plot linearly with n -1/3, extrapolating to a VDE ( n=∞) value which is very close to the photoelectric threshold energy for the corresponding condensed phase solvated electron system. The linear extrapolation of this data to the analogous condensed phase property suggests that these cluster anions are gas phase counterparts to solvated electrons, i.e. they are embryonic forms of hydrated and ammoniated electrons which mature with increasing cluster size toward condensed phase solvated electrons.
A regularization method for extrapolation of solar potential magnetic fields
NASA Technical Reports Server (NTRS)
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
Fluxes all of the time? A primer on the temporal representativeness of FLUXNET
NASA Astrophysics Data System (ADS)
Chu, Housen; Baldocchi, Dennis D.; John, Ranjeet; Wolf, Sebastian; Reichstein, Markus
2017-02-01
FLUXNET, the global network of eddy covariance flux towers, provides the largest synthesized data set of CO2, H2O, and energy fluxes. To achieve the ultimate goal of providing flux information "everywhere and all of the time," studies have attempted to address the representativeness issue, i.e., whether measurements taken in a set of given locations and measurement periods can be extrapolated to a space- and time-explicit extent (e.g., terrestrial globe, 1982-2013 climatological baseline). This study focuses on the temporal representativeness of FLUXNET and tests whether site-specific measurement periods are sufficient to capture the natural variability of climatological and biological conditions. FLUXNET is unevenly representative across sites in terms of the measurement lengths and potentials of extrapolation in time. Similarity of driver conditions among years generally enables the extrapolation of flux information beyond measurement periods. Yet such extrapolation potentials are further constrained by site-specific variability of driver conditions. Several driver variables such as air temperature, diurnal temperature range, potential evapotranspiration, and normalized difference vegetation index had detectable trends and/or breakpoints within the baseline period, and flux measurements generally covered similar and biased conditions in those drivers. About 38% and 60% of FLUXNET sites adequately sampled the mean conditions and interannual variability of all driver conditions, respectively. For long-record sites (≥15 years) the percentages increased to 59% and 69%, respectively. However, the justification of temporal representativeness should not rely solely on the lengths of measurements. Whenever possible, site-specific consideration (e.g., trend, breakpoint, and interannual variability in drivers) should be taken into account.
Error minimization algorithm for comparative quantitative PCR analysis: Q-Anal.
OConnor, William; Runquist, Elizabeth A
2008-07-01
Current methods for comparative quantitative polymerase chain reaction (qPCR) analysis, the threshold and extrapolation methods, either make assumptions about PCR efficiency that require an arbitrary threshold selection process or extrapolate to estimate relative levels of messenger RNA (mRNA) transcripts. Here we describe an algorithm, Q-Anal, that blends elements from current methods to by-pass assumptions regarding PCR efficiency and improve the threshold selection process to minimize error in comparative qPCR analysis. This algorithm uses iterative linear regression to identify the exponential phase for both target and reference amplicons and then selects, by minimizing linear regression error, a fluorescence threshold where efficiencies for both amplicons have been defined. From this defined fluorescence threshold, cycle time (Ct) and the error for both amplicons are calculated and used to determine the expression ratio. Ratios in complementary DNA (cDNA) dilution assays from qPCR data were analyzed by the Q-Anal method and compared with the threshold method and an extrapolation method. Dilution ratios determined by the Q-Anal and threshold methods were 86 to 118% of the expected cDNA ratios, but relative errors for the Q-Anal method were 4 to 10% in comparison with 4 to 34% for the threshold method. In contrast, ratios determined by an extrapolation method were 32 to 242% of the expected cDNA ratios, with relative errors of 67 to 193%. Q-Anal will be a valuable and quick method for minimizing error in comparative qPCR analysis.
From repulsive to attractive glass: A rheological investigation.
Zhou, Zhi; Jia, Di; Hollingsworth, Javoris V; Cheng, He; Han, Charles C
2015-12-21
Linear rheological properties and yielding behavior of polystyrene core and poly (N-isopropylacrylamide) (PNIPAM) shell microgels were investigated to understand the transition from repulsive glass (RG) to attractive glass (AG) and the A3 singularity. Due to the volume phase transition of PNIPAM in aqueous solution, the microgel-microgel interaction potential gradually changes from repulsive to attractive. In temperature and frequency sweep experiments, the storage modulus (G') and loss modulus (G″) increased discontinuously when crossing the RG-to-AG transition line, while G' at low frequency exhibited a different volume fraction (Φ) dependence. By fitting the data of RG and AG, and then extrapolating to high volume fraction, the difference between RG and AG decreased and the existence of A3 singularity was verified. Dynamic strain sweep experiments were conducted to confirm these findings. RG at 25 °C exhibited one-step yielding, whereas AG at 40 °C showed a typical two-step yielding behavior; the first yielding strain remained constant and the second one gradually decreased as the volume fraction increased. By extrapolating the second yield strain to that of the first one, the predicted A3 singularity was at 0.61 ± 0.02. At 37 °C, when Φeff = 0.59, AG showed one step yielding as the length of the attractive bond increased. The consistency and agreement of the experimental results reaffirmed the existence of A3 singularity, where the yielding behavior of RG and AG became identical.
Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.
2013-01-01
Background and Aims Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Methods Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1–9 years per site from 1998 to 2011. Key Results The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although dispersions from the average were obviously observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, and involved an exponential pattern Conclusions The trees adjust their phenological timings according to linear patterns. Thus, shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis could correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions. PMID:24201138
Variational Integration for Ideal Magnetohydrodynamics and Formation of Current Singularities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yao
Coronal heating has been a long-standing conundrum in solar physics. Parker's conjecture that spontaneous current singularities lead to nanoflares that heat the corona has been controversial. In ideal magnetohydrodynamics (MHD), can genuine current singularities emerge from a smooth 3D line-tied magnetic field? To numerically resolve this issue, the schemes employed must preserve magnetic topology exactly to avoid artificial reconnection in the presence of (nearly) singular current densities. Structure-preserving numerical methods are favorable for mitigating numerical dissipation, and variational integration is a powerful machinery for deriving them. However, successful applications of variational integration to ideal MHD have been scarce. In thismore » thesis, we develop variational integrators for ideal MHD in Lagrangian labeling by discretizing Newcomb's Lagrangian on a moving mesh using discretized exterior calculus. With the built-in frozen-in equation, the schemes are free of artificial reconnection, hence optimal for studying current singularity formation. Using this method, we first study a fundamental prototype problem in 2D, the Hahm-Kulsrud-Taylor (HKT) problem. It considers the effect of boundary perturbations on a 2D plasma magnetized by a sheared field, and its linear solution is singular. We find that with increasing resolution, the nonlinear solution converges to one with a current singularity. The same signature of current singularity is also identified in other 2D cases with more complex magnetic topologies, such as the coalescence instability of magnetic islands. We then extend the HKT problem to 3D line-tied geometry, which models the solar corona by anchoring the field lines in the boundaries. The effect of such geometry is crucial in the controversy over Parker's conjecture. The linear solution, which is singular in 2D, is found to be smooth. However, with finite amplitude, it can become pathological above a critical system length. The nonlinear solution turns out smooth for short systems. Nonetheless, the scaling of peak current density vs. system length suggests that the nonlinear solution may become singular at a finite length. With the results in hand, we cannot confirm or rule out this possibility conclusively, since we cannot obtain solutions with system lengths near the extrapolated critical value.« less
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution to the Milne problem in the steady state with isotropic scattering phase function. The properties of the medium are considered as stochastic ones with Gaussian or exponential distributions and hence the problem treated as a stochastic integro-differential equation. To get an explicit form for the radiant energy density, the linear extrapolation distance, reflectivity and transmissivity in the deterministic case the problem is solved using the Pomraning-Eddington method. The obtained solution is found to be dependent on the optical space variable and thickness of the medium which are considered as random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
A study of alternative schemes for extrapolation of secular variation at observatories
Alldredge, L.R.
1976-01-01
The geomagnetic secular variation is not well known. This limits the useful life of geomagnetic models. The secular variation is usually assumed to be linear with time. It is found that attenative schemes that employ quasiperiodic variations from internal and external sources can improve the extrapolation of secular variation at high-quality observatories. Although the schemes discussed are not yet fully applicable in worldwide model making, they do suggest some basic ideas that may be developed into useful tools in future model work. ?? 1976.
Extrapolation to Nonequilibrium from Coarse-Grained Response Theory
NASA Astrophysics Data System (ADS)
Basu, Urna; Helden, Laurent; Krüger, Matthias
2018-05-01
Nonlinear response theory, in contrast to linear cases, involves (dynamical) details, and this makes application to many-body systems challenging. From the microscopic starting point we obtain an exact response theory for a small number of coarse-grained degrees of freedom. With it, an extrapolation scheme uses near-equilibrium measurements to predict far-from-equilibrium properties (here, second order responses). Because it does not involve system details, this approach can be applied to many-body systems. It is illustrated in a four-state model and in the near critical Ising model.
Predicting the Where and the How Big of Solar Flares
NASA Astrophysics Data System (ADS)
Barnes, Graham; Leka, K. D.; Gilchrist, Stuart
2017-08-01
The approach to predicting solar flares generally characterizes global properties of a solar active region, for example the total magnetic flux or the total length of a sheared magnetic neutral line, and compares new data (from which to make a prediction) to similar observations of active regions and their associated propensity for flare production. We take here a different tack, examining solar active regions in the context of their energy storage capacity. Specifically, we characterize not the region as a whole, but summarize the energy-release prospects of different sub-regions within, using a sub-area analysis of the photospheric boundary, the CFIT non-linear force-free extrapolation code, and the Minimum Current Corona model. We present here early results from this approach whose objective is to understand the different pathways available for regions to release stored energy, thus eventually providing better estimates of the where (what sub-areas are storing how much energy) and the how big (how much energy is stored, and how much is available for release) of solar flares.
On the relations between cratonic lithosphere thickness, plate motions, and basal drag
Artemieva, I.M.; Mooney, W.D.
2002-01-01
An overview of seismic, thermal, and petrological evidence on the structure of Precambrian lithosphere suggests that its local maximum thickness is highly variable (140-350 km), with a bimodal distribution for Archean cratons (200-220 km and 300-350 km). We discuss the origin of such large differences in lithospheric thickness, and propose that the lithospheric base can have large depth variations over short distances. The topography of Bryce Canyon (western USA) is proposed as an inverted analog of the base of the lithosphere. The horizontal and vertical dimensions of Archean cratons are strongly correlated: larger cratons have thicker lithosphere. Analysis of the bimodal distribution of lithospheric thickness in Archean cratons shows that the "critical" surface area for cratons to have thick (>300 km) keels is >6-8 ?? 106 km2 . Extrapolation of the linear trend between Archean lithospheric thickness and cratonic area to zero area yields a thickness of 180 km. This implies that the reworking of Archean crust should be accompanied by thinning and reworking of the entire lithospheric column to a thickness of 180 km in accord with thickness estimates for Proterozoic lithosphere. Likewise, extrapolation of the same trend to the size equal to the total area of all Archean cratons implies that the lithospheric thickness of a hypothesized early Archean supercontinent could have been 350-450 km decreasing to 280-400 km for Gondwanaland. We evaluate the basal drag model as a possible mechanism that may thin the cratonic lithosphere. Inverse correlations are found between lithospheric thickness and (a) fractional subduction length and (b) the effective ridge length. In agreement with theoretical predictions, lithospheric thickness of Archean keels is proportional to the square root of the ratio of the craton length (along the direction of plate motion) to the plate velocity. Large cratons with thick keels and low plate velocities are less eroded by basal drag than small fast-moving cratons. Basal drag may have varied in magnitude over the past 4 Ga. Higher mantle temperatures in the Archean would have resulted in lower mantle viscosity. This in turn would have reduced basal drag and basal erosion, and promoted the preservation of thick (>300 km) Archean keels, even if plate velocities were high during the Archean. ?? 2002 Elsevier Science B.V. All rights reserved.
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-upsmore » that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-upsmore » that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.« less
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
NASA Astrophysics Data System (ADS)
Baek, Sang-In; Kim, Sung-Jo; Kim, Jong-Hyun
2015-09-01
Although the homeotropic alignment of liquid crystals is widely used in LCD TVs, no easy method exists to measure its anchoring coefficient. In this study, we propose an easy and convenient measurement technique in which a polarizing optical microscope is used in the reflective mode with an objective lens having a low depth of focus. All measurements focus on the reflection of light near the interface between the liquid crystal and alignment layer. The change in the reflected light is measured by applying an electric field. We model the response of the director of the liquid crystal to the electric field and, thus, the change in reflectance. By adjusting the extrapolation length in the calculation, we match the experimental and calculated results and obtain the anchoring coefficient. In our experiment, the extrapolation lengths were 0.31 ± 0.04 μm, 0.32 ± 0.08 μm, and 0.23 ± 0.05 μm for lecithin, AL-64168, and SE-5662, respectively.
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
Tran, Van; Little, Mark P
2017-11-01
Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation is significantly elevated for lymphoreticular tumours 1.16 (95% CI 1.06, 1.31), elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76) and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. For neutron exposure most endpoints, malignant and non-malignant, show downward curvature in the dose response, and for most endpoints this is statistically significant (p < 0.05). Associated with this, the low-dose extrapolation factor associated with neutron exposure is generally statistically significantly less than 1 for most malignant and non-malignant endpoints, with central estimates mostly in the range 0.1-0.9. In contrast to the situation at higher dose rates, there are statistically non-significant decreases of risk per unit dose at gamma dose rates of less than or equal to 5 mGy/h for most malignant endpoints, and generally non-significant increases in risk per unit dose at gamma dose rates ≤5 mGy/h for most non-malignant endpoints. Associated with this, the dose-rate extrapolation factor, the ratio of high dose-rate to low dose-rate (≤5 mGy/h) gamma dose response slopes, for many tumour sites is in the range 1.2-2.3, albeit not statistically significantly elevated from 1, while for most non-malignant endpoints the gamma dose-rate extrapolation factor is less than 1, with most estimates in the range 0.2-0.8. After neutron exposure there are non-significant indications of lower risk per unit dose at dose rates ≤5 mGy/h compared to higher dose rates for most malignant endpoints, and for all tumours (p = 0.001), and respiratory tumours (p = 0.007) this reduction is conventionally statistically significant; for most non-malignant outcomes risks per unit dose non-significantly increase at lower dose rates. Associated with this, the neutron dose-rate extrapolation factor is less than 1 for most malignant and non-malignant endpoints, in many cases statistically significantly so, with central estimates mostly in the range 0.0-0.2.
A neural network for the prediction of performance parameters of transformer cores
NASA Astrophysics Data System (ADS)
Nussbaum, C.; Booth, T.; Ilo, A.; Pfützner, H.
1996-07-01
The paper shows that Artificial Neural Networks (ANNs) may offer new possibilities for the prediction of transformer core performance parameters, i.e. no-load power losses and excitation. Basically this technique enables simulations with respect to different construction parameters most notably the characteristics of corner designs, i.e. the overlap length, the air gap length, and the number of steps. However, without additional physical knowledge incorporated into the ANN extrapolation beyond the training data limits restricts the predictive performance.
Power maps and wavefront for progressive addition lenses in eyeglass frames.
Mejía, Yobani; Mora, David A; Díaz, Daniel E
2014-10-01
To evaluate a method for measuring the cylinder, sphere, and wavefront of progressive addition lenses (PALs) in eyeglass frames. We examine the contour maps of cylinder, sphere, and wavefront of a PAL assembled in an eyeglass frame using an optical system based on a Hartmann test. To reduce the data noise, particularly in the border of the eyeglass frame, we implement a method based on the Fourier analysis to extrapolate spots outside the eyeglass frame. The spots are extrapolated up to a circular pupil that circumscribes the eyeglass frame and compared with data obtained from a circular uncut PAL. By using the Fourier analysis to extrapolate spots outside the eyeglass frame, we can remove the edge artifacts of the PAL within its frame and implement the modal method to fit wavefront data with Zernike polynomials within a circular aperture that circumscribes the frame. The extrapolated modal maps from framed PALs accurately reflect maps obtained from uncut PALs and provide smoothed maps for the cylinder and sphere inside the eyeglass frame. The proposed method for extrapolating spots outside the eyeglass frame removes edge artifacts of the contour maps (wavefront, cylinder, and sphere), which may be useful to facilitate measurements such as the length and width of the progressive corridor for a PAL in its frame. The method can be applied to any shape of eyeglass frame.
NASA Astrophysics Data System (ADS)
Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.
We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω¨=AΩ˙, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages to more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
Minimally invasive estimation of ventricular dead space volume through use of Frank-Starling curves.
Davidson, Shaun; Pretty, Chris; Pironet, Antoine; Desaive, Thomas; Janssen, Nathalie; Lambermont, Bernard; Morimont, Philippe; Chase, J Geoffrey
2017-01-01
This paper develops a means of more easily and less invasively estimating ventricular dead space volume (Vd), an important, but difficult to measure physiological parameter. Vd represents a subject and condition dependent portion of measured ventricular volume that is not actively participating in ventricular function. It is employed in models based on the time varying elastance concept, which see widespread use in haemodynamic studies, and may have direct diagnostic use. The proposed method involves linear extrapolation of a Frank-Starling curve (stroke volume vs end-diastolic volume) and its end-systolic equivalent (stroke volume vs end-systolic volume), developed across normal clinical procedures such as recruitment manoeuvres, to their point of intersection with the y-axis (where stroke volume is 0) to determine Vd. To demonstrate the broad applicability of the method, it was validated across a cohort of six sedated and anaesthetised male Pietrain pigs, encompassing a variety of cardiac states from healthy baseline behaviour to circulatory failure due to septic shock induced by endotoxin infusion. Linear extrapolation of the curves was supported by strong linear correlation coefficients of R = 0.78 and R = 0.80 average for pre- and post- endotoxin infusion respectively, as well as good agreement between the two linearly extrapolated y-intercepts (Vd) for each subject (no more than 7.8% variation). Method validity was further supported by the physiologically reasonable Vd values produced, equivalent to 44.3-53.1% and 49.3-82.6% of baseline end-systolic volume before and after endotoxin infusion respectively. This method has the potential to allow Vd to be estimated without a particularly demanding, specialised protocol in an experimental environment. Further, due to the common use of both mechanical ventilation and recruitment manoeuvres in intensive care, this method, subject to the availability of multi-beat echocardiography, has the potential to allow for estimation of Vd in a clinical environment.
Microsecond kinetics in model single- and double-stranded amylose polymers.
Sattelle, Benedict M; Almond, Andrew
2014-05-07
Amylose, a component of starch with increasing biotechnological significance, is a linear glucose polysaccharide that self-organizes into single- and double-helical assemblies. Starch granule packing, gelation and inclusion-complex formation result from finely balanced macromolecular kinetics that have eluded precise experimental quantification. Here, graphics processing unit (GPU) accelerated multi-microsecond aqueous simulations are employed to explore conformational kinetics in model single- and double-stranded amylose. The all-atom dynamics concur with prior X-ray and NMR data while surprising and previously overlooked microsecond helix-coil, glycosidic linkage and pyranose ring exchange are hypothesized. In a dodecasaccharide, single-helical collapse was correlated with linkages and rings transitioning from their expected syn and (4)C1 chair conformers. The associated microsecond exchange rates were dependent on proximity to the termini and chain length (comparing hexa- and trisaccharides), while kinetic features of dodecasaccharide linkage and ring flexing are proposed to be a good model for polymers. Similar length double-helices were stable on microsecond timescales but the parallel configuration was sturdier than the antiparallel equivalent. In both, tertiary organization restricted local chain dynamics, implying that simulations of single amylose strands cannot be extrapolated to dimers. Unbiased multi-microsecond simulations of amylose are proposed as a valuable route to probing macromolecular kinetics in water, assessing the impact of chemical modifications on helical stability and accelerating the development of new biotechnologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peeters, A. G.; Rath, F.; Buchholz, R.
2016-08-15
It is shown that Ion Temperature Gradient turbulence close to the threshold exhibits a long time behaviour, with smaller heat fluxes at later times. This reduction is connected with the slow growth of long wave length zonal flows, and consequently, the numerical dissipation on these flows must be sufficiently small. Close to the nonlinear threshold for turbulence generation, a relatively small dissipation can maintain a turbulent state with a sizeable heat flux, through the damping of the zonal flow. Lowering the dissipation causes the turbulence, for temperature gradients close to the threshold, to be subdued. The heat flux then doesmore » not go smoothly to zero when the threshold is approached from above. Rather, a finite minimum heat flux is obtained below which no fully developed turbulent state exists. The threshold value of the temperature gradient length at which this finite heat flux is obtained is up to 30% larger compared with the threshold value obtained by extrapolating the heat flux to zero, and the cyclone base case is found to be nonlinearly stable. Transport is subdued when a fully developed staircase structure in the E × B shearing rate forms. Just above the threshold, an incomplete staircase develops, and transport is mediated by avalanche structures which propagate through the marginally stable regions.« less
Biennial-Aligned Lunisolar-Forcing of ENSO: Implications for Simplified Climate Models
NASA Astrophysics Data System (ADS)
Pukite, P. R.
2017-12-01
By solving Laplace's tidal equations along the equatorial Pacific thermocline, assuming a delayed-differential effective gravity forcing due to a combined lunar+solar (lunisolar) stimulus, we are able to precisely match ENSO periodic variations over wide intervals. The underlying pattern is difficult to decode by conventional means such as spectral analysis, which is why it has remained hidden for so long, despite the excellent agreement in the time-domain. What occurs is that a non-linear seasonal modulation with monthly and fortnightly lunar impulses along with a biennially-aligned "see-saw" is enough to cause a physical aliasing and thus multiple folding in the frequency spectrum. So, instead of a conventional spectral tidal decomposition, we opted for a time-domain cross-validating approach to calibrate the amplitude and phasing of the lunisolar cycles. As the lunar forcing consists of three fundamental periods (draconic, anomalistic, synodic), we used the measured Earth's length-of-day (LOD) decomposed and resolved at a monthly time-scale [1] to align the amplitude and phase precisely. Even slight variations from the known values of the long-period tides will degrade the fit, so a high-resolution calibration is possible. Moreover, a narrow training segment from 1880-1920 using NINO34/SOI data is adequate to extrapolate the cycles of the past 100 years (see attached figure). To further understand the biennial impact of a yearly differential-delay, we were able to also decompose using difference equations the historical sea-level-height readings at Sydney harbor to clearly expose the ENSO behavior. Finally, the ENSO lunisolar model was validated by back-extrapolating to Unified ENSO coral proxy (UEP) records dating to 1650. The quasi-biennial oscillation (QBO) behavior of equatorial stratospheric winds derives following a similar pattern to ENSO via the tidal equations, but with an emphasis on draconic forcing. This improvement in ENSO and QBO understanding has implications for vastly simplifying global climate models due to the straightforward application of a well-known and well-calibrated forcing. [1] Na, Sung-Ho, et al. "Characteristics of Perturbations in Recent Length of Day and Polar Motion." Journal of Astronomy and Space Sciences 30 (2013): 33-41.
Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method
NASA Astrophysics Data System (ADS)
Taitano, William; Knoll, Dana; Chacon, Luis
2009-11-01
The use of fully implicit time integration methods in kinetic simulation is still area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase-space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single species electrostatic problems and coupled ion-electron electrostatic problems. [4pt] [1] H. J. Kim, L. Chac'on, G. Lapenta, ``Fully implicit particle in cell algorithm,'' 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO
Slaughter, Andrew R; Palmer, Carolyn G; Muller, Wilhelmine J
2007-04-01
In aquatic ecotoxicology, acute to chronic ratios (ACRs) are often used to predict chronic responses from available acute data to derive water quality guidelines, despite many problems associated with this method. This paper explores the comparative protectiveness and accuracy of predicted guideline values derived from the ACR, linear regression analysis (LRA), and multifactor probit analysis (MPA) extrapolation methods applied to acute toxicity data for aquatic macroinvertebrates. Although the authors of the LRA and MPA methods advocate the use of extrapolated lethal effects in the 0.01% to 10% lethal concentration (LC0.01-LC10) range to predict safe chronic exposure levels to toxicants, the use of an extrapolated LC50 value divided by a safety factor of 5 was in addition explored here because of higher statistical confidence surrounding the LC50 value. The LRA LC50/5 method was found to compare most favorably with available experimental chronic toxicity data and was therefore most likely to be sufficiently protective, although further validation with the use of additional species is needed. Values derived by the ACR method were the least protective. It is suggested that there is an argument for the replacement of ACRs in developing water quality guidelines by the LRA LC50/5 method.
Excitation of surface plasmons in Al-coated SNOM tips
NASA Astrophysics Data System (ADS)
Palm, Viktor; Rähn, Mihkel; Jäme, Joonas; Hizhnyakov, Vladimir
2012-10-01
The mesoscopic effect of spectral modulation occurring due to the interference of two photonic fiber modes filtered out by a metal-coated SNOM tip is used to observe the surface plasmon polariton (SPP) excitation in SNOM tips. In a spectrum of the broadband light transmitted by a SNOM tip a region of highly regular spectral modulation can be found, indicating the spectral interval in which only two photonic modes (apparently HE11 and TM01) are transmitted with significant and comparable amplitudes. The modulation period yields the value of optical path difference (OPD) for this pair of modes. Due to the multimode fiber's inherent modal dispersion, this OPD value depends linearly on the fiber tail length l. An additional contribution to OPD can be generated in a metal-coated SNOM tip due to a mode-dependent photon-plasmon coupling strength resulting in generation of SPPs with different propagation velocities. For an Al-coated 200 nm SNOM tip spectra of transmitted light have been registered for ten different l values. An extrapolation of the linear OPD (l) dependence to l=0 yields a significant residual OPD value, indicating according to our theoretical considerations a mode-selective SPP excitation in the metal-coated tip. The modal dispersion is shown to switch its sign in the SNOM tip. First results of analogous experiments with an Al-coated 150 nm SNOM tip confirm our conclusions.
ARSENIC MODE OF ACTION AND DEVELOPING A BBDR MODEL
The current USEPA cancer risk assessment for inorganic arsenic is based on a linear extrapolation of the epidemiological data from exposed populations in Taiwan. However, proposed key events in the mode of action (MoA) for arsenic-induced cancer (which may include altered DNA me...
The Educated Guess: Determining Drug Doses in Exotic Animals Using Evidence-Based Medicine.
Visser, Marike; Oster, Seth C
2018-05-01
Lack of species-specific pharmacokinetic and pharmacodynamic data is a challenge for pharmaceutical and dose selection. If available, dose extrapolation can be accomplished via basic equations. If unavailable, several methods have been described. Linear scaling uses an established milligrams per kilograms dose based on weight. This does not allow for differences in species drug metabolism, sometimes resulting in toxicity. Allometric scaling correlates body weight and metabolic rate but fails for drugs with significant hepatic metabolism and cannot be extrapolated to avians or reptiles. Evidence-based veterinary medicine for dose design based on species similarity is discussed, considering physiologic differences between classes. Copyright © 2018 Elsevier Inc. All rights reserved.
Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties
Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon
2014-01-01
Purpose The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874
Frequency response of synthetic vocal fold models with linear and nonlinear material properties.
Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon
2012-10-01
The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.
Benhaim, Deborah; Grushka, Eli
2010-01-01
This study investigates lipophilicity determination by chromatographic measurements using the polar embedded Ascentis RP-Amide stationary phase. As a new generation of amide-functionalized silica stationary phase, the Ascentis RP-Amide column is evaluated as a possible substitution to the n-octanol/water partitioning system for lipophilicity measurements. For this evaluation, extrapolated retention factors, log k'w, of a set of diverse compounds were determined using different methanol contents in the mobile phase. The use of n-octanol enriched mobile phase enhances the relationship between the slope (S) of the extrapolation lines and the extrapolated log k'w (the intercept of the extrapolation),as well as the correlation between log P values and the extrapolated log k'w (1:1 correlation, r2 = 0.966).In addition, the use of isocratic retention factors, at 40% methanol in the mobile phase, provides a rapid tool for lipophilicity determination. The intermolecular interactions that contribute to the retention process in the Ascentis RP-Amide phase are characterized using the solvation parameter model of Abraham.The LSER system constants for the column are very similar to the LSER constants of the n-octanol/water extraction system. Tanaka radar plots are used for quick visual comparison of the system constants of the Ascentis RP-Amide column and the n-octanol/water extraction system. The results all indicate that the Ascentis RP-Amide stationary phase can provide reliable lipophilic data. Copyright 2009 Elsevier B.V. All rights reserved.
GIS Well Temperature Data from the Roosevelt Hot Springs, Utah FORGE Site
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gwynn, Mark; Hill, Jay; Allis, Rick
This is a GIS point feature shapefile representing wells, and their temperatures, that are located in the general Utah FORGE area near Milford, Utah. There are also fields that represent interpolated temperature values at depths of 200 m, 1000 m, 2000 m, 3000 m, and 4000 m. in degrees Fahrenheit. The temperature values at specific depths as mentioned above were derived as follows. In cases where the well reached a given depth (200 m and 1, 2, 3, or 4 km), the temperature is the measured temperature. For the shallower wells (and at deeper depths in the wells reaching onemore » or more of the target depths), temperatures were extrapolated from the temperature-depth profiles that appeared to have stable (re-equilibrated after drilling) and linear profiles within the conductive regime (i.e. below the water table or other convective influences such as shallow hydrothermal outflow from the Roosevelt Hydrothermal System). Measured temperatures/gradients from deeper wells (when available and reasonably close to a given well) were used to help constrain the extrapolation to greater depths. Most of the field names in the attribute table are intuitive, however HF = heat flow, intercept = the temperature at the surface (x-axis of the temperature-depth plots) based on the linear segment of the plot that was used to extrapolate the temperature profiles to greater depths, and depth_m is the total well depth. This information is also present in the shapefile metadata.« less
X-ray surface dose measurements using TLD extrapolation.
Kron, T; Elliot, A; Wong, T; Showell, G; Clubb, B; Metcalfe, P
1993-01-01
Surface dose measurements in therapeutic x-ray beams are of importance in determining the dose to the skin of patients undergoing radiotherapy. Measurements were performed in the 6-MV beam of a medical linear accelerator with LiF thermoluminescence dosimeters (TLD) using a solid water phantom. TLD chips (surface area 3.17 x 3.17 cm2) of three different thicknesses (0.230, 0.099, and 0.038 g/cm2) were used to extrapolate dose readings to an infinitesimally thin layer of LiF. This surface dose was measured for field sizes ranging from 1 x 1 cm2 to 40 x 40 cm2. The surface dose relative to maximum dose was found to be 10.0% for a field size of 5 x 5 cm2, 16.3% for 10 x 10 cm2, and 26.9% for 20 x 20 cm2. Using a 6-mm Perspex block tray in the beam increased the surface dose in these fields to 10.7%, 17.7%, and 34.2% respectively. Due to the small size of the TLD chips, TLD extrapolation is applicable also for intracavity and exit dose determinations. The technique used for in vivo dosimetry could provide clinicians information about the build up of dose up to 1-mm depth in addition to an extrapolated surface dose measurement.
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2011 CFR
2011-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2012 CFR
2012-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2013 CFR
2013-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
14 CFR Appendix B to Part 33 - Certification Standard Atmospheric Concentrations of Rain and Hail
Code of Federal Regulations, 2014 CFR
2014-01-01
... interpolation. Note: Source of data—Results of the Aerospace Industries Association (AIA) Propulsion Committee... above 29,000 feet is based on linearly extrapolated data. Note: Source of data—Results of the Aerospace... the Aerospace Industries Association (AIA Propulsion Committee (PC) Study, Project PC 338-1, June 1990...
Introduction of risk size in the determination of uncertainty factor UFL in risk assessment
NASA Astrophysics Data System (ADS)
Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei
2012-09-01
The methodology for using uncertainty factors in health risk assessment has been developed for several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from LOAEL (lowest observed adverse effect level) to NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at LOAEL based on the dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. Also the results remind us that the default value 10 may not be conservative enough when the additional risk level at LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also can ensure a conservative estimation of the UFL with fewer errors, and avoid the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in the risk assessment.
325 Watts from 1-cm wide 9xx laser bars for DPSSL and FL applications
NASA Astrophysics Data System (ADS)
Lichtenstein, Norbert; Manz, Yvonne; Mauron, Pascal; Fily, Arnaud; Schmidt, Berthold E.; Mueller, Juergen; Arlt, Sebastian; Weiss, Stefan; Thies, Achim; Troger, Joerg; Harder, Christoph S.
2005-03-01
Reliable power scaling by stretching the cavity length of the laser bars ranging from 1.2 mm to 3.6 mm at constant filling factor of 50% is demonstrated. While a relatively short cavity length of 1.2 mm allows for thermally limited CW output powers in excess of 180 W, extremely high 325 W at 420 A (CW, 16°C) have been achieved by leveraging the enhanced thermal properties of a 3.6 mm cavity length on standard micro-channel coolers. A high electro-optical conversion efficiency of 62% and 51% respectively is attributed to the low internal losses from an optimized waveguide design and the excellent properties of the AlGaAs-material system accounting for low thermal and electrical resistance. Multi-cell lifetest data at various operation conditions show extremely low wear-out rates even at harsh intermittent operation conditions (1-Hz type, 50% duty-cycle, 100% modulation). At 100 W output power 300 Mshots corresponding to 64000 h mean-time-to-failure (MTTF) have been extrapolated for 20% power drop from initial 2000 h and 4000 h lifetest readouts of a 1.2 mm cavity design. Similar results have been obtained for our next generation of ultra high power laser bars enabling reliable operation at 120 W output power and beyond. From 2.4 mm cavity length bars we have obtained 250 W of CW output power at 25°C while extrapolated reliability data at 120 W and 140 W power levels of up to 2000 h duration indicates that such devices are able to fulfill the requirements for lifetimes in the 20 - 30 kh range.
Possibilities and limitations of the kinetic plot method in supercritical fluid chromatography.
De Pauw, Ruben; Desmet, Gert; Broeckhoven, Ken
2013-08-30
Although supercritical fluid chromatography (SFC) is becoming a technique of increasing importance in the field of analytical chromatography, methods to compare the performance of SFC-columns and separations in an unbiased way are not fully developed. The present study uses mathematical models to investigate the possibilities and limitations of the kinetic plot method in SFC as this easily allows to investigate a wide range of operating pressures, retention and mobile phase conditions. The variable column length (L) kinetic plot method was further investigated in this work. Since the pressure history is identical for each measurement, this method gives the true kinetic performance limit in SFC. The deviations of the traditional way of measuring the performance as a function of flow rate (fixed back pressure and column length) and the isopycnic method with respect to this variable column length method were investigated under a wide range of operational conditions. It is found that using the variable L method, extrapolations towards other pressure drops are not valid in SFC (deviation of ∼15% for extrapolation from 50 to 200bar pressure drop). The isopycnic method provides the best prediction but its use is limited when operating closer towards critical point conditions. When an organic modifier is used, the predictions are improved for both methods with respect to the variable L method (e.g. deviations decreases from 20% to 2% when 20mol% of methanol is added). Copyright © 2013 Elsevier B.V. All rights reserved.
Acceleration of convergence of vector sequences
NASA Technical Reports Server (NTRS)
Sidi, A.; Ford, W. F.; Smith, D. A.
1983-01-01
A general approach to the construction of convergence acceleration methods for vector sequence is proposed. Using this approach, one can generate some known methods, such as the minimal polynomial extrapolation, the reduced rank extrapolation, and the topological epsilon algorithm, and also some new ones. Some of the new methods are easier to implement than the known methods and are observed to have similar numerical properties. The convergence analysis of these new methods is carried out, and it is shown that they are especially suitable for accelerating the convergence of vector sequences that are obtained when one solves linear systems of equations iteratively. A stability analysis is also given, and numerical examples are provided. The convergence and stability properties of the topological epsilon algorithm are likewise given.
A Comparison Study of an Active Region Eruptive Filament and a Neighboring Non-Eruptive Filament
NASA Astrophysics Data System (ADS)
Wu, S. T.; Jiang, C.; Feng, X. S.; Hu, Q.
2014-12-01
We perform a comparison study of an eruptive filament in the core region of AR 11283 and a nearby non-eruptive filament. The coronal magnetic field supporting these two filaments is extrapolated using our data-driven CESE-MHD-NLFFF code (Jiang et al. 2013, Jiang etal. 2014), which presents two magnetic flux ropes (FRs) in the same extrapolation box. The eruptive FR contains a bald-patch separatrix surface (BPSS) spatially co-aligned very well with a pre-eruption EUV sigmoid, which is consistent with the BPSS model for the coronal sigmoids. The numerically reproduced magnetic dips of the FRs match observations of the filaments strikingly well, which supports strongly the FR-dip model for filaments. The FR that supports the AR eruptive filament is much smaller (with a length of 3 Mm) compared with the large-scale FR holding the quiescent filament (with a length of 30 Mm). But the AR eruptive FR contains most of the magnetic free energy in the extrapolation box and holds a much higher magnetic energy density than the quiescent FR, because it resides along the main polarity inversion line (PIL) around sunspots with strong magnetic shear. Both the FRs are weakly twisted and cannot trigger kink instability. The AR eruptive FR is unstable because its axis reaches above a critical height for torus instability (TI), at which the overlying closed arcades can no longer confine the FR stably. To the contrary, the quiescent FR is firmly held down by its overlying field, as its axis apex is far below the TI threshold height. (This work is partially supported by NSF AGS-1153323 and 1062050)
NASA Astrophysics Data System (ADS)
Jiang, Chao-Wei; Wu, Shi-Tsan; Feng, Xue-Shang; Hu, Qiang
2016-01-01
Solar active region (AR) 11283 is a very magnetically complex region and it has produced many eruptions. However, there exists a non-eruptive filament in the plage region just next to an eruptive one in the AR, which gives us an opportunity to perform a comparison analysis of these two filaments. The coronal magnetic field extrapolated using our CESE-MHD-NLFFF code reveals that two magnetic flux ropes (MFRs) exist in the same extrapolation box supporting these two filaments, respectively. Analysis of the magnetic field shows that the eruptive MFR contains a bald-patch separatrix surface (BPSS) cospatial very well with a pre-eruptive EUV sigmoid, which is consistent with the BPSS model for coronal sigmoids. The magnetic dips of the non-eruptive MFRs match Hα observation of the non-eruptive filament strikingly well, which strongly supports the MFR-dip model for filaments. Compared with the non-eruptive MFR/filament (with a length of about 200 Mm), the eruptive MFR/filament is much smaller (with a length of about 20 Mm), but it contains most of the magnetic free energy in the extrapolation box and holds a much higher free energy density than the non-eruptive one. Both the MFRs are weakly twisted and cannot trigger kink instability. The AR eruptive MFR is unstable because its axis reaches above a critical height for torus instability, at which the overlying closed arcades can no longer confine the MFR stably. On the contrary, the quiescent MFR is very firmly held by its overlying field, as its axis apex is far below the torus-instability threshold height. Overall, this comparison investigation supports that an MFR can exist prior to eruption and the ideal MHD instability can trigger an MFR eruption.
Joiner, Wilsaan M; Ajayi, Obafunso; Sing, Gary C; Smith, Maurice A
2011-01-01
The ability to generalize learned motor actions to new contexts is a key feature of the motor system. For example, the ability to ride a bicycle or swing a racket is often first developed at lower speeds and later applied to faster velocities. A number of previous studies have examined the generalization of motor adaptation across movement directions and found that the learned adaptation decays in a pattern consistent with the existence of motor primitives that display narrow Gaussian tuning. However, few studies have examined the generalization of motor adaptation across movement speeds. Following adaptation to linear velocity-dependent dynamics during point-to-point reaching arm movements at one speed, we tested the ability of subjects to transfer this adaptation to short-duration higher-speed movements aimed at the same target. We found near-perfect linear extrapolation of the trained adaptation with respect to both the magnitude and the time course of the velocity profiles associated with the high-speed movements: a 69% increase in movement speed corresponded to a 74% extrapolation of the trained adaptation. The close match between the increase in movement speed and the corresponding increase in adaptation beyond what was trained indicates linear hypergeneralization. Computational modeling shows that this pattern of linear hypergeneralization across movement speeds is not compatible with previous models of adaptation in which motor primitives display isotropic Gaussian tuning of motor output around their preferred velocities. Instead, we show that this generalization pattern indicates that the primitives involved in the adaptation to viscous dynamics display anisotropic tuning in velocity space and encode the gain between motor output and motion state rather than motor output itself.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K T
A relatively simple, quantitative approach is proposed to address a specific, important gap in the appr approach recommended by the USEPA Guidelines for Cancer Risk Assessment to oach address uncertainty in carcinogenic mode of action of certain chemicals when risk is extrapolated from bioassay data. These Guidelines recognize that some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained usingmore » separate 'linear' (genotoxic) vs. 'nonlinear' (nongenotoxic) approaches to low low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient t to parameterize a biologically based model that reliably o extrapolates risk to low levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach - similar to that used in reference dose procedures for classic toxicity endpoints - can address MOA uncertainty in a way that avoids explicit modeling of low low-dose risk as a function of administere administered or internal dose. Even when a 'nonlinear' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was i illustrated llustrated for a likely DMOA rodent carcinogen naphthalene, specifically to the issue of risk extrapolation from bioassay data on naphthalene naphthalene-induced nasal tumors in rats. Bioassay data, supplemental toxicokinetic data, and related physiologically based p pharmacokinetic and 2 harmacokinetic 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat rat-tumor tumor-type specific DMOA DMOA-related uncertainty were obtained using a 2-stage model adapted to reflec reflect the empirical link between genotoxic and cytotoxic effects of t the most potent identified genotoxic naphthalene metabolites, 1,2 1,2- and 1,4 1,4-naphthoquinone. Bound Bound-specific 'adjustment' factors were then used to reduce naphthalene risk estimated by linear ex extrapolation (under the default genotoxic MOA assumption), to account for the DMOA trapolation exhibited by this compound.« less
On the Duffin-Kemmer-Petiau equation with linear potential in the presence of a minimal length
NASA Astrophysics Data System (ADS)
Chargui, Yassine
2018-04-01
We point out an erroneous handling in the literature regarding solutions of the (1 + 1)-dimensional Duffin-Kemmer-Petiau equation with linear potentials in the context of quantum mechanics with minimal length. Furthermore, using Brau's approach, we present a perturbative treatment of the effect of the minimal length on bound-state solutions when a Lorentz-scalar linear potential is applied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olendski, O., E-mail: oolendski@ksu.edu.sa
2011-06-15
Highlights: > Solutions of the wave equation are analyzed for the confined circular geometry with complex Robin boundary conditions. > Sharp extremum is found in the energy dependence on the imaginary part of the extrapolation length. > Nonzero real part of the Robin length or/and magnetic field wipe out the resonance. - Abstract: Solutions of the scalar Helmholtz wave equation are derived for the analysis of the transport and thermodynamic properties of the two-dimensional disk and three-dimensional infinitely long straight wire in the external uniform longitudinal magnetic field B under the assumption that the Robin boundary condition contains extrapolation lengthmore » {Lambda} with nonzero imaginary part {Lambda}{sub i}. As a result of this complexity, the self-adjointness of the Hamiltonian is lost, its eigenvalues E become complex too and the discrete bound states of the disk characteristic for the real {Lambda} turn into the corresponding quasibound states with their lifetime defined by the eigenenergies imaginary parts E{sub i}. Accordingly, the longitudinal flux undergoes an alteration as it flows along the wire with its attenuation/amplification being E{sub i}-dependent too. It is shown that, for zero magnetic field, the component E{sub i} as a function of the Robin imaginary part exhibits a pronounced sharp extremum with its magnitude being the largest for the zero real part {Lambda}{sub r} of the extrapolation length. Increasing magnitude of {Lambda}{sub r} quenches the E{sub i} - {Lambda}{sub i} resonance and at very large {Lambda}{sub r} the eigenenergies E approach the asymptotic real values independent of {Lambda}{sub i}. The extremum is also wiped out by the magnetic field when, for the large B, the energies tend to the Landau levels. Mathematical and physical interpretations of the obtained results are provided; in particular, it is shown that the finite lifetime of the disk quasibound states stems from the {Lambda}{sub i}-induced currents flowing through the sample boundary. Possible experimental tests of the calculated effect are discussed; namely, it is argued that it can be observed in superconductors by applying to them the external electric field E normal to the surface.« less
Information content versus word length in random typing
NASA Astrophysics Data System (ADS)
Ferrer-i-Cancho, Ramon; Moscoso del Prado Martín, Fermín
2011-12-01
Recently, it has been claimed that a linear relationship between a measure of information content and word length is expected from word length optimization and it has been shown that this linearity is supported by a strong correlation between information content and word length in many languages (Piantadosi et al 2011 Proc. Nat. Acad. Sci. 108 3825). Here, we study in detail some connections between this measure and standard information theory. The relationship between the measure and word length is studied for the popular random typing process where a text is constructed by pressing keys at random from a keyboard containing letters and a space behaving as a word delimiter. Although this random process does not optimize word lengths according to information content, it exhibits a linear relationship between information content and word length. The exact slope and intercept are presented for three major variants of the random typing process. A strong correlation between information content and word length can simply arise from the units making a word (e.g., letters) and not necessarily from the interplay between a word and its context as proposed by Piantadosi and co-workers. In itself, the linear relation does not entail the results of any optimization process.
1987-12-01
have claimed an advantage to deter- mining values of k’ in 100% aqueous mobile phases by extrapolation of linear plots of log k’ vs. percent organic...im parti- cle size chemically bonded octadecylsilane (ODS) packing ( Alltech Econo- sphere). As required, this column was saturated with I-octanol by in
Accurate aging of juvenile salmonids using fork lengths
Sethi, Suresh; Gerken, Jonathon; Ashline, Joshua
2017-01-01
Juvenile salmon life history strategies, survival, and habitat interactions may vary by age cohort. However, aging individual juvenile fish using scale reading is time consuming and can be error prone. Fork length data are routinely measured while sampling juvenile salmonids. We explore the performance of aging juvenile fish based solely on fork length data, using finite Gaussian mixture models to describe multimodal size distributions and estimate optimal age-discriminating length thresholds. Fork length-based ages are compared against a validation set of juvenile coho salmon, Oncorynchus kisutch, aged by scales. Results for juvenile coho salmon indicate greater than 95% accuracy can be achieved by aging fish using length thresholds estimated from mixture models. Highest accuracy is achieved when aged fish are compared to length thresholds generated from samples from the same drainage, time of year, and habitat type (lentic versus lotic), although relatively high aging accuracy can still be achieved when thresholds are extrapolated to fish from populations in different years or drainages. Fork length-based aging thresholds are applicable for taxa for which multiple age cohorts coexist sympatrically. Where applicable, the method of aging individual fish is relatively quick to implement and can avoid ager interpretation bias common in scale-based aging.
A Guess about light quantum model
NASA Astrophysics Data System (ADS)
Yongquan, Han
2016-03-01
Photon is a ring, the diameter of the ring is the quantum fluctuated wave length. The linear movement of the ring, namely, the transmission of light, is reflected in the particle of light. A plurality of light quantum interactions or through a very narrow gap, the shape of quantum would temporarily be changed. The motion of photons to interference and diffraction phenomena occurs is determined by the structure of light quantum, the quantum ring radius and light quantum mass squared product is a constant. The smaller the light quantum ring radius is, the bigger the quality is, just consistent as the modern scientific experimental results, the energy of the purple is bigger than the red. This conclusion can be extrapolated to all of the electromagnetic wave. The shorter the photon wavelength is, the bigger the quality and density is , when the wavelength is less than 10-15 meters, it will convergence to atomic or subatomic composition material entity due to the gravity. In fact, the divergence and convergence of quantum is reversible, that is, the phenomenon of radiate ``light'' quantum occurs due to the energy exchange or other external energy. Author: hanyongquan TEL: 15611860790.
Superconductivity in the ternary germanide La3 Pd4 Ge4
NASA Astrophysics Data System (ADS)
Fujii, H.; Mochiku, T.; Takeya, H.; Sato, A.
2005-12-01
The ternary germanide La3Pd4Ge4 has been prepared by arc melting. This compound takes a body-centered lattice with an orthorhombic unit cell with the lattice parameters of a=4.2200(3)Å,b=4.3850(3)Å , and c=25.003(2)Å . The crystal structure of La3Pd4Ge4 is U3Ni4Si4 -type with the space group of Immm , consisting of the combination of structural units of AlB2 -type and BaAl4 -type layers. This compound is a type-II superconductor with a critical temperature (Tc) of 2.75 K. The lower critical field Hc1(0) is estimated to be 54 Oe. The upper critical field Hc2(0) estimated by linear extrapolation of the Hc2(T) curves is about 4.0 kOe, whereas the Werthamer-Hefland-Hohemberg theory gives Hc2(0)WHH=3.0kOe . This is an interesting observation of superconductivity in the compounds with U3Ni4Si4 -type structure. The coherence length ξ(0) of 330 Å and the penetration depth λ(0) of 2480 Å are derived.
Reaction πN → ππN near threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frlez, Emil
1993-11-01
The LAMPF E1179 experiment used the π 0 spectrometer and an array of charged particle range counters to detect and record π +π 0, π 0p, and π +π 0p coincidences following the reaction π +p → π 0π +p near threshold. The total cross sections for single pion production were measured at the incident pion kinetic energies 190, 200, 220, 240, and 260 MeV. Absolute normalizations were fixed by measuring π +p elastic scattering at 260 MeV. A detailed analysis of the π 0 detection efficiency was performed using cosmic ray calibrations and pion single charge exchange measurements with a 30 MeV π - beam. All published data on πN → ππN, including our results, are simultaneously fitted to yield a common chiral symmetry breaking parameter ξ =-0.25±0.10. The threshold matrix element |α 0(π 0π +p)| determined by linear extrapolation yields the value of the s-wave isospin-2 ππ scattering length αmore » $$2\\atop{0}$$(ππ) = -0.041±0.003 m$$-1\\atop{π}$$ -1, within the framework of soft-pion theory.« less
López-Mondéjar, Rubén; Antón, Anabel; Raidl, Stefan; Ros, Margarita; Pascual, José Antonio
2010-04-01
The species of the genus Trichoderma are used successfully as biocontrol agents against a wide range of phytopathogenic fungi. Among them, Trichoderma harzianum is especially effective. However, to develop more effective fungal biocontrol strategies in organic substrates and soil, tools for monitoring the control agents are required. Real-time PCR is potentially an effective tool for the quantification of fungi in environmental samples. The aim of this study consisted of the development and application of a real-time PCR-based method to the quantification of T. harzianum, and the extrapolation of these data to fungal biomass values. A set of primers and a TaqMan probe for the ITS region of the fungal genome were designed and tested, and amplification was correlated to biomass measurements obtained with optical microscopy and image analysis, of the hyphal length of the mycelium of the colony. A correlation of 0.76 between ITS copies and biomass was obtained. The extrapolation of the quantity of ITS copies, calculated based on real-time PCR data, into quantities of fungal biomass provides potentially a more accurate value of the quantity of soil fungi. Copyright 2009 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morales, Johnny E., E-mail: johnny.morales@lh.org.
Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using the Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameters from 4 to 30 mm{sup 2} on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements with a high scanning resolution of 1200 dpi.more » From the high resolution scans, the size of the circular regions of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero to determine the relative output factor corresponding to zero volume. Results: Results have shown that for a 4 mm field size, the extrapolated relative output factor was measured as a value of 0.651 ± 0.018 as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameter of analysis values, respectively. This showed a change in the relative output factors of 1.8% and 2.8% at these comparative regions of interest sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between zero extrapolation, 0.5 and 1.0 mm diameter ROIs, respectively. Conclusions: This work shows that for very small fields such as 4.0 mm cone sizes, a measureable difference can be seen in the relative output factor based on the circular ROI and the size of the area of analysis using radiochromic film dosimetry. The authors recommend to scan the Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and to utilize an extrapolation technique for the output factor measurements of very small field dosimetry.« less
Surface dose measurements with commonly used detectors: a consistent thickness correction method.
Reynolds, Tatsiana A; Higgins, Patrick
2015-09-08
The purpose of this study was to review application of a consistent correction method for the solid state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. In addition, to compare measured surface dose using an extrapolation ionization chamber (PTW 30-360) with other parallel plate chambers RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus) and Memorial. Measurements of surface dose for 6MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSLs, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse, compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (-0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid-state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three-detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth-dose curves and is not recommended for these types of measurements.
Low dose radiation risks for women surviving the a-bombs in Japan: generalized additive model.
Dropkin, Greg
2016-11-24
Analyses of cancer mortality and incidence in Japanese A-bomb survivors have been used to estimate radiation risks, which are generally higher for women. Relative Risk (RR) is usually modelled as a linear function of dose. Extrapolation from data including high doses predicts small risks at low doses. Generalized Additive Models (GAMs) are flexible methods for modelling non-linear behaviour. GAMs are applied to cancer incidence in female low dose subcohorts, using anonymous public data for the 1958 - 1998 Life Span Study, to test for linearity, explore interactions, adjust for the skewed dose distribution, examine significance below 100 mGy, and estimate risks at 10 mGy. For all solid cancer incidence, RR estimated from 0 - 100 mGy and 0 - 20 mGy subcohorts is significantly raised. The response tapers above 150 mGy. At low doses, RR increases with age-at-exposure and decreases with time-since-exposure, the preferred covariate. Using the empirical cumulative distribution of dose improves model fit, and capacity to detect non-linear responses. RR is elevated over wide ranges of covariate values. Results are stable under simulation, or when removing exceptional data cells, or adjusting neutron RBE. Estimates of Excess RR at 10 mGy using the cumulative dose distribution are 10 - 45 times higher than extrapolations from a linear model fitted to the full cohort. Below 100 mGy, quasipoisson models find significant effects for all solid, squamous, uterus, corpus, and thyroid cancers, and for respiratory cancers when age-at-exposure > 35 yrs. Results for the thyroid are compatible with studies of children treated for tinea capitis, and Chernobyl survivors. Results for the uterus are compatible with studies of UK nuclear workers and the Techa River cohort. Non-linear models find large, significant cancer risks for Japanese women exposed to low dose radiation from the atomic bombings. The risks should be reflected in protection standards.
Monte Carlo calculations of energy deposition distributions of electrons below 20 keV in protein.
Tan, Zhenyu; Liu, Wei
2014-05-01
The distributions of energy depositions of electrons in semi-infinite bulk protein and the radial dose distributions of point-isotropic mono-energetic electron sources [i.e., the so-called dose point kernel (DPK)] in protein have been systematically calculated in the energy range below 20 keV, based on Monte Carlo methods. The ranges of electrons have been evaluated by extrapolating two calculated distributions, respectively, and the evaluated ranges of electrons are compared with the electron mean path length in protein which has been calculated by using electron inelastic cross sections described in this work in the continuous-slowing-down approximation. It has been found that for a given energy, the electron mean path length is smaller than the electron range evaluated from DPK, but it is large compared to the electron range obtained from the energy deposition distributions of electrons in semi-infinite bulk protein. The energy dependences of the extrapolated electron ranges based on the two investigated distributions are given, respectively, in a power-law form. In addition, the DPK in protein has also been compared with that in liquid water. An evident difference between the two DPKs is observed. The calculations presented in this work may be useful in studies of radiation effects on proteins.
Development and Testing of a Sustained Release System for the Prevention of Malaria.
1979-09-01
linear function of time to 100% excretion the extrapolated dur- ation of the control group would be 517 days (203 days/0.393). As used in leprosy ...use in leprosy treatment, the suspending vehicle is 40% benzyl benzoate, 60% castor oil. Solubility of WR-4593 in water is given as 3.0 pg/ml while in
Clinical utility of the AlphaFIM® instrument in stroke rehabilitation.
Lo, Alexander; Tahair, Nicola; Sharp, Shelley; Bayley, Mark T
2012-02-01
The AlphaFIM instrument is an assessment tool designed to facilitate discharge planning of stroke patients from acute care, by extrapolating overall functional status from performance in six key Functional Independence Measure (FIM) instrument items. To determine whether acute care AlphaFIM rating is correlated to stroke rehabilitation outcomes. In this prospective observational study, data were analyzed from 891 patients referred for inpatient stroke rehabilitation through an Internet-based referral system. Simple linear and stepwise regression models determined correlations between rehabilitation-ready AlphaFIM rating and rehabilitation outcomes (admission and discharge FIM ratings, FIM gain, FIM efficiency, and length of stay). Covariates including demographic data, stroke characteristics, medical history, cognitive deficits, and activity tolerance were included in the stepwise regressions. The AlphaFIM instrument was significant in predicting admission and discharge FIM ratings at rehabilitation (adjusted R² 0.40 and 0.28, respectively; P < 0.0001) and was weakly correlated with FIM gain and length of stay (adjusted R² 0.04 and 0.09, respectively; P < 0.0001), but not FIM efficiency. AlphaFIM rating was inversely related to FIM gain. Age, bowel incontinence, left hemiparesis, and previous infarcts were negative predictors of discharge FIM rating on stepwise regression. Intact executive function and physical activity tolerance of 30 to 60 mins were predictors of FIM gain. The AlphaFIM instrument is a valuable tool for triaging stroke patients from acute care to rehabilitation and predicts functional status at discharge from rehabilitation. Patients with low AlphaFIM ratings have the potential to make significant functional gains and should not be denied admission to inpatient rehabilitation programs.
Proton radius from electron scattering data
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; Meekins, David; Norum, Blaine; Sawatzky, Brad
2016-05-01
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon, and Stanford. Methods: We make use of stepwise regression techniques using the F test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q2 data on GE to select functions which extrapolate to high Q2, we find that a Padé (N =M =1 ) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q2) =(1+Q2/0.66 GeV2) -2 . Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extremely-low-Q2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering results and the muonic hydrogen results are consistent. It is the atomic hydrogen results that are the outliers.
NASA Astrophysics Data System (ADS)
Häberlen, Oliver D.; Chung, Sai-Cheong; Stener, Mauro; Rösch, Notker
1997-03-01
A series of gold clusters spanning the size range from Au6 through Au147 (with diameters from 0.7 to 1.7 nm) in icosahedral, octahedral, and cuboctahedral structure has been theoretically investigated by means of a scalar relativistic all-electron density functional method. One of the main objectives of this work was to analyze the convergence of cluster properties toward the corresponding bulk metal values and to compare the results obtained for the local density approximation (LDA) to those for a generalized gradient approximation (GGA) to the exchange-correlation functional. The average gold-gold distance in the clusters increases with their nuclearity and correlates essentially linearly with the average coordination number in the clusters. An extrapolation to the bulk coordination of 12 yields a gold-gold distance of 289 pm in LDA, very close to the experimental bulk value of 288 pm, while the extrapolated GGA gold-gold distance is 297 pm. The cluster cohesive energy varies linearly with the inverse of the calculated cluster radius, indicating that the surface-to-volume ratio is the primary determinant of the convergence of this quantity toward bulk. The extrapolated LDA binding energy per atom, 4.7 eV, overestimates the experimental bulk value of 3.8 eV, while the GGA value, 3.2 eV, underestimates the experiment by almost the same amount. The calculated ionization potentials and electron affinities of the clusters may be related to the metallic droplet model, although deviations due to the electronic shell structure are noticeable. The GGA extrapolation to bulk values yields 4.8 and 4.9 eV for the ionization potential and the electron affinity, respectively, remarkably close to the experimental polycrystalline work function of bulk gold, 5.1 eV. Gold 4f core level binding energies were calculated for sites with bulk coordination and for different surface sites. The core level shifts for the surface sites are all positive and distinguish among the corner, edge, and face-centered sites; sites in the first subsurface layer show still small positive shifts.
Accuracy of topological entanglement entropy on finite cylinders.
Jiang, Hong-Chen; Singh, Rajiv R P; Balents, Leon
2013-09-06
Topological phases are unique states of matter which support nonlocal excitations which behave as particles with fractional statistics. A universal characterization of gapped topological phases is provided by the topological entanglement entropy (TEE). We study the finite size corrections to the TEE by focusing on systems with a Z2 topological ordered state using density-matrix renormalization group and perturbative series expansions. We find that extrapolations of the TEE based on the Renyi entropies with a Renyi index of n≥2 suffer from much larger finite size corrections than do extrapolations based on the von Neumann entropy. In particular, when the circumference of the cylinder is about ten times the correlation length, the TEE obtained using von Neumann entropy has an error of order 10(-3), while for Renyi entropies it can even exceed 40%. We discuss the relevance of these findings to previous and future searches for topological ordered phases, including quantum spin liquids.
Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil
2004-01-01
A study to determine a limiting distance to span ratio for the extrapolation of near-field pressure signatures is described and discussed. This study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half had been completed, so the scope of this report is limited to the design of the models, and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the pressure signature shapes were becoming 'flat-topped'. This trend toward a 'flat-top' pressure signatures shape was seen to be a gradual one at the distance ratios employed in this first series of wind-tunnel tests.
NASA Astrophysics Data System (ADS)
Blake, Thomas A.; Chackerian, Charles, Jr.; Podolske, James R.
1996-02-01
Mid-infrared magnetic rotation spectroscopy (MRS) experiments on nitric oxide (NO) are quantitatively modeled by theoretical calculations. The verified theory is used to specify an instrument that can make in situ measurements on NO and NO2 in the Earth's atmosphere at a sensitivity level of a few parts in 1012 by volume per second. The prototype instrument used in the experiments has an extrapolated detection limit for NO of 30 parts in 109 for a 1-s integration time over a 12-cm path length. The detection limit is an extrapolation of experimental results to a signal-to-noise ratio of one, where the noise is considered to be one-half the peak-to-peak baseline noise. Also discussed are the various factors that can limit the sensitivity of a MRS spectrometer that uses liquid-nitrogen-cooled lead-salt diode lasers and photovoltaic detectors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valoppi, L.; Carlisle, J.; Polisini, J.
1995-12-31
A component of both human health and ecological risk assessments is the evaluation of toxicity values. A comparison between the methodology for the development of Reference Doses (RfDs) to be protective of humans, and that developed for vertebrate wildlife species is presented. For all species, a chronic No Observable Adverse Effect Level (NOAEL) is developed by applying uncertainty factors (UFs) to literature-based toxicity values. Uncertainty factors are used to compensate for the length of exposure, sensitivity of endpoints, and cross-species extrapolations between the test species and the species being assessed. Differences between human and wildlife species could include the toxicologicalmore » endpoint, the critical study, and the magnitude of the cross-species extrapolation factor. Case studies for select chemicals are presented which contrast RfDs developed for humans and those developed for avian and mammalian wildlife.« less
Electronic and spectroscopic characterizations of SNP isomers
NASA Astrophysics Data System (ADS)
Trabelsi, Tarek; Al Mogren, Muneerah Mogren; Hochlaf, Majdi; Francisco, Joseph S.
2018-02-01
High-level ab initio electronic structure calculations were performed to characterize SNP isomers. In addition to the known linear SNP, cyc-PSN, and linear SPN isomers, we identified a fourth isomer, linear PSN, which is located ˜2.4 eV above the linear SNP isomer. The low-lying singlet and triplet electronic states of the linear SNP and SPN isomers were investigated using a multi-reference configuration interaction method and large basis set. Several bound electronic states were identified. However, their upper rovibrational levels were predicted to pre-dissociate, leading to S + PN, P + NS products, and multi-step pathways were discovered. For the ground states, a set of spectroscopic parameters were derived using standard and explicitly correlated coupled-cluster methods in conjunction with augmented correlation-consistent basis sets extrapolated to the complete basis set limit. We also considered scalar and core-valence effects. For linear isomers, the rovibrational spectra were deduced after generation of their 3D-potential energy surfaces along the stretching and bending coordinates and variational treatments of the nuclear motions.
SYSTEMATIC AND STOCHASTIC VARIATIONS IN PULSAR DISPERSION MEASURES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lam, M. T.; Cordes, J. M.; Chatterjee, S.
2016-04-10
We analyze deterministic and random temporal variations in the dispersion measure (DM) from the full three-dimensional velocities of pulsars with respect to the solar system, combined with electron-density variations over a wide range of length scales. Previous treatments have largely ignored pulsars’ changing distances while favoring interpretations involving changes in sky position from transverse motion. Linear trends in pulsar DMs observed over 5–10 year timescales may signify sizable DM gradients in the interstellar medium (ISM) sampled by the changing direction of the line of sight to the pulsar. We show that motions parallel to the line of sight can alsomore » account for linear trends, for the apparent excess of DM variance over that extrapolated from scintillation measurements, and for the apparent non-Kolmogorov scalings of DM structure functions inferred in some cases. Pulsar motions through atomic gas may produce bow-shock ionized gas that also contributes to DM variations. We discuss the possible causes of periodic or quasi-periodic changes in DM, including seasonal changes in the ionosphere, annual variations of the solar elongation angle, structure in the heliosphere and ISM boundary, and substructure in the ISM. We assess the solar cycle’s role on the amplitude of ionospheric and solar wind variations. Interstellar refraction can produce cyclic timing variations from the error in transforming arrival times to the solar system barycenter. We apply our methods to DM time series and DM gradient measurements in the literature and assess their consistency with a Kolmogorov medium. Finally, we discuss the implications of DM modeling in precision pulsar timing experiments.« less
Battaglia, P; Malara, D; Ammendolia, G; Romeo, T; Andaloro, F
2015-09-01
Length-mass relationships and linear regressions are given for otolith size (length and height) and standard length (LS ) of certain mesopelagic fishes (Myctophidae, Paralepididae, Phosichthyidae and Stomiidae) living in the central Mediterranean Sea. The length-mass relationship showed isometric growth in six species, whereas linear regressions of LS and otolith size fit the data well for all species. These equations represent a useful tool for dietary studies on Mediterranean marine predators. © 2015 The Fisheries Society of the British Isles.
The contribution of benzene to smoking-induced leukemia.
Korte, J E; Hertz-Picciotto, I; Schulz, M R; Ball, L M; Duell, E J
2000-04-01
Cigarette smoking is associated with an increased risk of leukemia; benzene, an established leukemogen, is present in cigarette smoke. By combining epidemiologic data on the health effects of smoking with risk assessment techniques for low-dose extrapolation, we assessed the proportion of smoking-induced total leukemia and acute myeloid leukemia (AML) attributable to the benzene in cigarette smoke. We fit both linear and quadratic models to data from two benzene-exposed occupational cohorts to estimate the leukemogenic potency of benzene. Using multiple-decrement life tables, we calculated lifetime risks of total leukemia and AML deaths for never, light, and heavy smokers. We repeated these calculations, removing the effect of benzene in cigarettes based on the estimated potencies. From these life tables we determined smoking-attributable risks and benzene-attributable risks. The ratio of the latter to the former constitutes the proportion of smoking-induced cases attributable to benzene. Based on linear potency models, the benzene in cigarette smoke contributed from 8 to 48% of smoking-induced total leukemia deaths [95% upper confidence limit (UCL), 20-66%], and from 12 to 58% of smoking-induced AML deaths (95% UCL, 19-121%). The inclusion of a quadratic term yielded results that were comparable; however, potency models with only quadratic terms resulted in much lower attributable fractions--all < 1%. Thus, benzene is estimated to be responsible for approximately one-tenth to one-half of smoking-induced total leukemia mortality and up to three-fifths of smoking-related AML mortality. In contrast to theoretical arguments that linear models substantially overestimate low-dose risk, linear extrapolations from empirical data over a dose range of 10- to 100-fold resulted in plausible predictions.
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
NASA Astrophysics Data System (ADS)
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
Inferring thermodynamic stability relationship of polymorphs from melting data.
Yu, L
1995-08-01
This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (delta G) between two polymorphs and its temperature slope from mainly the temperatures and heats of melting. This information is then used to estimate delta G, thus relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating delta G to zero gives an estimation of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near the ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as The Heat of Fusion Rule introduced previously on a statistical mechanical basis. This method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2017-11-01
A new scheme is put forward to determine the wetting temperature (Tw) by utilizing the adaptation of arc-length continuation algorithm to classical density functional theory (DFT) used originally by Frink and Salinger, and its advantages are summarized into four points: (i) the new scheme is applicable whether the wetting occurs near a planar or a non-planar surface, whereas a zero contact angle method is considered only applicable to a perfectly flat solid surface, as demonstrated previously and in this work, and essentially not fit for non-planar surface. (ii) The new scheme is devoid of an uncertainty, which plagues a pre-wetting extrapolation method and originates from an unattainability of the infinitely thick film in the theoretical calculation. (iii) The new scheme can be similarly and easily applied to extreme instances characterized by lower temperatures and/or higher surface attraction force field, which, however, can not be dealt with by the pre-wetting extrapolation method because of the pre-wetting transition being mixed with many layering transitions and the difficulty in differentiating varieties of the surface phase transitions. (iv) The new scheme still works in instance wherein the wetting transition occurs close to the bulk critical temperature; however, this case completely can not be managed by the pre-wetting extrapolation method because near the bulk critical temperature the pre-wetting region is extremely narrow, and no enough pre-wetting data are available for use of the extrapolation procedure.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; ...
2016-02-18
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks providemore » the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.« less
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-01-01
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald
The suitability of geologic frameworks for extrapolating hydraulic conductivity (K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks providemore » the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. As a result, testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.« less
Inorganic arsenic is classified as a carcinogen and has been linked to lung and bladder cancer as well as other non-cancerous health effects. Because of these health effects the U.S. EPA has set a Maximum Contaminant Level (MCL) at 10ppb based on a linear extrapolation of risk an...
Complexation of Polyelectrolyte Micelles with Oppositely Charged Linear Chains.
Kalogirou, Andreas; Gergidis, Leonidas N; Miliou, Kalliopi; Vlahos, Costas
2017-03-02
The formation of interpolyelectrolyte complexes (IPECs) from linear AB diblock copolymer precursor micelles and oppositely charged linear homopolymers is studied by means of molecular dynamics simulations. All beads of the linear polyelectrolyte (C) are charged with elementary quenched charge +1e, whereas in the diblock copolymer only the solvophilic (A) type beads have quenched charge -1e. For the same Bjerrum length, the ratio of positive to negative charges, Z +/- , of the mixture and the relative length of charged moieties r determine the size of IPECs. We found a nonmonotonic variation of the size of the IPECs with Z +/- . For small Z +/- values, the IPECs retain the size of the precursor micelle, whereas at larger Z +/- values the IPECs decrease in size due to the contraction of the corona and then increase as the aggregation number of the micelle increases. The minimum size of the IPECs is obtained at lower Z +/- values when the length of the hydrophilic block of the linear diblock copolymer decreases. The aforementioned findings are in agreement with experimental results. At a smaller Bjerrum length, we obtain the same trends but at even smaller Z +/- values. The linear homopolymer charged units are distributed throughout the corona.
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Bak, M. J.; Nakazawa, S.; Banerjee, P. K.
1984-01-01
A 3-D inelastic analysis methods program consists of a series of computer codes embodying a progression of mathematical models (mechanics of materials, special finite element, boundary element) for streamlined analysis of combustor liners, turbine blades, and turbine vanes. These models address the effects of high temperatures and thermal/mechanical loadings on the local (stress/strain) and global (dynamics, buckling) structural behavior of the three selected components. These models are used to solve 3-D inelastic problems using linear approximations in the sense that stresses/strains and temperatures in generic modeling regions are linear functions of the spatial coordinates, and solution increments for load, temperature and/or time are extrapolated linearly from previous information. Three linear formulation computer codes, referred to as MOMM (Mechanics of Materials Model), MHOST (MARC-Hot Section Technology), and BEST (Boundary Element Stress Technology), were developed and are described.
Soft tissue modelling through autowaves for surgery simulation.
Zhong, Yongmin; Shirinzadeh, Bijan; Alici, Gursel; Smith, Julian
2006-09-01
Modelling of soft tissue deformation is of great importance to virtual reality based surgery simulation. This paper presents a new methodology for simulation of soft tissue deformation by drawing an analogy between autowaves and soft tissue deformation. The potential energy stored in a soft tissue as a result of a deformation caused by an external force is propagated among mass points of the soft tissue by non-linear autowaves. The novelty of the methodology is that (i) autowave techniques are established to describe the potential energy distribution of a deformation for extrapolating internal forces, and (ii) non-linear materials are modelled with non-linear autowaves other than geometric non-linearity. Integration with a haptic device has been achieved to simulate soft tissue deformation with force feedback. The proposed methodology not only deals with large-range deformations, but also accommodates isotropic, anisotropic and inhomogeneous materials by simply changing diffusion coefficients.
Magnetic Nulls and Super-radial Expansion in the Solar Corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Sarah E.; Dalmasse, Kevin; Tomczyk, Steven
Magnetic fields in the Sun’s outer atmosphere—the corona—control both solar-wind acceleration and the dynamics of solar eruptions. We present the first clear observational evidence of coronal magnetic nulls in off-limb linearly polarized observations of pseudostreamers, taken by the Coronal Multichannel Polarimeter (CoMP) telescope. These nulls represent regions where magnetic reconnection is likely to act as a catalyst for solar activity. CoMP linear-polarization observations also provide an independent, coronal proxy for magnetic expansion into the solar wind, a quantity often used to parameterize and predict the solar wind speed at Earth. We introduce a new method for explicitly calculating expansion factorsmore » from CoMP coronal linear-polarization observations, which does not require photospheric extrapolations. We conclude that linearly polarized light is a powerful new diagnostic of critical coronal magnetic topologies and the expanding magnetic flux tubes that channel the solar wind.« less
Magnetic Nulls and Super-Radial Expansion in the Solar Corona
NASA Technical Reports Server (NTRS)
Gibson, Sarah E.; Dalmasse, Kevin; Rachmeler, Laurel A.; De Rosa, Marc L.; Tomczyk, Steven; De Toma, Giuliana; Burkepile, Joan; Galloy, Michael
2017-01-01
Magnetic fields in the Sun's outer atmosphere, the corona, control both solar-wind acceleration and the dynamics of solar eruptions. We present the first clear observational evidence of coronal magnetic nulls in off-limb linearly polarized observations of pseudostreamers, taken by the Coronal Multichannel Polarimeter (CoMP) telescope. These nulls represent regions where magnetic reconnection is likely to act as a catalyst for solar activity.CoMP linear-polarization observations also provide an independent, coronal proxy for magnetic expansion into the solar wind, a quantity often used to parameterize and predict the solar wind speed at Earth. We introduce a new method for explicitly calculating expansion factors from CoMP coronal linear-polarization observations, which does not require photospheric extrapolations. We conclude that linearly polarized light is a powerful new diagnostic of critical coronal magnetic topologies and the expanding magnetic flux tubes that channel the solar wind.
MMOC- MODIFIED METHOD OF CHARACTERISTICS SONIC BOOM EXTRAPOLATION
NASA Technical Reports Server (NTRS)
Darden, C. M.
1994-01-01
The Modified Method of Characteristics Sonic Boom Extrapolation program (MMOC) is a sonic boom propagation method which includes shock coalescence and incorporates the effects of asymmetry due to volume and lift. MMOC numerically integrates nonlinear equations from data at a finite distance from an airplane configuration at flight altitude to yield the sonic boom pressure signature at ground level. MMOC accounts for variations in entropy, enthalpy, and gravity for nonlinear effects near the aircraft, allowing extrapolation to begin nearer the body than in previous methods. This feature permits wind tunnel sonic boom models of up to three feet in length, enabling more detailed, realistic models than the previous six-inch sizes. It has been shown that elongated airplanes flying at high altitude and high Mach numbers can produce an acceptably low sonic boom. Shock coalescence in MMOC includes three-dimensional effects. The method is based on an axisymmetric solution with asymmetric effects determined by circumferential derivatives of the standard shock equations. Bow shocks and embedded shocks can be included in the near-field. The method of characteristics approach in MMOC allows large computational steps in the radial direction without loss of accuracy. MMOC is a propagation method rather than a predictive program. Thus input data (the flow field on a cylindrical surface at approximately one body length from the axis) must be supplied from calculations or experimental results. The MMOC package contains a uniform atmosphere pressure field program and interpolation routines for computing the required flow field data. Other user supplied input to MMOC includes Mach number, flow angles, and temperature. MMOC output tabulates locations of bow shocks and embedded shocks. When the calculations reach ground level, the overpressure and distance are printed, allowing the user to plot the pressure signature. MMOC is written in FORTRAN IV for batch execution and has been implemented on a CDC 170 series computer operating under NOS with a central memory requirement of approximately 223K of 60 bit words. This program was developed in 1983.
Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand
NASA Astrophysics Data System (ADS)
Reed, A. H.; Pandey, R. B.; Lavoie, D. L.
Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computer tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by water weight loss technique (0.36). Scaling of the pore volume (Vp) with the linear size (L), V~LD provides the fractal dimensionalities of the pore volume (D=2.74+/-0.02) and grain volume (D=2.90+/-0.02) typical for sedimentary materials.
2011-03-01
following: disturbance of sensitive environments (including wildlife); dredging up potentially contaminated sediments; physical contact with UXO; damaging or...is discussed below. 2.1.1 EM61 System and Sensors The EM61 is a high-resolution time-domain electromagnetic metal detector that is capable of...the position of the tow boat and then try to extrapolate the position of the detector based on cable length and GPS heading. In most cases, the
Surface dose measurements with commonly used detectors: a consistent thickness correction method
Higgins, Patrick
2015-01-01
The purpose of this study was to review application of a consistent correction method for the solid state detectors, such as thermoluminescent dosimeters (chips (cTLD) and powder (pTLD)), optically stimulated detectors (both closed (OSL) and open (eOSL)), and radiochromic (EBT2) and radiographic (EDR2) films. In addition, to compare measured surface dose using an extrapolation ionization chamber (PTW 30‐360) with other parallel plate chambers RMI‐449 (Attix), Capintec PS‐033, PTW 30‐329 (Markus) and Memorial. Measurements of surface dose for 6 MV photons with parallel plate chambers were used to establish a baseline. cTLD, OSLs, EDR2, and EBT2 measurements were corrected using a method which involved irradiation of three dosimeter stacks, followed by linear extrapolation of individual dosimeter measurements to zero thickness. We determined the magnitude of correction for each detector and compared our results against an alternative correction method based on effective thickness. All uncorrected surface dose measurements exhibited overresponse, compared with the extrapolation chamber data, except for the Attix chamber. The closest match was obtained with the Attix chamber (−0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EBT2 (14%), EDR2 (14.8%), and OSL (26%). Application of published ionization chamber corrections brought all the parallel plate results to within 1% of the extrapolation chamber. The extrapolation method corrected all solid‐state detector results to within 2% of baseline, except the OSLs. Extrapolation of dose using a simple three‐detector stack has been demonstrated to provide thickness corrections for cTLD, eOSLs, EBT2, and EDR2 which can then be used for surface dose measurements. Standard OSLs are not recommended for surface dose measurement. The effective thickness method suffers from the subjectivity inherent in the inclusion of measured percentage depth‐dose curves and is not recommended for these types of measurements. PACS number: 87.56.‐v PMID:26699319
CFD predictions of near-field pressure signatures of a low-boom aircraft
NASA Technical Reports Server (NTRS)
Fouladi, Kamran; Baize, Daniel G.
1992-01-01
A three dimensional Euler marching code has been utilized to predict near-field pressure signatures of an aircraft with low boom characteristics. Computations were extended to approximately six body lengths aft of the aircraft in order to obtain pressure data at three body lengths below the aircraft for a cruise Mach number of 1.6. The near-field pressure data were extrapolated to the ground using a Whitham based method. The distance below the aircraft where the pressure data are attained is defined in this paper as the 'separation distance.' The influences of separation distances and the still highly three-dimensional flow field on the predicted ground pressure signatures and boom loudness are presented in this paper.
Detectors for Linear Colliders: Tracking and Vertexing (2/4)
Battaglia, Marco
2018-04-16
Efficient and precise determination of the flavour of partons in multi-hadron final states is essential to the anticipated LC physics program. This makes tracking in the vicinity of the interaction region of great importance. Tracking extrapolation and momentum resolution are specified by precise physics requirements. The R&D; towards detectors able to meet these specifications will be discussed, together with some of their application beyond particle physics.
Motion prediction in MRI-guided radiotherapy based on interleaved orthogonal cine-MRI
NASA Astrophysics Data System (ADS)
Seregni, M.; Paganelli, C.; Lee, D.; Greer, P. B.; Baroni, G.; Keall, P. J.; Riboldi, M.
2016-01-01
In-room cine-MRI guidance can provide non-invasive target localization during radiotherapy treatment. However, in order to cope with finite imaging frequency and system latencies between target localization and dose delivery, tumour motion prediction is required. This work proposes a framework for motion prediction dedicated to cine-MRI guidance, aiming at quantifying the geometric uncertainties introduced by this process for both tumour tracking and beam gating. The tumour position, identified through scale invariant features detected in cine-MRI slices, is estimated at high-frequency (25 Hz) using three independent predictors, one for each anatomical coordinate. Linear extrapolation, auto-regressive and support vector machine algorithms are compared against systems that use no prediction or surrogate-based motion estimation. Geometric uncertainties are reported as a function of image acquisition period and system latency. Average results show that the tracking error RMS can be decreased down to a [0.2; 1.2] mm range, for acquisition periods between 250 and 750 ms and system latencies between 50 and 300 ms. Except for the linear extrapolator, tracking and gating prediction errors were, on average, lower than those measured for surrogate-based motion estimation. This finding suggests that cine-MRI guidance, combined with appropriate prediction algorithms, could relevantly decrease geometric uncertainties in motion compensated treatments.
Alternative Method to Simulate a Sub-idle Engine Operation in Order to Synthesize Its Control System
NASA Astrophysics Data System (ADS)
Sukhovii, Sergii I.; Sirenko, Feliks F.; Yepifanov, Sergiy V.; Loboda, Igor
2016-09-01
The steady-state and transient engine performances in control systems are usually evaluated by applying thermodynamic engine models. Most models operate between the idle and maximum power points, only recently, they sometimes address a sub-idle operating range. The lack of information about the component maps at the sub-idle modes presents a challenging problem. A common method to cope with the problem is to extrapolate the component performances to the sub-idle range. Precise extrapolation is also a challenge. As a rule, many scientists concern only particular aspects of the problem such as the lighting combustion chamber or the turbine operation under the turned-off conditions of the combustion chamber. However, there are no reports about a model that considers all of these aspects and simulates the engine starting. The proposed paper addresses a new method to simulate the starting. The method substitutes the non-linear thermodynamic model with a linear dynamic model, which is supplemented with a simplified static model. The latter model is the set of direct relations between parameters that are used in the control algorithms instead of commonly used component performances. Specifically, this model consists of simplified relations between the gas path parameters and the corrected rotational speed.
Characteristics of enhanced-mode AlGaN/GaN MIS HEMTs for millimeter wave applications
NASA Astrophysics Data System (ADS)
Lee, Jong-Min; Ahn, Ho-Kyun; Jung, Hyun-Wook; Shin, Min Jeong; Lim, Jong-Won
2017-09-01
In this paper, an enhanced-mode (E-mode) AlGaN/GaN high electron mobility transistor (HEMT) was developed by using 4-inch GaN HEMT process. We designed and fabricated Emode HEMTs and characterized device performance. To estimate the possibility of application for millimeter wave applications, we focused on the high frequency performance and power characteristics. To shift the threshold voltage of HEMTs we applied the Al2O3 insulator to the gate structure and adopted the gate recess technique. To increase the frequency performance the e-beam lithography technique was used to define the 0.15 um gate length. To evaluate the dc and high frequency performance, electrical characterization was performed. The threshold voltage was measured to be positive value by linear extrapolation from the transfer curve. The device leakage current is comparable to that of the depletion mode device. The current gain cut-off frequency and the maximum oscillation frequency of the E-mode device with a total gate width of 150 um were 55 GHz and 168 GHz, respectively. To confirm the power performance for mm-wave applications the load-pull test was performed. The measured power density of 2.32 W/mm was achieved at frequencies of 28 and 30 GHz.
Measurement of Initial Conditions at Nozzle Exit of High Speed Jets
NASA Technical Reports Server (NTRS)
Panda, J.; Zaman, K. B. M. Q.; Seasholtz, R. G.
2004-01-01
The time averaged and unsteady density fields close to the nozzle exit (0.1 less than or = x/D less than or = 2, x: downstream distance, D: jet diameter) of unheated free jets at Mach numbers of 0.95, 1.4, and 1.8 were measured using a molecular Rayleigh scattering based technique. The initial thickness of shear layer and its linear growth rate were determined from time-averaged density survey and a modeling process, which utilized the Crocco-Busemann equation to relate density profiles to velocity profiles. The model also corrected for the smearing effect caused by a relatively long probe length in the measured density data. The calculated shear layer thickness was further verified from a limited hot-wire measurement. Density fluctuations spectra, measured using a two-Photomultiplier-tube technique, were used to determine evolution of turbulent fluctuations in various Strouhal frequency bands. For this purpose spectra were obtained from a large number of points inside the flow; and at every axial station spectral data from all radial positions were integrated. The radially-integrated fluctuation data show an exponential growth with downstream distance and an eventual saturation in all Strouhal frequency bands. The initial level of density fluctuations was calculated by extrapolation to nozzle exit.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
Cigarette sales in pharmacies in the USA (2005-2009).
Seidenberg, Andrew B; Behm, Ilan; Rees, Vaughan W; Connolly, Gregory N
2012-09-01
Several US jurisdictions have adopted policies prohibiting pharmacies from selling tobacco products. Little is known about how pharmacies contribute to total cigarette sales. Pharmacy and total cigarette sales in the USA were tabulated from AC Nielsen and Euromonitor, respectively, for the years 2005-2009. Linear regression was used to characterise trends over time, with observed trends extrapolated to 2020. Between 2005 and 2009, pharmacy cigarette sales increased 22.72% (p=0.004), while total cigarette sales decreased 17.43% (p=0.015). In 2005, pharmacy cigarette sales represented 3.05% of total cigarette sales, increasing to 4.54% by 2009. Extrapolation of these findings resulted in estimated pharmacy cigarette sales of 14.59% of total US cigarette sales by 2020. Cigarette sales in American pharmacies have risen in recent years, while cigarette sales nationally have declined. If current trends continue, pharmacy cigarette market share will, by 2020, increase to more than four times the 2005 share.
Chenal, C; Legue, F; Nourgalieva, K; Brouazin-Jousseaume, V; Durel, S; Guitton, N
2000-01-01
In human radiation protection, the shape of the dose effects curve for low doses irradiation (LDI) is assumed to be linear, extrapolated from the clinical consequences of Hiroshima and Nagasaki nuclear explosions. This extrapolation probably overestimates the risk below 200 mSv. In many circumstances, the living species and cells can develop some mechanisms of adaptation. Classical epidemiological studies will not be able to answer the question and there is a need to assess more sensitive biological markers of the effects of LDI. The researches should be focused on DNA effects (strand breaks), radioinduced expression of new genes and proteins involved in the response to oxidative stress and DNA repair mechanisms. New experimental biomolecular techniques should be developed in parallel with more conventional ones. Such studies would permit to assess new biological markers of radiosensitivity, which could be of great interest in radiation protection and radio-oncology.
Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin
2014-01-01
Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742
Sticher, P; Jaspers, M C; Stemmler, K; Harms, H; Zehnder, A J; van der Meer, J R
1997-01-01
A microbial whole-cell biosensor was developed, and its potential to measure water-dissolved concentrations of middle-chain-length alkanes and some related compounds by bioluminescence was characterized. The biosensor strain Escherichia coli DH5 alpha(pGEc74, pJAMA7) carried the regulatory gene alkS from Pseudomonas oleovorans and a transcriptional fusion of PalkB from the same strain with the promoterless luciferase luxAB genes from Vibrio harveyi on two separately introduced plasmids. In standardized assays, the biosensor cells were readily inducible with octane, a typical inducer of the alk system. Light emission after induction periods of more than 15 min correlated well with octane concentration. In well-defined aqueous samples, there was a linear relationship between light output and octane concentrations between 24 and 100 nM. The biosensor responded to middle-chain-length alkanes but not to alicyclic or aromatic compounds. In order to test its applicability for analyzing environmentally relevant samples, the biosensor was used to detect the bioavailable concentration of alkanes in heating oil-contaminated groundwater samples. By the extrapolation of calibrated light output data to low octane concentrations with a hyperbolic function, a total inducer concentration of about 3 nM in octane equivalents was estimated. The whole-cell biosensor tended to underestimate the alkane concentration in the groundwater samples by about 25%, possibly because of the presence of unknown inhibitors. This was corrected for by spiking the samples with a known amount of an octane standard. Biosensor measurements of alkane concentrations were further verified by comparing them with the results of chemical analyses. PMID:9327569
Hallemans, Ann; Verbecque, Evi; Dumas, Raphael; Cheze, Laurence; Van Hamme, Angèle; Robert, Thomas
2018-06-01
Immature balance control is considered an important rate limiter for maturation of gait. The spatial margin of stability (MoS) is a biomechanical measure of dynamic balance control that might provide insights into balance control strategies used by children during the developmental course of gait. We hypothesize there will be an age-dependent decrease in MoS in children with typical development. To understand the mechanics, relations between MoS and spatio-temporal parameters of gait are investigated. Total body gait analysis of typically developing children (age 1-10, n = 84) were retrospectively selected from available databases. MoS is defined as the minimum distance between the center of pressure and the extrapolated center of mass along the mediolateral axis during the single support phases. MoS shows a moderate negative correlation with stride length (rho = -0.510), leg length (rho = -0.440), age (rho = -0.368) and swing duration (rho = -0.350). A weak correlation was observed between MoS and walking speed (rho = -0.243) and step width (rho = 0.285). A stepwise linear regression model showed only one predictor, swing duration, explaining 18% of the variance in MoS. MoS decreases with increasing duration of swing (β = -0.422). This relation is independent of age. A larger MoS induces a larger lateral divergence of the CoM that could be compensated by a quicker step. Future research should compare the observed strategies in children to those used in adults and in children with altered balance control related to pathology. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information, non-linearly. Simple linear and multilinear regression models were also applied to achieve a balance between the computational load and reliability of estimations of the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical, error analysis, which revealed a slightly better estimation of the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used in order to make some conjecture about the uncertainty increase with the extension of extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located nearby Torre Colimena in the Apulia region, south Italy.
Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M
2002-07-21
The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
NASA Astrophysics Data System (ADS)
Rhinelander, Marcus Q.; Dawson, Stephen M.
2004-04-01
Multiple pulses can often be distinguished in the clicks of sperm whales (Physeter macrocephalus). Norris and Harvey [in Animal Orientation and Navigation, NASA SP-262 (1972), pp. 397-417] proposed that this results from reflections within the head, and thus that interpulse interval (IPI) is an indicator of head length, and by extrapolation, total length. For this idea to hold, IPIs must be stable within individuals, but differ systematically among individuals of different size. IPI stability was examined in photographically identified individuals recorded repeatedly over different dives, days, and years. IPI variation among dives in a single day and days in a single year was statistically significant, although small in magnitude (it would change total length estimates by <3%). As expected, IPIs varied significantly among individuals. Most individuals showed significant increases in IPIs over several years, suggesting growth. Mean total lengths calculated from published IPI regressions were 13.1 to 16.1 m, longer than photogrammetric estimates of the same whales (12.3 to 15.3 m). These discrepancies probably arise from the paucity of large (12-16 m) whales in data used in published regressions. A new regression is offered for this size range.
Luo, Ying-zhen; Tu, Meng; Fan, Fei; Zheng, Jie-qian; Yang, Ming; Li, Tao; Zhang, Kui; Deng, Zhen-hua
2015-06-01
To establish the linear regression equation between body height and combined length of manubrium and mesostenum of sternum measured by CT volume rendering technique (CT-VRT) in southwest Han population. One hundred and sixty subjects, including 80 males and 80 females were selected from southwest Han population for routine CT-VRT (reconstruction thickness 1 mm) examination. The lengths of both manubrium and mesosternum were recorded, and the combined length of manubrium and mesosternum was equal to the algebraic sum of them. The sex-specific linear regression equations between the combined length of manubrium and mesosternum and the real body height of each subject were deduced. The sex-specific simple linear regression equations between the combined length of manubrium and mesostenum (x3) and body height (y) were established (male: y = 135.000+2.118 x3 and female: y = 120.790+2.808 x3). Both equations showed statistical significance (P < 0.05) with a 100% predictive accuracy. CT-VRT is an effective method for measurement of the index of sternum. The combined length of manubrium and mesosternum from CT-VRT can be used for body height estimation in southwest Han population.
Proton radius from electron scattering data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate errormore » estimates. Results: Starting with the precision, low four-momentum transfer (Q 2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q 2 data on G E to select functions which extrapolate to high Q 2, we find that a Pad´e (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, G E(Q 2) = (1 + Q 2/0.66 GeV 2) -2. Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extreme low-Q 2 data or by use of the Pad´e approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.« less
Proton radius from electron scattering data
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; ...
2016-05-31
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon and Stanford. Methods: We make use of stepwise regression techniques using the F-test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate errormore » estimates. Results: Starting with the precision, low four-momentum transfer (Q 2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F-test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q 2 data on G E to select functions which extrapolate to high Q 2, we find that a Pad´e (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, G E(Q 2) = (1 + Q 2/0.66 GeV 2) -2. Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extreme low-Q 2 data or by use of the Pad´e approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering result and the muonic hydrogen result are consistent. Lastly, it is the atomic hydrogen results that are the outliers.« less
Beeler, Nicholas M.; Kilgore, Brian D.; McGarr, Arthur F.; Fletcher, Jon Peter B.; Evans, John R.; Steven R. Baker,
2012-01-01
We have conducted dynamic rupture propagation experiments to establish the relations between in-source stress drop, fracture energy and the resulting particle velocity during slip of an unconfined 2 m long laboratory fault at normal stresses between 4 and 8 MPa. To produce high fracture energy in the source we use a rough fault that has a large slip weakening distance. An artifact of the high fracture energy is that the nucleation zone is large such that precursory slip reduces fault strength over a large fraction of the total fault length prior to dynamic rupture, making the initial stress non-uniform. Shear stress, particle velocity, fault slip and acceleration were recorded coseismically at multiple locations along strike and at small fault-normal distances. Stress drop increases weakly with normal stress. Average slip rate depends linearly on the fault strength loss and on static stress drop, both with a nonzero intercept. A minimum fracture energy of 1.8 J/m2 and a linear slip weakening distance of 33 μm are inferred from the intercept. The large slip weakening distance also affects the average slip rate which is reduced by in-source energy dissipation from on-fault fracture energy.Because of the low normal stress and small per event slip (∼86 μm), no thermal weakening such as melting or pore fluid pressurization occurs in these experiments. Despite the relatively high fracture energy, and the very low heat production, energy partitioning during these laboratory earthquakes is very similar to typical earthquake source properties. The product of fracture energy and fault area is larger than the radiated energy. Seismic efficiency is low at ∼2%. The ratio of apparent stress to static stress drop is ∼27%, consistent with measured overshoot. The fracture efficiency is ∼33%. The static and dynamic stress drops when extrapolated to crustal stresses are 2–7.3 MPa and in the range of typical earthquake stress drops. As the relatively high fracture energy reduces the slip velocities in these experiments, the extrapolated average particle velocities for crustal stresses are 0.18–0.6 m/s. That these experiments are consistent with typical earthquake source properties suggests, albeit indirectly, that thermal weakening mechanisms such as thermal pressurization and melting which lead to near complete stress drops, dominate earthquake source properties only for exceptional events unless crustal stresses are low.
Engelkes, Vincent B; Beebe, Jeremy M; Frisbie, C Daniel
2004-11-03
Nanoscopic tunnel junctions were formed by contacting Au-, Pt-, or Ag-coated atomic force microscopy (AFM) tips to self-assembled monolayers (SAMs) of alkanethiol or alkanedithiol molecules on polycrystalline Au, Pt, or Ag substrates. Current-voltage traces exhibited sigmoidal behavior and an exponential attenuation with molecular length, characteristic of nonresonant tunneling. The length-dependent decay parameter, beta, was found to be approximately 1.1 per carbon atom (C(-1)) or 0.88 A(-)(1) and was independent of applied bias (over a voltage range of +/-1.5 V) and electrode work function. In contrast, the contact resistance, R(0), extrapolated from resistance versus molecular length plots showed a notable decrease with both applied bias and increasing electrode work function. The doubly bound alkanedithiol junctions were observed to have a contact resistance approximately 1 to 2 orders of magnitude lower than the singly bound alkanethiol junctions. However, both alkanethiol and dithiol junctions exhibited the same length dependence (beta value). The resistance versus length data were also used to calculate transmission values for each type of contact (e.g., Au-S-C, Au/CH(3), etc.) and the transmission per C-C bond (T(C)(-)()(C)).
The radiation environment of OSO missions from 1974 to 1978
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1973-01-01
Trapped particle radiation levels on several OSO missions were calculated for nominal trajectories using improved computational methods and new electron environment models. Temporal variations of the electron fluxes were considered and partially accounted for. Magnetic field calculations were performed with a current field model and extrapolated to a later epoch with linear time terms. Orbital flux integration results, which are presented in graphical and tabular form, are analyzed, explained, and discussed.
Error analysis regarding the calculation of nonlinear force-free field
NASA Astrophysics Data System (ADS)
Liu, S.; Zhang, H. Q.; Su, J. T.
2012-02-01
Magnetic field extrapolation is an alternative method to study chromospheric and coronal magnetic fields. In this paper, two semi-analytical solutions of force-free fields (Low and Lou in Astrophys. J. 352:343, 1990) have been used to study the errors of nonlinear force-free (NLFF) fields based on force-free factor α. Three NLFF fields are extrapolated by approximate vertical integration (AVI) Song et al. (Astrophys. J. 649:1084, 2006), boundary integral equation (BIE) Yan and Sakurai (Sol. Phys. 195:89, 2000) and optimization (Opt.) Wiegelmann (Sol. Phys. 219:87, 2004) methods. Compared with the first semi-analytical field, it is found that the mean values of absolute relative standard deviations (RSD) of α along field lines are about 0.96-1.19, 0.63-1.07 and 0.43-0.72 for AVI, BIE and Opt. fields, respectively. While for the second semi-analytical field, they are about 0.80-1.02, 0.67-1.34 and 0.33-0.55 for AVI, BIE and Opt. fields, respectively. As for the analytical field, the calculation error of <| RSD|> is about 0.1˜0.2. It is also found that RSD does not apparently depend on the length of field line. These provide the basic estimation on the deviation of extrapolated field obtained by proposed methods from the real force-free field.
Yang, Ruiqi; Wang, Fei; Zhang, Jialing; Zhu, Chonglei; Fan, Limei
2015-05-19
To establish the reference values of thalamus, caudate nucleus and lenticular nucleus diameters through fetal thalamic transverse section. A total of 265 fetuses at our hospital were randomly selected from November 2012 to August 2014. And the transverse and length diameters of thalamus, caudate nucleus and lenticular nucleus were measured. SPSS 19.0 statistical software was used to calculate the regression curve of fetal diameter changes and gestational weeks of pregnancy. P < 0.05 was considered as having statistical significance. The linear regression equation of fetal thalamic length diameter and gestational week was: Y = 0.051X+0.201, R = 0.876, linear regression equation of thalamic transverse diameter and fetal gestational week was: Y = 0.031X+0.229, R = 0.817, linear regression equation of fetal head of caudate nucleus length diameter and gestational age was: Y = 0.033X+0.101, R = 0.722, linear regression equation of fetal head of caudate nucleus transverse diameter and gestational week was: R = 0.025 - 0.046, R = 0.711, linear regression equation of fetal lentiform nucleus length diameter and gestational week was: Y = 0.046+0.229, R = 0.765, linear regression equation of fetal lentiform nucleus diameter and gestational week was: Y = 0.025 - 0.05, R = 0.772. Ultrasonic measurement of diameter of fetal thalamus caudate nucleus, and lenticular nucleus through thalamic transverse section is simple and convenient. And measurements increase with fetal gestational weeks and there is linear regression relationship between them.
Scaling Theory of Entanglement at the Many-Body Localization Transition.
Dumitrescu, Philipp T; Vasseur, Romain; Potter, Andrew C
2017-09-15
We study the universal properties of eigenstate entanglement entropy across the transition between many-body localized (MBL) and thermal phases. We develop an improved real space renormalization group approach that enables numerical simulation of large system sizes and systematic extrapolation to the infinite system size limit. For systems smaller than the correlation length, the average entanglement follows a subthermal volume law, whose coefficient is a universal scaling function. The full distribution of entanglement follows a universal scaling form, and exhibits a bimodal structure that produces universal subleading power-law corrections to the leading volume law. For systems larger than the correlation length, the short interval entanglement exhibits a discontinuous jump at the transition from fully thermal volume law on the thermal side, to pure area law on the MBL side.
The Prediction of Length-of-day Variations Based on Gaussian Processes
NASA Astrophysics Data System (ADS)
Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.
2015-01-01
Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for the prediction of the LOD variations such as the least squares extrapolation model, the time-series analysis model, and so on, have not met the requirements for real-time and high-precision applications. In this paper, a new machine learning algorithm --- the Gaussian process (GP) model is employed to forecast the LOD variations. Its prediction precisions are analyzed and compared with those of the back propagation neural networks (BPNN), general regression neural networks (GRNN) models, and the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that the application of the GP model to the prediction of the LOD variations is efficient and feasible.
Kensche, Tobias; Tokunaga, Fuminori; Ikeda, Fumiyo; Goto, Eiji; Iwai, Kazuhiro; Dikic, Ivan
2012-01-01
Nuclear factor-κB (NF-κB) essential modulator (NEMO), a component of the inhibitor of κB kinase (IKK) complex, controls NF-κB signaling by binding to ubiquitin chains. Structural studies of NEMO provided a rationale for the specific binding between the UBAN (ubiquitin binding in ABIN and NEMO) domain of NEMO and linear (Met-1-linked) di-ubiquitin chains. Full-length NEMO can also interact with Lys-11-, Lys-48-, and Lys-63-linked ubiquitin chains of varying length in cells. Here, we show that purified full-length NEMO binds preferentially to linear ubiquitin chains in competition with lysine-linked ubiquitin chains of defined length, including long Lys-63-linked deca-ubiquitins. Linear di-ubiquitins were sufficient to activate both the IKK complex in vitro and to trigger maximal NF-κB activation in cells. In TNFα-stimulated cells, NEMO chimeras engineered to bind exclusively to Lys-63-linked ubiquitin chains mediated partial NF-κB activation compared with cells expressing NEMO that binds to linear ubiquitin chains. We propose that NEMO functions as a high affinity receptor for linear ubiquitin chains and a low affinity receptor for long lysine-linked ubiquitin chains. This phenomenon could explain quantitatively distinct NF-κB activation patterns in response to numerous cell stimuli. PMID:22605335
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, T
Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of {sup 125}I-brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range of secondary electrons originating from the electrode x{sub 0}. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit tomore » the so-called ’extrapolation curve’, the measured ionization charges Q vs. plate separations x or (2) to differentiate Q(x) and to derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, correcting the measured data for all known influencing effects before the evaluation method is applied is a precondition. However, the discrepancy of their results is larger than the uncertainty given for the determination of the RAKR with both methods. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x{sub 1}, x{sub 2}) is applied to the difference of charges measured at the two plate separations x{sub 1} and x{sub 2}. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma Ka(0) for plate separation zero, and the mean air kermas at the plate separations x{sub 1} and x{sub 2}, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method shall be established in the future.« less
Equilibrium and Effective Climate Sensitivity
NASA Astrophysics Data System (ADS)
Rugenstein, M.; Bloch-Johnson, J.
2016-12-01
Atmosphere-ocean general circulation models, as well as the real world, take thousands of years to equilibrate to CO2 induced radiative perturbations. Equilibrium climate sensitivity - a fully equilibrated 2xCO2 perturbation - has been used for decades as a benchmark in model intercomparisons, as a test of our understanding of the climate system and paleo proxies, and to predict or project future climate change. Computational costs and limited time lead to the widespread practice of extrapolating equilibrium conditions from just a few decades of coupled simulations. The most common workaround is the "effective climate sensitivity" - defined through an extrapolation of a 150 year abrupt2xCO2 simulation, including the assumption of linear climate feedbacks. The definitions of effective and equilibrium climate sensitivity are often mixed up and used equivalently, and it is argued that "transient climate sensitivity" is the more relevant measure for predicting the next decades. We present an ongoing model intercomparison, the "LongRunMIP", to study century and millennia time scales of AOGCM equilibration and the linearity assumptions around feedback analysis. As a true ensemble of opportunity, there is no protocol and the only condition to participate is a coupled model simulation of any stabilizing scenario simulating more than 1000 years. Many of the submitted simulations took several years to conduct. As of July 2016 the contribution comprises 27 scenario simulations of 13 different models originating from 7 modeling centers, each between 1000 and 6000 years. To contribute, please contact the authors as soon as possible We present preliminary results, discussing differences between effective and equilibrium climate sensitivity, the usefulness of transient climate sensitivity, extrapolation methods, and the state of the coupled climate system close to equilibrium. Caption for the Figure below: Evolution of temperature anomaly and radiative imbalance of 22 simulations with 12 models (color indicates the model). 20 year moving average.
Length-dependent structural stability of linear monatomic Cu wires
NASA Astrophysics Data System (ADS)
Singh, Gurvinder; Kumar, Krishan; Singh, Baljinder; Moudgil, R. K.
2018-05-01
We present first-principle calculations based on density functional theory for the finite-length monatomic Cu atom linear wires. The structure and its stability with increasing wire length in terms of number of atoms (N) is determined. Interestingly, the bond length is found to exhibit an oscillatory structure (the so-called magic length phenomenon), with a qualitative change in oscillatory behavior as one moves from even N wire to odd N wire. The even N wires follow simple even-odd oscillations whereas odd N wires show a phase change at the half length of the wires. The stability of the wire structure, determined in terms of the wire formation energy, also contains even-odd oscillation as a function of wire length. However, the oscillations in formation energy reverse its phase after the wire length is increased beyond N=12. Our findings are seen to be qualitatively consistent with recent simulations for a similar class finite-length metal atom wires.
Screening in ionic systems: simulations for the Lebowitz length.
Kim, Young C; Luijten, Erik; Fisher, Michael E
2005-09-30
Simulations of the Lebowitz length, xiL (T, rho), are reported for the restricted primitive model hard-core (diameter a) 1:1 electrolyte for densities rho approximately < 4rho(c) and T(c) approximately < T approximately < 40T(c). Finite-size effects are elucidated for the charge fluctuations in various subdomains that serve to evaluate xiL. On extrapolation to the bulk limit for T approximately > 10T(c) the exact low-density expansions are seen to fail badly when rho > 1/10 rho(c) (with rho(c)a3 approximately = 0.08). At higher densities xiL rises above the Debye length, xiD proportional to square root(T/rho), by 10%-30% (up to rho approximately =1.3rho(c)); the variation is portrayed fairly well by the generalized Debye-Hückel theory. On approaching criticality at fixed rho or fixed T, xiL (T, rho) remains finite with xiL(c) approximately = 0.30a approximately = 1.3xiD(c) but displays a weak entropylike singularity.
Exact Solution of Mutator Model with Linear Fitness and Finite Genome Length
NASA Astrophysics Data System (ADS)
Saakian, David B.
2017-08-01
We considered the infinite population version of the mutator phenomenon in evolutionary dynamics, looking at the uni-directional mutations in the mutator-specific genes and linear selection. We solved exactly the model for the finite genome length case, looking at the quasispecies version of the phenomenon. We calculated the mutator probability both in the statics and dynamics. The exact solution is important for us because the mutator probability depends on the genome length in a highly non-trivial way.
Features in visual search combine linearly
Pramod, R. T.; Arun, S. P.
2014-01-01
Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
The golden ratio of nasal width to nasal bone length.
Goynumer, G; Yayla, M; Durukan, B; Wetherilt, L
2011-01-01
To calculate the ratio of fetal nasal width over nasal bone length at 14-39 weeks' gestation in Caucasian women. Fetal nasal bone length and nasal width at 14-39 weeks' gestation were measured in 532 normal fetuses. The mean and standard deviations of fetal nasal bone length, nasal width and their ratio to one another were calculated in normal fetuses according to the gestational age to establish normal values. A positive and linear correlation was detected between the nasal bone length and the gestational week, as between the nasal width and the gestational week. No linear growth pattern was found between the gestational week and the ratio of nasal width to nasal bone length, nearly equal to phi, throughout gestation. The ratio of nasal width to nasal bone length, approximately equal to phi, can be calculated at 14-38 weeks' gestation. This might be useful in evaluating fetal abnormalities.
NASA Astrophysics Data System (ADS)
Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You
2011-01-01
This paper proposes a novel method of multi-beam laser heterodyne measurement for metal linear expansion coefficient. Based on the Doppler effect and heterodyne technology, the information is loaded of length variation to the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of the oscillating mirror, this method can obtain many values of length variation caused by temperature variation after the multi-beam laser heterodyne signal demodulation simultaneously. Processing these values by weighted-average, it can obtain length variation accurately, and eventually obtain the value of linear expansion coefficient of metal by the calculation. This novel method is used to simulate measurement for linear expansion coefficient of metal rod under different temperatures by MATLAB, the obtained result shows that the relative measurement error of this method is just 0.4%.
De Vore, Karl W; Fatahi, Nadia M; Sass, John E
2016-08-01
Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1x(0.5) + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was fit best to a radical equation of the form y = B1x(0.5) + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are less potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
Disturbances of automatic gait control mechanisms in higher level gait disorder.
Danoudis, Mary; Ganesvaran, Ganga; Iansek, Robert
2016-07-01
The underlying mechanisms responsible for the gait changes in frontal gait disorder (FGD), a form of higher level gait disorders, are poorly understood. We investigated the relationship between stride length and cadence (SLCrel) in people with FGD (n=15) in comparison to healthy older adults (n=21) to improve our understanding of the changes to gait in FGD. Gait data was captured using an electronic walkway system as participants walked at five self-selected speed conditions: preferred, very slow, slow, fast and very fast. Linear regression was used to determine the strength of the relationship (R(2)), slope and intercept. In the FGD group 9 participants had a strong SLCrel (linear group) (R(2)>0.8) and 6 a weak relationship (R(2)<0.8) (nonlinear group). The linear FGD group did not differ to healthy control for slope (p>0.05) but did have a lower intercept (p<0.001). The linear FGD group modulated gait speed by adjusting stride length and cadence similar to controls whereas the nonlinear FGD participants adjusted stride length but not cadence similar to controls. The non-linear FGD group had greater disturbance to their gait, poorer postural control and greater fear of falling compared to the linear FGD group. Investigation of the SLCrel resulted in new insights into the underlying mechanisms responsible for the gait changes found in FGD. The findings suggest stride length regulation was disrupted in milder FGD but as the disorder worsened, cadence control also became disordered resulting in a break down in the relationship between stride length and cadence. Copyright © 2016 Elsevier B.V. All rights reserved.
The cost of colorectal cancer according to the TNM stage.
Mar, Javier; Errasti, Jose; Soto-Gordoa, Myriam; Mar-Barrutia, Gilen; Martinez-Llorente, José Miguel; Domínguez, Severina; García-Albás, Juan José; Arrospide, Arantzazu
2017-02-01
The aim of this study was to measure the cost of treatment of colorectal cancer in the Basque public health system according to the clinical stage. We retrospectively collected demographic data, clinical data and resource use of a sample of 529 patients. For stagesi toiii the initial and follow-up costs were measured. The calculation of cost for stageiv combined generalized linear models to relate the cost to the duration of follow-up based on parametric survival analysis. Unit costs were obtained from the analytical accounting system of the Basque Health Service. The sample included 110 patients with stagei, 171 with stageii, 158 with stageiii and 90 with stageiv colorectal cancer. The initial total cost per patient was 8,644€ for stagei, 12,675€ for stageii and 13,034€ for stageiii. The main component was hospitalization cost. Calculated by extrapolation for stageiv mean survival was 1.27years. Its average annual cost was 22,403€, and 24,509€ to death. The total annual cost for colorectal cancer extrapolated to the whole Spanish health system was 623.9million€. The economic burden of colorectal cancer is important and should be taken into account in decision-making. The combination of generalized linear models and survival analysis allows estimation of the cost of metastatic stage. Copyright © 2017 AEC. Publicado por Elsevier España, S.L.U. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R., E-mail: krp@northwestern.edu
2016-08-15
The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception–and traditional linear extrapolation methods are inappropriate for use on degenerate semiconductors, where the occupation of conduction band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectramore » of Sn-doped In{sub 2}O{sub 3} (ITO)—converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of optical band gap and Burstein–Moss shift that are consistent with previous studies on In{sub 2}O{sub 3} single crystals and thin films. - Highlights: • The Tauc method of band gap measurement is re-evaluated for crystalline materials. • Graphical method proposed for extracting optical band gaps from absorption spectra. • The proposed method incorporates an energy broadening term for energy transitions. • Values for ITO were self-consistent between two different measurement methods.« less
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum. The purpose of this project is to provide guidance for the Agency and the outside community on the application of the benchmark dose (BMD) appr
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Kumarasiri, A; Chetvertkov, M
2014-06-01
Purpose: One primary limitation of using CBCT images for H'N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for the dose of the day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H'N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken, were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformablymore » registering CT1 to CT2short and resampling with a wider FOV (42mm more from the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior boarders were 0.8 ± 0.5mm and 3.0 ± 1.5mm respectively, and increased to 2.7 ± 2.2mm and 5.9 ± 1.9mm at 4.2cm away. The mean error within CBCT borders was 1.16 ± 0.54mm . The overall errors within 4.2cm error expansion were 2.0 ± 1.2mm (sup) and 4.5 ± 1.6mm (inf). Conclusion: The overall error in inf direction is larger due to more large unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.« less
NASA Astrophysics Data System (ADS)
Mittal, R.; Rao, P.; Kaur, P.
2018-01-01
Elemental evaluations in scanty powdered material have been made using energy dispersive X-ray fluorescence (EDXRF) measurements, for which formulations along with specific procedure for sample target preparation have been developed. Fractional amount evaluation involves an itinerary of steps; (i) collection of elemental characteristic X-ray counts in EDXRF spectra recorded with different weights of material, (ii) search for linearity between X-ray counts and material weights, (iii) calculation of elemental fractions from the linear fit, and (iv) again linear fitting of calculated fractions with sample weights and its extrapolation to zero weight. Thus, elemental fractions at zero weight are free from material self absorption effects for incident and emitted photons. The analytical procedure after its verification with known synthetic samples of macro-nutrients, potassium and calcium, was used for wheat plant/ soil samples obtained from a pot experiment.
A consistent two-mutation model of bone cancer for two data sets of radium-injected beagles.
Bijwaard, H; Brugmans, M J P; Leenhouts, H P
2002-09-01
A two-mutation carcinogenesis model has been applied to model osteosarcoma incidence in two data sets of beagles injected with 226Ra. Taking age-specific retention into account, the following results have been obtained: (1) a consistent and well-fitting solution for all age and dose groups, (2) mutation rates that are linearly dependent on dose rate, with an exponential decrease for the second mutation at high dose rates, (3) a linear-quadratic dose-effect relationship, which indicates that care should be taken when extrapolating linearly, (4) highest cumulative incidences for injection at young adult age, and highest risks for injection doses of a few kBq kg(-1) at these ages, and (5) when scaled appropriately, the beagle model compares fairly well with a description for radium dial painters, suggesting that a consistent model description of bone cancer induction in beagles and humans may be possible.
Ficaro, E P; Fessler, J A; Rogers, W L; Schwaiger, M
1994-04-01
This study compares the ability of 241Am and 99mTc to estimate 201Tl attenuation maps while minimizing the loss in the precision of the emission data. A triple-head SPECT system with either an 241Am or 99mTc line source opposite a fan-beam collimator was used to estimate attenuation maps of the thorax of an anthropomorphic phantom. Linear attenuation values at 75 keV for 201Tl were obtained by linear extrapolation of the measured values from 241Am and 99mTc. Lung and soft-tissue estimates from both isotopes showed excellent agreement to within 3% of the measured values for 201Tl. Linear extrapolation did not yield satisfactory estimates for bone from either 241Am (+11.7%) or 99mTc (-15.3%). Patient data were used to estimate the dependence of crosstalk on patient size. Contamination from 201Tl in the transmission window was 5-6 times greater for 241Am compared to 99mTc, while the contamination in the 201Tl data in the transmission-emission detector head (head 1) was 4-5 times greater for 99mTc compared to 241Am. No contamination was detected in the 201Tl emission data of heads 2 and 3 from 241Am, whereas the 99mTc produced a small crosstalk component giving a signal-to-crosstalk ratio near 20:1. Measurements with a fillable chest phantom estimated the mean error introduced into the data from the removal of the crosstalk. Based on the measured data, 241Am is a suitable transmission source for simultaneous transmission-emission tomography for 201Tl cardiac studies.
Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent
2015-08-01
Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the efficiency of the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that using the SIMEX method, the ERR/Gy is increased by an amount of about 29 % for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR 10(-4) person-years at 1 Gy (the linear terms) is decreased by about 8 %, while the corrected quadratic term (EAR 10(-4) person-years/Gy(2)) is increased by about 65 % for leukaemia deaths based on a linear-quadratic model. The results with SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will enable to improve the risk estimates derived from LSS data, and help to make more reliable the development of radiation protection standards.
Humphries, T D; Sheppard, D A; Buckley, C E
2015-06-30
For homoleptic 18-electron complex hydrides, an inverse linear correlation has been established between the T-deuterium bond length (T = Fe, Co, Ni) and the average electronegativity of the metal countercations. This relationship can be further employed towards aiding structural solutions and predicting physical properties of novel complex transition metal hydrides.
Magnetized cosmological perturbations in the post-recombination era
NASA Astrophysics Data System (ADS)
Vasileiou, Hera; Tsagas, Christos G.
2016-01-01
We study inhomogeneous magnetized cosmologies through the post-recombination era in the framework of Newtonian gravity and the ideal-magnetohydrodynamic limit. The non-linear kinematic and dynamic equations are derived and linearized around the Newtonian counterpart of the Einstein-de Sitter universe. This allows for a direct comparison with the earlier relativistic treatments of the issue. Focusing on the evolution of linear density perturbations, we provide new analytic solutions which include the effects of the magnetic pressure as well as those of the field's tension. We confirm that the pressure of field inhibits the growth of density distortions and can induce a purely magnetic Jeans length. On scales larger than the aforementioned characteristic length the inhomogeneities grow, though slower than in non-magnetized universes. Wavelengths smaller than the magnetic Jeans length typically oscillate with decreasing amplitude. We also identify a narrow range of scales, just below the Jeans length, where the perturbations exhibit a slower power-law decay. In all cases, the effect of the field is proportional to its strength and increases as we move to progressively smaller lengths.
Small angle x-ray scattering of chromatin. Radius and mass per unit length depend on linker length.
Williams, S P; Langmore, J P
1991-01-01
Analyses of low angle x-ray scattering from chromatin, isolated by identical procedures but from different species, indicate that fiber diameter and number of nucleosomes per unit length increase with the amount of nucleosome linker DNA. Experiments were conducted at physiological ionic strength to obtain parameters reflecting the structure most likely present in living cells. Guinier analyses were performed on scattering from solutions of soluble chromatin from Necturus maculosus erythrocytes (linker length 48 bp), chicken erythrocytes (linker length 64 bp), and Thyone briareus sperm (linker length 87 bp). The results were extrapolated to infinite dilution to eliminate interparticle contributions to the scattering. Cross-sectional radii of gyration were found to be 10.9 +/- 0.5, 12.1 +/- 0.4, and 15.9 +/- 0.5 nm for Necturus, chicken, and Thyone chromatin, respectively, which are consistent with fiber diameters of 30.8, 34.2, and 45.0 nm. Mass per unit lengths were found to be 6.9 +/- 0.5, 8.3 +/- 0.6, and 11.8 +/- 1.4 nucleosomes per 10 nm for Necturus, chicken, and Thyone chromatin, respectively. The geometrical consequences of the experimental mass per unit lengths and radii of gyration are consistent with a conserved interaction among nucleosomes. Cross-linking agents were found to have little effect on fiber external geometry, but significant effect on internal structure. The absolute values of fiber diameter and mass per unit length, and their dependencies upon linker length agree with the predictions of the double-helical crossed-linker model. A compilation of all published x-ray scattering data from the last decade indicates that the relationship between chromatin structure and linker length is consistent with data obtained by other investigators. Images FIGURE 1 PMID:2049522
Studies of superresolution range-Doppler imaging
NASA Astrophysics Data System (ADS)
Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing; Yin, Jun; She, Zhishun
1993-02-01
This paper presents three superresolution imaging methods, including the linear prediction data extrapolation DFT (LPDEDFT), the dynamic optimization linear least squares (DOLLS), and the Hopfield neural network nonlinear least squares (HNNNLS). Live data of a metalized scale model B-52 aircraft, mounted on a rotating platform in a microwave anechoic chamber, have in this way been processed, as has a flying Boeing-727 aircraft. The imaging results indicate that, compared to the conventional Fourier method, either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle in imaging, or equal-quality images from smaller bandwidth and total rotation, angle may be obtained by these superresolution approaches. Moreover, these methods are compared in respect of their resolution capability and computational complexity.
Method and system for non-linear motion estimation
NASA Technical Reports Server (NTRS)
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal including determining a first motion vector between a first pixel position in a first image to a second pixel position in a second image, determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image, determining a third motion vector between one of the first pixel position in the first image and the second pixel position in the second image, and the second pixel position in the second image and the third pixel position in the third image using a non-linear model, determining a position of the fourth pixel in a fourth image based upon the third motion vector.
Injection of coal by screw feed
NASA Technical Reports Server (NTRS)
Fisher, R.
1977-01-01
The use of the screw feeder for injecting solids through a 20 to 30 psi barrier is common practice in the cement making industry. An analytical extrapolation of that design, accounting for pressure holding characteristics of a column of solids, shows that coal can be fed to zones at several hundred psi with minimal or no loss of gas. A series of curves showing the calculated pressure gradient through a moving column of solids is presented. Mean particle size, solids velocity, and column length are parameters. Further study of this system to evaluate practicality is recommended.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demanins, F.; Rado, V.; Vinci, F.
1963-04-01
The macroscopic absorption cross section, diffusion constant, diffusion cooling constant, transport mean free patu, extrapolated distance, diffusion length, and mean life for thermal neutrons were determined for Dowtherm A at 20 deg C, using a pulsed neutron source. The experimental assembly and data analysis method are described, and the results are compared with other determinations. (auth)
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development.
Burt, T; Yoshida, K; Lappin, G; Vuong, L; John, C; de Wildt, S N; Sugiyama, Y; Rowland, M
2016-04-01
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications and design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. All phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.
High speed civil transport: Sonic boom softening and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Cheung, Samson
1994-01-01
An improvement in sonic boom extrapolation techniques has been the desire of aerospace designers for years. This is because the linear acoustic theory developed in the 60's is incapable of predicting the nonlinear phenomenon of shock wave propagation. On the other hand, CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought the sonic boom extrapolation techniques up to the standards of the 90's. Parallel computing is a fast growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization making use of the computational resources of workstations, which unlike supercomputers can spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of the flow separation, and also aided the design by optimizing the wing deflection for roll trim.
Abu Bakar, S N; Aspalilah, A; AbdelNasser, I; Nurliza, A; Hairuliza, M J; Swarhib, M; Das, S; Mohd Nor, F
2017-01-01
Stature is one of the characteristics that could be used to identify human, besides age, sex and racial affiliation. This is useful when the body found is either dismembered, mutilated or even decomposed, and helps in narrowing down the missing person's identity. The main aim of the present study was to construct regression functions for stature estimation by using lower limb bones in the Malaysian population. The sample comprised 87 adult individuals (81 males, 6 females) aged between 20 to 79 years. The parameters such as thigh length, lower leg length, leg length, foot length, foot height and foot breadth were measured. They were measured by a ruler and measuring tape. Statistical analysis involved independent t-test to analyse the difference between lower limbs in male and female. The Pearson's correlation test was used to analyse correlations between lower limb parameters and stature, and the linear regressions were used to form equations. The paired t-test was used to compare between actual stature and estimated stature by using the equations formed. Using independent t-test, there was a significant difference (p< 0.05) in the measurement between males and females with regard to leg length, thigh length, lower leg length, foot length and foot breadth. The thigh length, leg length and foot length were observed to have strong correlations with stature with p= 0.75, p= 0.81 and p= 0.69, respectively. Linear regressions were formulated for stature estimation. Paired t-test showed no significant difference between actual stature and estimated stature. It is concluded that regression functions can be used to estimate stature to identify skeletal remains in the Malaysia population.
Near Wake Depletion of Non-Magnetized Bodies Immersed in Mesosonic Plasma Flow
NASA Technical Reports Server (NTRS)
Wright, K. H.; Stone, N. H.; Samir, U.; Sorensen, J.; Winningham, J. D.
1997-01-01
During the recent TSS-1R mission, measurements of ion depletion in the near wake were obtained at a downstream distance of two body radii from the satellite center. The ratio of satellite radius to Debye length is approximately 150. Similar measurements were also obtained at the same downstream location in the wake of the shuttle during the Spacelab 2 mission of August 1985. In the case of the shuttle, the ratio of body radius to Debye length is greater than 1000. The wake depletion observed in the these two cases, together with data obtained from previous ionospheric satellites and from applicable laboratory experiments involving small bodies, will be compared in order to determine the influence of body size on wake filling. Extrapolation of these results to the case of the moon in the solar wind will be noted.
The GMO Sumrule and the πNN Coupling Constant
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.
The isovector GMO sumrule for forward πN scattering is critically evaluated using the precise π-p and π-d scattering lengths obtained recently from pionic atom measurements. The charged πNN coupling constant is then deduced with careful analysis of systematic and statistical sources of uncertainties. This determination gives directly from data gc2(GMO)/4π = 14.17±0.09 (statistic) ±0.17 (systematic) or fc2/ 4π=0.078(11). This value is half-way between that of indirect methods (phase-shift analyses) and the direct evaluation from from backward np differential scattering cross sections (extrapolation to pion pole). From the π-p and π-d scattering lengths our analysis leads also to accurate values for (1/2)(aπ-p+aπ-n) and (1/2) (aπ-p-aπ-n).
Hadron-Hadron Interactions from Nf=2 +1 +1 lattice QCD: Isospin-1 K K scattering length
NASA Astrophysics Data System (ADS)
Helmes, C.; Jost, C.; Knippschild, B.; Kostrzewa, B.; Liu, L.; Urbach, C.; Werner, M.; ETM Collaboration
2017-08-01
We present results for the interaction of two kaons at maximal isospin. The calculation is based on Nf=2 +1 +1 flavor gauge configurations generated by the European Twisted Mass Collaboration with pion masses ranging from about 230 MeV to 450 MeV at three values of the lattice spacing. The elastic scattering length a0I =1 is calculated at several values of the bare strange and light quark masses. We find MKa0=-0.385 (16 )stat(+0/-12)ms(+0/-5)ZP(4 )rf as the result of a combined extrapolation to the continuum and to the physical point, where the first error is statistical, and the three following are systematical. This translates to a0=-0.154 (6 )stat(-5+0)ms(-2+0)ZP(2 )rf fm .
A Rational Approach to Determine Minimum Strength Thresholds in Novel Structural Materials
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Bilen, Canan; Sterling, Jerry
2003-01-01
Design of safe and survivable structures requires the availability of guaranteed minimum strength thresholds for structural materials to enable a meaningful comparison of strength requirement and available strength. This paper develops a procedure for determining such a threshold with a desired degree of confidence, for structural materials with none or minimal industrial experience. The problem arose in attempting to use a new, highly weight-efficient structural load tendon material to achieve a lightweight super-pressure balloon. The developed procedure applies to lineal (one dimensional) structural elements. One important aspect of the formulation is that it extrapolates to expected probability distributions for long length specimen samples from some hypothesized probability distribution that has been obtained from a shorter length specimen sample. The use of the developed procedure is illustrated using both real and simulated data.
NASA Astrophysics Data System (ADS)
Svoboda, Aaron A.; Forbes, Jeffrey M.; Miyahara, Saburo
2005-11-01
A self-consistent global tidal climatology, useful for comparing and interpreting radar observations from different locations around the globe, is created from space-based Upper Atmosphere Research Satellite (UARS) horizontal wind measurements. The climatology created includes tidal structures for horizontal winds, temperature and relative density, and is constructed by fitting local (in latitude and height) UARS wind data at 95 km to a set of basis functions called Hough mode extensions (HMEs). These basis functions are numerically computed modifications to Hough modes and are globally self-consistent in wind, temperature, and density. We first demonstrate this self-consistency with a proxy data set from the Kyushu University General Circulation Model, and then use a linear weighted superposition of the HMEs obtained from monthly fits to the UARS data to extrapolate the global, multi-variable tidal structure. A brief explanation of the HMEs’ origin is provided as well as information about a public website that has been set up to make the full extrapolated data sets available.
Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz
2011-01-01
SUMMARY Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732
Development of MCAERO wing design panel method with interactive graphics module
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
NASA Astrophysics Data System (ADS)
Chicrala, André; Dallaqua, Renato Sergio; Antunes Vieira, Luis Eduardo; Dal Lago, Alisson; Rodríguez Gómez, Jenny Marcela; Palacios, Judith; Coelho Stekel, Tardelli Ronan; Rezende Costa, Joaquim Eduardo; da Silva Rockenbach, Marlos
2017-10-01
The behavior of Active Regions (ARs) is directly related to the occurrence of some remarkable phenomena in the Sun such as solar flares or coronal mass ejections (CME). In this sense, changes in the magnetic field of the region can be used to uncover other relevant features like the evolution of the ARs magnetic structure and the plasma flow related to it. In this work we describe the evolution of the magnetic structure of the active region AR NOAA12443 observed from 2015/10/30 to 2015/11/10, which may be associated with several X-ray flares of classes C and M. The analysis is based on observations of the solar surface and atmosphere provided by HMI and AIA instruments on board of the SDO spacecraft. In order to investigate the magnetic energy buildup and release of the ARs, we shall employ potential and linear force free extrapolations based on the solar surface magnetic field distribution and the photospheric velocity fields.
High-efficiency acceleration in the laser wakefield by a linearly increasing plasma density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Kegong; Wu, Yuchi; Zhu, Bin
The acceleration length and the peak energy of the electron beam are limited by the dephasing effect in the laser wakefield acceleration with uniform plasma density. Based on 2D-3V particle in cell simulations, the effects of a linearly increasing plasma density on the electron acceleration are investigated broadly. Comparing with the uniform plasma density, because of the prolongation of the acceleration length and the gradually increasing accelerating field due to the increasing plasma density, the electron beam energy is twice higher in moderate nonlinear wakefield regime. Because of the lower plasma density, the linearly increasing plasma density can also avoidmore » the dark current caused by additional injection. At the optimal acceleration length, the electron energy can be increased from 350 MeV (uniform) to 760 MeV (linearly increasing) with the energy spread of 1.8%, the beam duration is 5 fs and the beam waist is 1.25 μm. This linearly increasing plasma density distribution can be achieved by a capillary with special gas-filled structure, and is much more suitable for experiment.« less
Uematsu, Hironori; Yamashita, Kazuto; Kunisawa, Susumu; Fushimi, Kiyohide
2017-01-01
Objectives The nationwide impact of antimicrobial-resistant infections on healthcare facilities throughout Japan has yet to be examined. This study aimed to estimate the disease burden of methicillin-resistant Staphylococcus aureus (MRSA) infections in Japanese hospitals. Design Retrospective analysis of inpatients comparing outcomes between subjects with and without MRSA infection. Data source A nationwide administrative claims database. Setting 1133 acute care hospitals throughout Japan. Participants All surgical and non-surgical inpatients who were discharged between April 1, 2014 and March 31, 2015. Main outcome measures Disease burden was assessed using hospitalization costs, length of stay, and in-hospital mortality. Using a unique method of infection identification, we categorized patients into an anti-MRSA drug group and a control group based on anti-MRSA drug utilization. To estimate the burden of MRSA infections, we calculated the differences in outcome measures between these two groups. The estimates were extrapolated to all 1584 acute care hospitals in Japan that have adopted a prospective payment system. Results We categorized 93 838 patients into the anti-MRSA drug group and 2 181 827 patients into the control group. The mean hospitalization costs, length of stay, and in-hospital mortality of the anti-MRSA drug group were US$33 548, 75.7 days, and 22.9%, respectively; these values were 3.43, 2.95, and 3.66 times that of the control group, respectively. When extrapolated to the 1584 hospitals, the total incremental burden of MRSA was estimated to be US$2 billion (3.41% of total hospitalization costs), 4.34 million days (3.02% of total length of stay), and 14.3 thousand deaths (3.62% of total mortality). Conclusions This study quantified the approximate disease burden of MRSA infections in Japan. These findings can inform policymakers on the burden of antimicrobial-resistant infections and support the application of infection prevention programs. PMID:28654675
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Haixia; Li, Bo; Huang, Zhenghua
How the solar corona is heated to high temperatures remains an unsolved mystery in solar physics. In the present study we analyze observations of 50 whole active region loops taken with the Extreme-ultraviolet Imaging Spectrometer on board the Hinode satellite. Eleven loops were classified as cool loops (<1 MK) and 39 as warm loops (1–2 MK). We study their plasma parameters, such as densities, temperatures, filling factors, nonthermal velocities, and Doppler velocities. We combine spectroscopic analysis with linear force-free magnetic field extrapolation to derive the 3D structure and positioning of the loops, their lengths and heights, and the magnetic fieldmore » strength along the loops. We use density-sensitive line pairs from Fe xii, Fe xiii, Si x, and Mg vii ions to obtain electron densities by taking special care of intensity background subtraction. The emission measure loci method is used to obtain the loop temperatures. We find that the loops are nearly isothermal along the line of sight. Their filling factors are between 8% and 89%. We also compare the observed parameters with the theoretical Rosner–Tucker–Vaiana (RTV) scaling law. We find that most of the loops are in an overpressure state relative to the RTV predictions. In a follow-up study, we will report a heating model of a parallel-cascade-based mechanism and will compare the model parameters with the loop plasma and structural parameters derived here.« less
Johnston, I A; Salamonski, J
1984-07-01
Single white fibres and small bundles (two to three) of red fibres were isolated from the trunk muscle of Pacific Blue Marlin (50-121 kg body weight). Fibres were chemically skinned with 1% Brij. Maximum Ca2+-activated force production (Po) was 57 kN m-2 for red fibres and 176 kN m-2 for white fibres at 25 degrees C. The force-velocity (P-V) characteristics of these fibres were determined at 15 and 25 degrees C. Points below 0.6 Po on the P-V curve could be fitted to a linear form of Hill's equation. The degree of curvature of the P-V curve was similar at 15 and 25 degrees C (Hill's constant a/Po = 0.24 and 0.12 for red and white fibres respectively). Extrapolated maximum contraction velocities (Vmax) were 2.5 muscle lengths s-1 (Lo S-1) (red fibres) and 5.3 Lo S-1 (white fibres) at 25 degrees C. Q10(15-25 degrees C) values for Vmax were 1.4 and 1.3 for red and white fibres respectively. Maximum power output had a similar low temperature dependence and amounted to 13 W kg-1 for red and 57 W kg-1 for white muscle at 25 degrees C. The results are briefly discussed in relation to the locomotion and ecology of marlin.
Wetland losses related to fault movement and hydrocarbon production, southeastern Texas coast
White, William A.; Morton, Robert A.
1997-01-01
Time series analyses of surface fault activity and nearby hydrocarbon production from the southeastern Texas coast show a high correlation among volume of produced fluids, timing of fault activation, rates of subsidence, and rates of wetland loss. Greater subsidence on the downthrown sides of faults contributes to more frequent flooding and generally wetter conditions, which are commonly reflected by changes in plant communities {e.g., Spartina patens to Spartina alterniflora) or progressive transformation of emergent vegetation to open water. Since the 1930s and 1950s, approximately 5,000 hectares of marsh habitat has been lost as a result of subsidence associated with faulting. Marsh- es have expanded locally along faults where hydrophytic vegetation has spread into former upland areas. Fault traces are linear to curvilinear and are visible because elevation differences across faults alter soil hydrology and vegetation. Fault lengths range from 1 to 13.4 km and average 3.8 km. Seventy-five percent of the faults visible on recent aerial photographs are not visible on photographs taken in the 1930's, indicating relatively recent fault movement. At least 80% of the surface faults correlate with extrapolated subsurface faults; the correlation increases to more than 90% when certain assumptions are made to compensate for mismatches in direction of displacement. Coastal wetlands loss in Texas associated with hydrocarbon extraction will likely increase where production in mature fields is prolonged without fiuid reinjection.
Twist-writhe partitioning in a coarse-grained DNA minicircle model
NASA Astrophysics Data System (ADS)
Sayar, Mehmet; Avşaroǧlu, Barış; Kabakçıoǧlu, Alkan
2010-04-01
Here we present a systematic study of supercoil formation in DNA minicircles under varying linking number by using molecular-dynamics simulations of a two-bead coarse-grained model. Our model is designed with the purpose of simulating long chains without sacrificing the characteristic structural properties of the DNA molecule, such as its helicity, backbone directionality, and the presence of major and minor grooves. The model parameters are extracted directly from full-atomistic simulations of DNA oligomers via Boltzmann inversion; therefore, our results can be interpreted as an extrapolation of those simulations to presently inaccessible chain lengths and simulation times. Using this model, we measure the twist/writhe partitioning in DNA minicircles, in particular its dependence on the chain length and excess linking number. We observe an asymmetric supercoiling transition consistent with experiments. Our results suggest that the fraction of the linking number absorbed as twist and writhe is nontrivially dependent on chain length and excess linking number. Beyond the supercoiling transition, chains of the order of one persistence length carry equal amounts of twist and writhe. For longer chains, an increasing fraction of the linking number is absorbed by the writhe.
BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...
The purpose of this document is to provide guidance for the Agency on the application of the benchmark dose approach in determining the point of departure (POD) for health effects data, whether a linear or nonlinear low dose extrapolation is used. The guidance includes discussion on computation of benchmark doses and benchmark concentrations (BMDs and BMCs) and their lower confidence limits, data requirements, dose-response analysis, and reporting requirements. This guidance is based on today's knowledge and understanding, and on experience gained in using this approach.
The correlation of fractal structures in the photospheric and the coronal magnetic field
NASA Astrophysics Data System (ADS)
Dimitropoulou, M.; Georgoulis, M.; Isliker, H.; Vlahos, L.; Anastasiadis, A.; Strintzi, D.; Moussas, X.
2009-10-01
Context: This work examines the relation between the fractal properties of the photospheric magnetic patterns and those of the coronal magnetic fields in solar active regions. Aims: We investigate whether there is any correlation between the fractal dimensions of the photospheric structures and the magnetic discontinuities formed in the corona. Methods: To investigate the connection between the photospheric and coronal complexity, we used a nonlinear force-free extrapolation method that reconstructs the 3d magnetic fields using 2d observed vector magnetograms as boundary conditions. We then located the magnetic discontinuities, which are considered as spatial proxies of reconnection-related instabilities. These discontinuities form well-defined volumes, called here unstable volumes. We calculated the fractal dimensions of these unstable volumes and compared them to the fractal dimensions of the boundary vector magnetograms. Results: Our results show no correlation between the fractal dimensions of the observed 2d photospheric structures and the extrapolated unstable volumes in the corona, when nonlinear force-free extrapolation is used. This result is independent of efforts to (1) bring the photospheric magnetic fields closer to a nonlinear force-free equilibrium and (2) omit the lower part of the modeled magnetic field volume that is almost completely filled by unstable volumes. A significant correlation between the fractal dimensions of the photospheric and coronal magnetic features is only observed at the zero level (lower limit) of approximation of a current-free (potential) magnetic field extrapolation. Conclusions: We conclude that the complicated transition from photospheric non-force-free fields to coronal force-free ones hampers any direct correlation between the fractal dimensions of the 2d photospheric patterns and their 3d counterparts in the corona at the nonlinear force-free limit, which can be considered as a second level of approximation in this study. Correspondingly, in the zero and first levels of approximation, namely, the potential and linear force-free extrapolation, respectively, we reveal a significant correlation between the fractal dimensions of the photospheric and coronal structures, which can be attributed to the lack of electric currents or to their purely field-aligned orientation.
Effect of active shortening on the rate of ATP utilisation by rabbit psoas muscle fibres
Sun, Y-B; Hilber, K; Irving, M
2001-01-01
The rate of ATP utilisation during active shortening of single skinned fibres from rabbit psoas muscle at 10 °C was measured using an NADH-linked assay. Fibres were immersed in silicone oil and illuminated with 365 nm light. The amounts of NADH and carboxytetramethylrhodamine (CTMR) in the illuminated region of the fibre were measured simultaneously from fluorescence emission at 425–475 and 570–650 nm, respectively. The ratio of these two signals was used to determine the intracellular concentration of NADH, and thus the ATP utilisation, without interference from movements of the fibre with respect to the measuring light beam. The total extra ATP utilisation due to shortening (ΔATP) was determined by extrapolation of the steady isometric rates before and after shortening to the mid-point of the shortening period. ΔATP had a roughly linear dependence on the extent of shortening in the range 1–15% fibre length (L0) at a shortening velocity of 0.4 L0 s−1 from initial sarcomere length 2.7 μm. For shortening of 1%L0, ΔATP was 21 ± 1 μm (mean ±s.e.m., n = 3). The mean rate of ATP utilisation during ramp shortening of 10%L0 had a roughly linear dependence on shortening velocity in the range 0.05–1.2 L0 s−1. During unloaded shortening at 1.2 L0 s−1 the mean rate of ATP utilisation was 1.7 mm s−1, about 9 times the isometric rate. ΔATP was roughly independent of shortening velocity, and was 84 ± 9 μm (mean ±s.e.m., n = 6) for shortening of 10%L0. The implications of these results for mechanical-chemical coupling in muscle are discussed. The total ATP utilisation associated with shortening of 1%L0 is only about 17% of the concentration of the myosin heads in the fibre, suggesting that during isometric contraction either less than 17% of the myosin heads are attached to actin, or that heads can detach without commitment to ATP splitting. The fraction of myosin heads attached to actin during unloaded shortening is estimated from the rate of ATP utilisation to be less than 7%. PMID:11251058
NASA Astrophysics Data System (ADS)
Koitz, Ralph; Soini, Thomas M.; Genest, Alexander; Trickey, S. B.; Rösch, Notker
2012-07-01
The performance of eight generalized gradient approximation exchange-correlation (xc) functionals is assessed by a series of scalar relativistic all-electron calculations on octahedral palladium model clusters Pdn with n = 13, 19, 38, 55, 79, 147 and the analogous clusters Aun (for n up through 79). For these model systems, we determined the cohesive energies and average bond lengths of the optimized octahedral structures. We extrapolate these values to the bulk limits and compare with the corresponding experimental values. While the well-established functionals BP, PBE, and PW91 are the most accurate at predicting energies, the more recent forms PBEsol, VMTsol, and VT{84}sol significantly improve the accuracy of geometries. The observed trends are largely similar for both Pd and Au. In the same spirit, we also studied the scalability of the ionization potentials and electron affinities of the Pd clusters, and extrapolated those quantities to estimates of the work function. Overall, the xc functionals can be classified into four distinct groups according to the accuracy of the computed parameters. These results allow a judicious selection of xc approximations for treating transition metal clusters.
Tunnel current across linear homocatenated germanium chains
NASA Astrophysics Data System (ADS)
Matsuura, Yukihito
2014-01-01
The electronic transport properties of germanium oligomers catenating into linear chains (linear Ge chains) have been theoretically studied using first principle methods. The conduction mechanism of a Ge chain sandwiched between gold electrodes was analyzed based on the density of states and the eigenstates of the molecule in a two-probe environment. Like that of silicon chains (Si chains), the highest occupied molecular orbital of Ge chains contains the extended σ-conjugation of Ge 4p orbitals at energy levels close to the Fermi level; this is in contrast to the electronic properties of linear carbon chains. Furthermore, the conductance of a Ge chain is expected to decrease exponentially with molecular length L. The decay constant β, which is defined as e-βL, of a Ge chain is similar to that of a Si chain, whereas the conductance of the Ge chains is higher than that of Si chains even though the Ge-Ge bond length is longer than the Si-Si bond length.
Sonographic Measurement of Fetal Ear Length in Turkish Women with a Normal Pregnancy
Özdemir, Mucize Eriç; Uzun, Işıl; Karahasanoğlu, Ayşe; Aygün, Mehmet; Akın, Hale; Yazıcıoğlu, Fehmi
2014-01-01
Background: Abnormal fetal ear length is a feature of chromosomal disorders. Fetal ear length measurement is a simple measurement that can be obtained during ultrasonographic examinations. Aims: To develop a nomogram for fetal ear length measurements in our population and investigate the correlation between fetal ear length, gestational age, and other standard fetal biometric measurements. Study Design: Cohort study. Methods: Ear lengths of the fetuses were measured in normal singleton pregnancies. The relationship between gestational age and fetal ear length in millimetres was analysed by simple linear regression. In addition, the correlation of fetal ear length measurements with biparietal diameter, head circumference, abdominal circumference, and femur length were evaluated.Ear length measurements were obtained from fetuses in 389 normal singleton pregnancies ranging between 16 and 28 weeks of gestation. Results: A nomogram was developed by linear regression analysis of the parameters ear length and gestational age. Fetal ear length (mm) = y = (1.348 X gestational age)−12.265), where gestational ages is in weeks. A high correlation was found between fetal ear length and gestational age, and a significant correlation was also found between fetal ear length and the biparietal diameter (r=0.962; p<0.001). Similar correlations were found between fetal ear length and head circumference, and fetal ear length and femur length. Conclusion: The results of this study provide a nomogram for fetal ear length. The study also demonstrates the relationship between ear length and other biometric measurements. PMID:25667783
Investigations on Two Co-C Fixed-Point Cells Prepared at INRIM and LNE-Cnam
NASA Astrophysics Data System (ADS)
Battuello, M.; Florio, M.; Sadli, M.; Bourson, F.
2011-08-01
INRIM and LNE-Cnam agreed to undertake a collaboration aimed to verify, through the use of metal-carbon eutectic fixed-point cells, methods and facilities used for defining the transition temperature of eutectic fixed points and manufacturing procedures of cells. To this purpose and as a first step of the cooperation, a Co-C cell manufactured at LNE-Cnam was measured at INRIM and compared with a local cell. The two cells were of different designs: the INRIM cell of 10 cm3 inner volume and the LNE-Cnam one of 3.9 cm3. The external dimensions of the two cells were noticeably different, namely, 40 mm in length and 24 mm in diameter for the LNE-Cnam cell 3Co4 and 110 mm in length and 42 mm in diameter for the INRIM cell. Consequently, the investigation of the effect of temperature distributions in the heating furnace was undertaken by implementing the cells inside single-zone and three-zone furnaces. The transition temperature of the cell was determined at the two institutes making use of different techniques: at INRIM radiation scales at 900 nm, 950 nm, and 1.6 μm were realized from In to Cu and then used to define T 90(Co-C) by extrapolation. At LNE-Cnam, a radiance comparator based on a grating monochromator was used for the extrapolation from the Cu fixed point. This paper presents a comparative description of the cells and the manufacturing methods and the results in terms of equivalence between the two cells and melting temperatures determined at INRIM and LNE-Cnam.
Is DNA a worm-like chain in Couette flow? In search of persistence length, a critical review.
Rittman, Martyn; Gilroy, Emma; Koohya, Hashem; Rodger, Alison; Richards, Adair
2009-01-01
Persistence length is the foremost measure of DNA flexibility. Its origins lie in polymer theory which was adapted for DNA following the determination of BDNA structure in 1953. There is no single definition of persistence length used, and the links between published definitions are based on assumptions which may, or may not be, clearly stated. DNA flexibility is affected by local ionic strength, solvent environment, bound ligands and intrinsic sequence-dependent flexibility. This article is a review of persistence length providing a mathematical treatment of the relationships between four definitions of persistence length, including: correlation, Kuhn length, bending, and curvature. Persistence length has been measured using various microscopy, force extension and solution methods such as linear dichroism and transient electric birefringence. For each experimental method a model of DNA is required to interpret the data. The importance of understanding the underlying models, along with the assumptions required by each definition to determine a value of persistence length, is highlighted for linear dichroism data, where it transpires that no model is currently available for long DNA or medium to high shear rate experiments.
Enumeration of Extended m-Regular Linear Stacks.
Guo, Qiang-Hui; Sun, Lisa H; Wang, Jian
2016-12-01
The contact map of a protein fold in the two-dimensional (2D) square lattice has arc length at least 3, and each internal vertex has degree at most 2, whereas the two terminal vertices have degree at most 3. Recently, Chen, Guo, Sun, and Wang studied the enumeration of [Formula: see text]-regular linear stacks, where each arc has length at least [Formula: see text] and the degree of each vertex is bounded by 2. Since the two terminal points in a protein fold in the 2D square lattice may form contacts with at most three adjacent lattice points, we are led to the study of extended [Formula: see text]-regular linear stacks, in which the degree of each terminal point is bounded by 3. This model is closed to real protein contact maps. Denote the generating functions of the [Formula: see text]-regular linear stacks and the extended [Formula: see text]-regular linear stacks by [Formula: see text] and [Formula: see text], respectively. We show that [Formula: see text] can be written as a rational function of [Formula: see text]. For a certain [Formula: see text], by eliminating [Formula: see text], we obtain an equation satisfied by [Formula: see text] and derive the asymptotic formula of the numbers of [Formula: see text]-regular linear stacks of length [Formula: see text].
NASA Astrophysics Data System (ADS)
Apdilah, D.; Harahap, M. K.; Khairina, N.; Husein, A. M.; Harahap, M.
2018-04-01
One Time Pad algorithm always requires a pairing of the key for plaintext. If the length of keys less than a length of the plaintext, the key will be repeated until the length of the plaintext same with the length of the key. In this research, we use Linear Congruential Generator and Quadratic Congruential Generator for generating a random number. One Time Pad use a random number as a key for encryption and decryption process. Key will generate the first letter from the plaintext, we compare these two algorithms in terms of time speed encryption, and the result is a combination of OTP with LCG faster than the combination of OTP with QCG.
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
NASA Technical Reports Server (NTRS)
Hada, M.; George, Kerry; Cucinotta, Francis A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high LET radiation exposure. Estimates of risks from low-dose and low-dose-rates are often extrapolated using data from Japanese atomic bomb survivors with either linear or linear quadratic models of fit. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblasts cells after exposure to very low dose (1-20 cGy) of 170 MeV/u Si-28- ions or 600 MeV/u Fe-56-ions. Chromosomes were analyzed using the whole chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving greater than 2 breaks in 2 or more chromosomes). The curves for doses above 10 cGy were fitted with linear or linear-quadratic functions. For Si-28- ions no dose response was observed in the 2-10 cGy dose range, suggesting a non-target effect in this range.
NASA Astrophysics Data System (ADS)
Bischoff, Jan-Moritz; Jeckelmann, Eric
2017-11-01
We improve the density-matrix renormalization group (DMRG) evaluation of the Kubo formula for the zero-temperature linear conductance of one-dimensional correlated systems. The dynamical DMRG is used to compute the linear response of a finite system to an applied ac source-drain voltage; then the low-frequency finite-system response is extrapolated to the thermodynamic limit to obtain the dc conductance of an infinite system. The method is demonstrated on the one-dimensional spinless fermion model at half filling. Our method is able to replicate several predictions of the Luttinger liquid theory such as the renormalization of the conductance in a homogeneous conductor, the universal effects of a single barrier, and the resonant tunneling through a double barrier.
Effect of stride length on overarm throwing delivery: A linear momentum response.
Ramsey, Dan K; Crotin, Ryan L; White, Scott
2014-12-01
Changing stride length during overhand throwing delivery is thought to alter total body and throwing arm linear momentums, thereby altering the proportion of throwing arm momentum relative to the total body. Using a randomized cross-over design, nineteen pitchers (15 collegiate and 4 high school) were assigned to pitch two simulated 80-pitch games at ±25% of their desired stride length. An 8-camera motion capture system (240Hz) integrated with two force plates (960Hz) and radar gun tracked each throw. Segmental linear momentums in each plane of motion were summed yielding throwing arm and total body momentums, from which compensation ratio's (relative contribution between the two) were derived. Pairwise comparisons at hallmark events and phases identified significantly different linear momentum profiles, in particular, anteriorly directed total body, throwing arm, and momentum compensation ratios (P⩽.05) as a result of manipulating stride length. Pitchers with shorter strides generated lower forward (anterior) momentum before stride foot contact, whereas greater upward and lateral momentum (toward third base) were evident during the acceleration phase. The evidence suggests insufficient total body momentum in the intended throwing direction may potentially influence performance (velocity and accuracy) and perhaps precipitate throwing arm injuries. Copyright © 2014 Elsevier B.V. All rights reserved.
High-Precision Determination of the Pion-Nucleon σ Term from Roy-Steiner Equations.
Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G
2015-08-28
We present a determination of the pion-nucleon (πN) σ term σ_{πN} based on the Cheng-Dashen low-energy theorem (LET), taking advantage of the recent high-precision data from pionic atoms to pin down the πN scattering lengths as well as of constraints from analyticity, unitarity, and crossing symmetry in the form of Roy-Steiner equations to perform the extrapolation to the Cheng-Dashen point in a reliable manner. With isospin-violating corrections included both in the scattering lengths and the LET, we obtain σ_{πN}=(59.1±1.9±3.0) MeV=(59.1±3.5) MeV, where the first error refers to uncertainties in the πN amplitude and the second to the LET. Consequences for the scalar nucleon couplings relevant for the direct detection of dark matter are discussed.
High-Precision Determination of the Pion-Nucleon σ Term from Roy-Steiner Equations
NASA Astrophysics Data System (ADS)
Hoferichter, Martin; Ruiz de Elvira, Jacobo; Kubis, Bastian; Meißner, Ulf-G.
2015-08-01
We present a determination of the pion-nucleon (π N ) σ term σπ N based on the Cheng-Dashen low-energy theorem (LET), taking advantage of the recent high-precision data from pionic atoms to pin down the π N scattering lengths as well as of constraints from analyticity, unitarity, and crossing symmetry in the form of Roy-Steiner equations to perform the extrapolation to the Cheng-Dashen point in a reliable manner. With isospin-violating corrections included both in the scattering lengths and the LET, we obtain σπ N=(59.1 ±1.9 ±3.0 ) MeV =(59.1 ±3.5 ) MeV , where the first error refers to uncertainties in the π N amplitude and the second to the LET. Consequences for the scalar nucleon couplings relevant for the direct detection of dark matter are discussed.
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes.
Uhl, Jonathan T; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A W; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R; Liaw, P K; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A
2015-11-17
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes
Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A. W.; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J.; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R.; Liaw, P. K.; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A.
2015-01-01
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes. PMID:26572103
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simplemore » mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stressdependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.« less
Berngard, Samuel Clark; Berngard, Jennifer Bishop; Krebs, Nancy F; Garcés, Ana; Miller, Leland V; Westcott, Jamie; Wright, Linda L; Kindem, Mark; Hambidge, K Michael
2013-12-01
Stunting is prevalent by the age of 6 months in the indigenous population of the Western Highlands of Guatemala. The objective of this study was to determine the time course and predictors of linear growth failure and weight-for-age in early infancy. One hundred and forty eight term newborns had measurements of length and weight in their homes, repeated at 3 and 6 months. Maternal measurements were also obtained. Mean ± SD length-for-age Z-score (LAZ) declined from newborn -1.0 ± 1.01 to -2.20 ± 1.05 and -2.26 ± 1.01 at 3 and 6 months respectively. Stunting rates for newborn, 3 and 6 months were 47%, 53% and 56% respectively. A multiple regression model (R(2) = 0.64) demonstrated that the major predictor of LAZ at 3 months was newborn LAZ with the other predictors being newborn weight-for-age Z-score (WAZ), gender and maternal education∗maternal age interaction. Because WAZ remained essentially constant and LAZ declined during the same period, weight-for-length Z-score (WLZ) increased from -0.44 to +1.28 from birth to 3 months. The more severe the linear growth failure, the greater WAZ was in proportion to the LAZ. The primary conclusion is that impaired fetal linear growth is the major predictor of early infant linear growth failure indicating that prevention needs to start with maternal interventions. © 2013.
Excess entropy scaling for the segmental and global dynamics of polyethylene melts.
Voyiatzis, Evangelos; Müller-Plathe, Florian; Böhm, Michael C
2014-11-28
The range of validity of the Rosenfeld and Dzugutov excess entropy scaling laws is analyzed for unentangled linear polyethylene chains. We consider two segmental dynamical quantities, i.e. the bond and the torsional relaxation times, and two global ones, i.e. the chain diffusion coefficient and the viscosity. The excess entropy is approximated by either a series expansion of the entropy in terms of the pair correlation function or by an equation of state for polymers developed in the context of the self associating fluid theory. For the whole range of temperatures and chain lengths considered, the two estimates of the excess entropy are linearly correlated. The scaled bond and torsional relaxation times fall into a master curve irrespective of the chain length and the employed scaling scheme. Both quantities depend non-linearly on the excess entropy. For a fixed chain length, the reduced diffusion coefficient and viscosity scale linearly with the excess entropy. An empirical reduction to a chain length-independent master curve is accessible for both dynamic quantities. The Dzugutov scheme predicts an increased value of the scaled diffusion coefficient with increasing chain length which contrasts physical expectations. The origin of this trend can be traced back to the density dependence of the scaling factors. This finding has not been observed previously for Lennard-Jones chain systems (Macromolecules, 2013, 46, 8710-8723). Thus, it limits the applicability of the Dzugutov approach to polymers. In connection with diffusion coefficients and viscosities, the Rosenfeld scaling law appears to be of higher quality than the Dzugutov approach. An empirical excess entropy scaling is also proposed which leads to a chain length-independent correlation. It is expected to be valid for polymers in the Rouse regime.
NASA Astrophysics Data System (ADS)
Kathpalia, B.; Tan, D.; Stern, I.; Erturk, A.
2018-01-01
It is well known that plucking-based frequency up-conversion can enhance the power output in piezoelectric energy harvesting by enabling cyclic free vibration at the fundamental bending mode of the harvester even for very low excitation frequencies. In this work, we present a geometrically nonlinear plucking-based framework for frequency up-conversion in piezoelectric energy harvesting under quasistatic excitations associated with low-frequency stimuli such as walking and similar rigid body motions. Axial shortening of the plectrum is essential to enable plucking excitation, which requires a nonlinear framework relating the plectrum parameters (e.g. overlap length between the plectrum and harvester) to the overall electrical power output. Von Kármán-type geometrically nonlinear deformation of the flexible plectrum cantilever is employed to relate the overlap length between the flexible (nonlinear) plectrum and the stiff (linear) harvester to the transverse quasistatic tip displacement of the plectrum, and thereby the tip load on the linear harvester in each plucking cycle. By combining the nonlinear plectrum mechanics and linear harvester dynamics with two-way electromechanical coupling, the electrical power output is obtained directly in terms of the overlap length. Experimental case studies and validations are presented for various overlap lengths and a set of electrical load resistance values. Further analysis results are reported regarding the combined effects of plectrum thickness and overlap length on the plucking force and harvested power output. The experimentally validated nonlinear plectrum-linear harvester framework proposed herein can be employed to design and optimize frequency up-conversion by properly choosing the plectrum parameters (geometry, material, overlap length, etc) as well as the harvester parameters.
NASA Astrophysics Data System (ADS)
Chaujar, Rishu; Kaur, Ravneet; Saxena, Manoj; Gupta, Mridula; Gupta, R. S.
2008-08-01
The distortion and linearity behaviour of MOSFETs is imperative for low-noise applications and RFICs design. In this paper, an extensive study on the RF-distortion and linearity behaviour of Laterally Amalgamated DUal Material GAte Concave (L-DUMGAC) MOSFET is performed and the influence of technology variations such as gate length, negative junction depth (NJD), substrate bias, drain bias and gate material workfunction is explored using ATLAS device simulator. Simulation results reveal that L-DUMGAC MOSFET significantly enhances the linearity and intermodulation distortion performance in terms of figure of merit (FOM) metrics: V, V, IIP3, IMD3 and higher order transconductance coefficients: gm1, gm2, gm3, proving its efficacy for RFIC design. The work, thus, optimize the device's bias point for RFICs with higher efficiency and better linearity performance.
Expansion in chickpea (Cicer arietinum L.) seed during soaking and cooking
NASA Astrophysics Data System (ADS)
Sayar, Sedat; Turhan, Mahir; Köksel, Hamit
2016-01-01
The linear and volumetric expansion of chickpea seeds during water absorption at 20, 30, 50, 70, 85 and 100°C was studied. Length, width and thickness of chickpea seeds linearly increased with the increase in moisture content at all temperatures studied, where the greatest increase was found in length. Two different mathematical approaches were used for the determination of the expansion coefficients. The plots of the both linear and volumetric expansion coefficients versus temperature exhibited two linear lines, the first one was through 20, 30 and 50ºC and the second one was trough 70, 85 and 100ºC. The crossing point (58ºC) of these lines was very close to the gelatinisation temperature (60ºC) of chickpea starch.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahl, W.K.
1997-03-01
The paper describes a study which attempted to extrapolate meaningful elastic-plastic fracture toughness data from flexure tests of a chemical vapor-infiltrated SiC/Nicalon fiber-reinforced ceramic matrix composite. Fibers in the fabricated composites were pre-coated with pyrolytic carbon to varying thicknesses. In the tests, crack length was not measured and the study employed an estimate procedure, previously used successfully for ductile metals, to derive J-R curve information. Results are presented in normalized load vs. normalized displacements and comparative J{sub Ic} behavior as a function of fiber precoating thickness.
Comparison of Linear Induction Motor Theories for the LIMRV and TLRV Motors
DOT National Transportation Integrated Search
1978-01-01
The Oberretl, Yamamura, and Mosebach theories of the linear induction motor are described and also applied to predict performance characteristics of the TLRV & LIMRV linear induction motors. The effect of finite motor width and length on performance ...
Garcia, Mariano; Saatchi, Sassan; Casas, Angeles; Koltunov, Alexander; Ustin, Susan; Ramirez, Carlos; Garcia-Gutierrez, Jorge; Balzter, Heiko
2017-02-01
Quantifying biomass consumption and carbon release is critical to understanding the role of fires in the carbon cycle and air quality. We present a methodology to estimate the biomass consumed and the carbon released by the California Rim fire by integrating postfire airborne LiDAR and multitemporal Landsat Operational Land Imager (OLI) imagery. First, a support vector regression (SVR) model was trained to estimate the aboveground biomass (AGB) from LiDAR-derived metrics over the unburned area. The selected model estimated AGB with an R 2 of 0.82 and RMSE of 59.98 Mg/ha. Second, LiDAR-based biomass estimates were extrapolated to the entire area before and after the fire, using Landsat OLI reflectance bands, Normalized Difference Infrared Index, and the elevation derived from LiDAR data. The extrapolation was performed using SVR models that resulted in R 2 of 0.73 and 0.79 and RMSE of 87.18 (Mg/ha) and 75.43 (Mg/ha) for the postfire and prefire images, respectively. After removing bias from the AGB extrapolations using a linear relationship between estimated and observed values, we estimated the biomass consumption from postfire LiDAR and prefire Landsat maps to be 6.58 ± 0.03 Tg (10 12 g), which translate into 12.06 ± 0.06 Tg CO2 e released to the atmosphere, equivalent to the annual emissions of 2.57 million cars.
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development
Burt, T.; Yoshida, K.; Lappin, G.; ...
2016-02-26
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications andmore » design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. Lastly, all phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.« less
Microdosing and Other Phase 0 Clinical Trials: Facilitating Translation in Drug Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burt, T.; Yoshida, K.; Lappin, G.
A number of drivers and developments suggest that microdosing and other phase 0 applications will experience increased utilization in the near-to-medium future. Increasing costs of drug development and ethical concerns about the risks of exposing humans and animals to novel chemical entities are important drivers in favor of these approaches, and can be expected only to increase in their relevance. An increasing body of research supports the validity of extrapolation from the limited drug exposure of phase 0 approaches to the full, therapeutic exposure, with modeling and simulations capable of extrapolating even non-linear scenarios. An increasing number of applications andmore » design options demonstrate the versatility and flexibility these approaches offer to drug developers including the study of PK, bioavailability, DDI, and mechanistic PD effects. PET microdosing allows study of target localization, PK and receptor binding and occupancy, while Intra-Target Microdosing (ITM) allows study of local therapeutic-level acute PD coupled with systemic microdose-level exposure. Applications in vulnerable populations and extreme environments are attractive due to the unique risks of pharmacotherapy and increasing unmet healthcare needs. Lastly, all phase 0 approaches depend on the validity of extrapolation from the limited-exposure scenario to the full exposure of therapeutic intent, but in the final analysis the potential for controlled human data to reduce uncertainty about drug properties is bound to be a valuable addition to the drug development process.« less
Saatchi, Sassan; Casas, Angeles; Koltunov, Alexander; Ustin, Susan; Ramirez, Carlos; Garcia‐Gutierrez, Jorge; Balzter, Heiko
2017-01-01
Abstract Quantifying biomass consumption and carbon release is critical to understanding the role of fires in the carbon cycle and air quality. We present a methodology to estimate the biomass consumed and the carbon released by the California Rim fire by integrating postfire airborne LiDAR and multitemporal Landsat Operational Land Imager (OLI) imagery. First, a support vector regression (SVR) model was trained to estimate the aboveground biomass (AGB) from LiDAR‐derived metrics over the unburned area. The selected model estimated AGB with an R 2 of 0.82 and RMSE of 59.98 Mg/ha. Second, LiDAR‐based biomass estimates were extrapolated to the entire area before and after the fire, using Landsat OLI reflectance bands, Normalized Difference Infrared Index, and the elevation derived from LiDAR data. The extrapolation was performed using SVR models that resulted in R 2 of 0.73 and 0.79 and RMSE of 87.18 (Mg/ha) and 75.43 (Mg/ha) for the postfire and prefire images, respectively. After removing bias from the AGB extrapolations using a linear relationship between estimated and observed values, we estimated the biomass consumption from postfire LiDAR and prefire Landsat maps to be 6.58 ± 0.03 Tg (1012 g), which translate into 12.06 ± 0.06 Tg CO2e released to the atmosphere, equivalent to the annual emissions of 2.57 million cars. PMID:28405539
NASA Technical Reports Server (NTRS)
Clark, William S.; Hall, Kenneth C.
1994-01-01
A linearized Euler solver for calculating unsteady flows in turbomachinery blade rows due to both incident gusts and blade motion is presented. The model accounts for blade loading, blade geometry, shock motion, and wake motion. Assuming that the unsteadiness in the flow is small relative to the nonlinear mean solution, the unsteady Euler equations can be linearized about the mean flow. This yields a set of linear variable coefficient equations that describe the small amplitude harmonic motion of the fluid. These linear equations are then discretized on a computational grid and solved using standard numerical techniques. For transonic flows, however, one must use a linear discretization which is a conservative linearization of the non-linear discretized Euler equations to ensure that shock impulse loads are accurately captured. Other important features of this analysis include a continuously deforming grid which eliminates extrapolation errors and hence, increases accuracy, and a new numerically exact, nonreflecting far-field boundary condition treatment based on an eigenanalysis of the discretized equations. Computational results are presented which demonstrate the computational accuracy and efficiency of the method and demonstrate the effectiveness of the deforming grid, far-field nonreflecting boundary conditions, and shock capturing techniques. A comparison of the present unsteady flow predictions to other numerical, semi-analytical, and experimental methods shows excellent agreement. In addition, the linearized Euler method presented requires one or two orders-of-magnitude less computational time than traditional time marching techniques making the present method a viable design tool for aeroelastic analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bogen, K T
2007-01-30
As reflected in the 2005 USEPA Guidelines for Cancer Risk Assessment, some chemical carcinogens may have a site-specific mode of action (MOA) that is dual, involving mutation in addition to cell-killing induced hyperplasia. Although genotoxicity may contribute to increased risk at all doses, the Guidelines imply that for dual MOA (DMOA) carcinogens, judgment be used to compare and assess results obtained using separate ''linear'' (genotoxic) vs. ''nonlinear'' (nongenotoxic) approaches to low-level risk extrapolation. However, the Guidelines allow the latter approach to be used only when evidence is sufficient to parameterize a biologically based model that reliably extrapolates risk to lowmore » levels of concern. The Guidelines thus effectively prevent MOA uncertainty from being characterized and addressed when data are insufficient to parameterize such a model, but otherwise clearly support a DMOA. A bounding factor approach--similar to that used in reference dose procedures for classic toxicity endpoints--can address MOA uncertainty in a way that avoids explicit modeling of low-dose risk as a function of administered or internal dose. Even when a ''nonlinear'' toxicokinetic model cannot be fully validated, implications of DMOA uncertainty on low-dose risk may be bounded with reasonable confidence when target tumor types happen to be extremely rare. This concept was illustrated for the rodent carcinogen naphthalene. Bioassay data, supplemental toxicokinetic data, and related physiologically based pharmacokinetic and 2-stage stochastic carcinogenesis modeling results all clearly indicate that naphthalene is a DMOA carcinogen. Plausibility bounds on rat-tumor-type specific DMOA-related uncertainty were obtained using a 2-stage model adapted to reflect the empirical link between genotoxic and cytotoxic effects of the most potent identified genotoxic naphthalene metabolites, 1,2- and 1,4-naphthoquinone. Resulting bounds each provided the basis for a corresponding ''uncertainty'' factor <1 appropriate to apply to estimates of naphthalene risk obtained by linear extrapolation under a default genotoxic MOA assumption. This procedure is proposed as scientifically credible method to address MOA uncertainty for DMOA carcinogens.« less
The Linear Bicharacteristic Scheme for Computational Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Chan, Siew-Loong
2000-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on electromagnetic wave propagation problems. This paper extends the Linear Bicharacteristic Scheme for computational electromagnetics to treat lossy dielectric and magnetic materials and perfect electrical conductors. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media, and treatment of perfect electrical conductors (PECs) are shown to follow directly in the limit of high conductivity. Heterogeneous media are treated through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for one-dimensional model problems on both uniform and nonuniform grids, and the FDTD algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has approximately one-third the phase velocity error. The LBS is also more accurate on nonuniform grids.
A Two-Dimensional Linear Bicharacteristic Scheme for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.
2002-01-01
The upwind leapfrog or Linear Bicharacteristic Scheme (LBS) has previously been implemented and demonstrated on one-dimensional electromagnetic wave propagation problems. This memorandum extends the Linear Bicharacteristic Scheme for computational electromagnetics to model lossy dielectric and magnetic materials and perfect electrical conductors in two dimensions. This is accomplished by proper implementation of the LBS for homogeneous lossy dielectric and magnetic media and for perfect electrical conductors. Both the Transverse Electric and Transverse Magnetic polarizations are considered. Computational requirements and a Fourier analysis are also discussed. Heterogeneous media are modeled through implementation of surface boundary conditions and no special extrapolations or interpolations at dielectric material boundaries are required. Results are presented for two-dimensional model problems on uniform grids, and the Finite Difference Time Domain (FDTD) algorithm is chosen as a convenient reference algorithm for comparison. The results demonstrate that the two-dimensional explicit LBS is a dissipation-free, second-order accurate algorithm which uses a smaller stencil than the FDTD algorithm, yet it has less phase velocity error.
On the minimum quantum requirement of photosynthesis.
Zeinalov, Yuzeir
2009-01-01
An analysis of the shape of photosynthetic light curves is presented and the existence of the initial non-linear part is shown as a consequence of the operation of the non-cooperative (Kok's) mechanism of oxygen evolution or the effect of dark respiration. The effect of nonlinearity on the quantum efficiency (yield) and quantum requirement is reconsidered. The essential conclusions are: 1) The non-linearity of the light curves cannot be compensated using suspensions of algae or chloroplasts with high (>1.0) optical density or absorbance. 2) The values of the maxima of the quantum efficiency curves or the values of the minima of the quantum requirement curves cannot be used for estimation of the exact value of the maximum quantum efficiency and the minimum quantum requirement. The estimation of the maximum quantum efficiency or the minimum quantum requirement should be performed only after extrapolation of the linear part at higher light intensities of the quantum requirement curves to "0" light intensity.
1995 second modulator-klystron workshop: A modulator-klystron workshop for future linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-01
This second workshop examined the present state of modulator design and attempted an extrapolation for future electron-positron linear colliders. These colliders are currently viewed as multikilometer-long accelerators consisting of a thousand or more RF sources with 500 to 1,000, or more, pulsed power systems. The workshop opened with two introductory talks that presented the current approaches to designing these linear colliders, the anticipated RF sources, and the design constraints for pulse power. The cost of main AC power is a major economic consideration for a future collider, consequently the workshop investigated efficient modulator designs. Techniques that effectively apply the artmore » of power conversion, from the AC mains to the RF output, and specifically, designs that generate output pulses with very fast rise times as compared to the flattop. There were six sessions that involved one or more presentations based on problems specific to the design and production of thousands of modulator-klystron stations, followed by discussion and debate on the material.« less
NASA Technical Reports Server (NTRS)
Matthews, Clarence W
1953-01-01
An analysis is made of the effects of compressibility on the pressure coefficients about several bodies of revolution by comparing experimentally determined pressure coefficients with corresponding pressure coefficients calculated by the use of the linearized equations of compressible flow. The results show that the theoretical methods predict the subsonic pressure-coefficient changes over the central part of the body but do not predict the pressure-coefficient changes near the nose. Extrapolation of the linearized subsonic theory into the mixed subsonic-supersonic flow region fails to predict a rearward movement of the negative pressure-coefficient peak which occurs after the critical stream Mach number has been attained. Two equations developed from a consideration of the subsonic compressible flow about a prolate spheroid are shown to predict, approximately, the change with Mach number of the subsonic pressure coefficients for regular bodies of revolution of fineness ratio 6 or greater.
Philip M. Wargo
1978-01-01
Correlations of leaf area with length, width, and length times width of leaves of black oak, white oak, and sugar maple were determined to see if length and/or width could be used as accurate estimators of leaf area. The correlation of length times width with leaf area was high (r > + .95) for all three species. The linear equation Y = a + bX, where X = length times...
Theoretical studies of solar oscillations
NASA Technical Reports Server (NTRS)
Goldreich, P.
1980-01-01
Possible sources for the excitation of the solar 5 minute oscillations were investigated and a linear non-adiabatic stability code was applied to a preliminary study of the solar g-modes with periods near 160 minutes. Although no definitive conclusions concerning the excitation of these modes were reached, the excitation of the 5 minute oscillations by turbulent stresses in the convection zone remains a viable possibility. Theoretical calculations do not offer much support for the identification of the 160 minute global solar oscillation (reported by several independent observers) as a solar g-mode. A significant advance was made in attempting to reconcile mixing-length theory with the results of the calculations of linearly unstable normal modes. Calculations show that in a convective envelope prepared according to mixing length theory, the only linearly unstable modes are those which correspond to the turbulent eddies which are the basic element of the heuristic mixing length theory.
NASA Astrophysics Data System (ADS)
Samsonov, V. M.; Vasilyev, S. A.; Bembel, A. G.
2016-08-01
The generalized Thomson formula T m = T m (∞) (1-δ) R for the melting point of small objects T m has been analyzed from the viewpoint of the thermodynamic theory of similarity, where R is the radius of the particle and T m (∞) is the melting point of the corresponding large crystal. According to this formula, the parameter δ corresponds to the value of the radius of the T m ( R -1) particle obtained by the linear extrapolation of the dependence to the melting point of the particle equal to 0 K. It has been shown that δ = αδ0, where α is the factor of the asphericity of the particle (shape factor). In turn, the redefined characteristic length δ0 is expressed through the interphase tension σ sl at the boundary of the crystal with its own melt, the specific volume of the solid phase v s and the macroscopic value of the heat of fusion λ∞:δ0 = 2σ sl v s /λ∞. If we go from the reduced radius of the particle R/δ to the redefined reduced radius R/ r 1 or R/ d, where r 1 is the radius of the first coordination shell and d ≈ r 1 is the effective atomic diameter, then the simplex δ/ r 1 or δ/ d will play the role of the characteristic criterion of thermodynamic similarity. At a given value of α, this role will be played by the simplex Estimates of the parameters δ0 and δ0/ d have been carried out for ten metals with different lattice types. It has been shown that the values of the characteristic length δ0 are close to 1 nm and that the simplex δ0/ d is close to unity. In turn, the calculated values of the parameter δ agree on the order of magnitude with existing experimental data.
Pseudogap temperature T* of cuprate superconductors from the Nernst effect
NASA Astrophysics Data System (ADS)
Cyr-Choinière, O.; Daou, R.; Laliberté, F.; Collignon, C.; Badoux, S.; LeBoeuf, D.; Chang, J.; Ramshaw, B. J.; Bonn, D. A.; Hardy, W. N.; Liang, R.; Yan, J.-Q.; Cheng, J.-G.; Zhou, J.-S.; Goodenough, J. B.; Pyon, S.; Takayama, T.; Takagi, H.; Doiron-Leyraud, N.; Taillefer, Louis
2018-02-01
We use the Nernst effect to delineate the boundary of the pseudogap phase in the temperature-doping phase diagram of hole-doped cuprate superconductors. New data for the Nernst coefficient ν (T ) of YBa2Cu3Oy (YBCO), La1.8 -xEu0.2SrxCuO4 (Eu-LSCO), and La1.6 -xNd0.4SrxCuO4 (Nd-LSCO) are presented and compared with previously published data on YBCO, Eu-LSCO, Nd-LSCO, and La2 -xSrxCuO4 (LSCO). The temperature Tν at which ν /T deviates from its high-temperature linear behavior is found to coincide with the temperature at which the resistivity ρ (T ) deviates from its linear-T dependence, which we take as the definition of the pseudogap temperature T★—in agreement with the temperature at which the antinodal spectral gap detected in angle-resolved photoemission spectroscopy (ARPES) opens. We track T★ as a function of doping and find that it decreases linearly vs p in all four materials, having the same value in the three LSCO-based cuprates, irrespective of their different crystal structures. At low p ,T★ is higher than the onset temperature of the various orders observed in underdoped cuprates, suggesting that these orders are secondary instabilities of the pseudogap phase. A linear extrapolation of T★(p ) to p =0 yields T★(p →0 ) ≃TN (0), the Néel temperature for the onset of antiferromagnetic order at p =0 , suggesting that there is a link between pseudogap and antiferromagnetism. With increasing p ,T★(p ) extrapolates linearly to zero at p ≃pc 2 , the critical doping below which superconductivity emerges at high doping, suggesting that the conditions which favor pseudogap formation also favor pairing. We also use the Nernst effect to investigate how far superconducting fluctuations extend above the critical temperature Tc, as a function of doping, and find that a narrow fluctuation regime tracks Tc, and not T★. This confirms that the pseudogap phase is not a form of precursor superconductivity, and fluctuations in the phase of the superconducting order parameter are not what causes Tc to fall on the underdoped side of the Tc dome.
Dynamic Response of Multiphase Porous Media
1993-06-16
34"--OIct 5oct, tf1 2fOct, a f s,t,R Linearly Set Parameters Interpolate s = 1.03 from Model Fit s,t,R t = R = 0.0 Parameters Figure 3.3 Extrapolation...nitrogen. To expedite the testing, the system was equipped with solenoid operated valves so that the tests could be conducted by a single operator...incident bar. Figure 6.6 shows the incident bar entering the pressure vessel that contains the test specimen. The hose and valves are for filling and 6-5 I
Density functional Theory Based Generalized Effective Fragment Potential Method (Postprint)
2014-07-01
is acceptable for other applications) leads to induced dipole moments within 10−6 to 10−7 au of the precise values . Thus, the applied field of 10−4...noncovalent interactions. The water-benzene clusters17 and WATER2711 reference values were also ob- tained at the CCSD(T)/CBS level, except for the clusters...with n = 20,42 where MP2/CBS was used. The n-alkane dimers18 benchmark values were CCSD(T)/CBS for ethane to butane and a linear extrapolation method
Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals
NASA Astrophysics Data System (ADS)
Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling
2018-04-01
An efficient numerical algorithm is the key for accurate evaluation of density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method which was limited in specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattices and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute DOS of various gyroid photonic crystals of topological degeneracies.
Recovery of compacted soils in Mojave Desert ghost towns.
Webb, R.H.; Steiger, J.W.; Wilshire, H.G.
1986-01-01
Residual compaction of soils was measured at seven sites in five Mojave Desert ghost towns. Soils in these Death Valley National Monument townsites were compacted by vehicles, animals, and human trampling, and the townsites had been completely abandoned and the buildings removed for 64 to 75 yr. Recovery times extrapolated using a linear recovery model ranged from 80 to 140 yr and averaged 100 yr. The recovery times were related to elevation, suggesting freeze-thaw loosening as an important factor in ameliorating soil compaction in the Mojave Desert. -from Authors
NASA Technical Reports Server (NTRS)
Cuddihy, Edward F. (Inventor); Willis, Paul B. (Inventor)
1989-01-01
A method of predicting aging of polymers operates by heating a polymer in the outdoors to an elevated temperature until a change of property is induced. The test is conducted at a plurality of temperatures to establish a linear Arrhenius plot which is extrapolated to predict the induction period for failure of the polymer at ambient temperature. An Outdoor Photo Thermal Aging Reactor (OPTAR) is also described including a heatable platen for receiving a sheet of polymer, means to heat the platen, and switching means such as a photoelectric switch for turning off the heater during dark periods.
NASA Technical Reports Server (NTRS)
Cuddihy, Edward F. (Inventor); Willis, Paul B. (Inventor)
1990-01-01
A method of predicting aging of polymers operates by heating a polymer in the outdoors to an elevated temperature until a change of property is induced. The test is conducted at a plurality of temperatures to establish a linear Arrhenius plot which is extrapolated to predict the induction period for failure of the polymer at ambient temperature. An Outdoor Photo Thermal Aging Reactor (OPTAR) is also described including a heatable platen for receiving a sheet of polymer, means to heat the platen and switching means such as a photoelectric switch for turning off the heater during dark periods.
Estimating the size of an open population using sparse capture-recapture data.
Huggins, Richard; Stoklosa, Jakub; Roach, Cameron; Yip, Paul
2018-03-01
Sparse capture-recapture data from open populations are difficult to analyze using currently available frequentist statistical methods. However, in closed capture-recapture experiments, the Chao sparse estimator (Chao, 1989, Biometrics 45, 427-438) may be used to estimate population sizes when there are few recaptures. Here, we extend the Chao (1989) closed population size estimator to the open population setting by using linear regression and extrapolation techniques. We conduct a small simulation study and apply the models to several sparse capture-recapture data sets. © 2017, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Beaufort, Aurélien; Lamouroux, Nicolas; Pella, Hervé; Datry, Thibault; Sauquet, Eric
2018-05-01
Headwater streams represent a substantial proportion of river systems and many of them have intermittent flows due to their upstream position in the network. These intermittent rivers and ephemeral streams have recently seen a marked increase in interest, especially to assess the impact of drying on aquatic ecosystems. The objective of this paper is to quantify how discrete (in space and time) field observations of flow intermittence help to extrapolate over time the daily probability of drying (defined at the regional scale). Two empirical models based on linear or logistic regressions have been developed to predict the daily probability of intermittence at the regional scale across France. Explanatory variables were derived from available daily discharge and groundwater-level data of a dense gauging/piezometer network, and models were calibrated using discrete series of field observations of flow intermittence. The robustness of the models was tested using an independent, dense regional dataset of intermittence observations and observations of the year 2017 excluded from the calibration. The resulting models were used to extrapolate the daily regional probability of drying in France: (i) over the period 2011-2017 to identify the regions most affected by flow intermittence; (ii) over the period 1989-2017, using a reduced input dataset, to analyse temporal variability of flow intermittence at the national level. The two empirical regression models performed equally well between 2011 and 2017. The accuracy of predictions depended on the number of continuous gauging/piezometer stations and intermittence observations available to calibrate the regressions. Regions with the highest performance were located in sedimentary plains, where the monitoring network was dense and where the regional probability of drying was the highest. Conversely, the worst performances were obtained in mountainous regions. Finally, temporal projections (1989-2016) suggested the highest probabilities of intermittence (> 35 %) in 1989-1991, 2003 and 2005. A high density of intermittence observations improved the information provided by gauging stations and piezometers to extrapolate the temporal variability of intermittent rivers and ephemeral streams.
NASA Astrophysics Data System (ADS)
Rehner, Philipp; Gross, Joachim
2018-04-01
The curvature dependence of interfacial properties has been discussed extensively over the last decades. After Tolman published his work on the effect of droplet size on surface tension, where he introduced the interfacial property now known as Tolman length, several studies were performed with varying results. In recent years, however, some consensus has been reached about the sign and magnitude of the Tolman length of simple model fluids. In this work, we re-examine Tolman's equation and how it relates the Tolman length to the surface tension and we apply non-local classical density functional theory (DFT) based on the perturbed chain statistical associating fluid theory (PC-SAFT) to characterize the curvature dependence of the surface tension of real fluids as well as mixtures. In order to obtain a simple expression for the surface tension, we use a first-order expansion of the Tolman length as a function of droplet radius Rs, as δ(Rs) = δ0 + δ1/Rs, and subsequently expand Tolman's integral equation for the surface tension, whereby a second-order expansion is found to give excellent agreement with the DFT result. The radius-dependence of the surface tension of increasingly non-spherical substances is studied for n-alkanes, up to icosane. The infinite diameter Tolman length is approximately δ0 = -0.38 Å at low temperatures. For more strongly non-spherical substances and for temperatures approaching the critical point, however, the infinite diameter Tolman lengths δ0 turn positive. For mixtures, even if they contain similar molecules, the extrapolated Tolman length behaves strongly non-ideal, implying a qualitative change of the curvature behavior of the surface tension of the mixture.
Rehner, Philipp; Gross, Joachim
2018-04-28
The curvature dependence of interfacial properties has been discussed extensively over the last decades. After Tolman published his work on the effect of droplet size on surface tension, where he introduced the interfacial property now known as Tolman length, several studies were performed with varying results. In recent years, however, some consensus has been reached about the sign and magnitude of the Tolman length of simple model fluids. In this work, we re-examine Tolman's equation and how it relates the Tolman length to the surface tension and we apply non-local classical density functional theory (DFT) based on the perturbed chain statistical associating fluid theory (PC-SAFT) to characterize the curvature dependence of the surface tension of real fluids as well as mixtures. In order to obtain a simple expression for the surface tension, we use a first-order expansion of the Tolman length as a function of droplet radius R s , as δ(R s ) = δ 0 + δ 1 /R s , and subsequently expand Tolman's integral equation for the surface tension, whereby a second-order expansion is found to give excellent agreement with the DFT result. The radius-dependence of the surface tension of increasingly non-spherical substances is studied for n-alkanes, up to icosane. The infinite diameter Tolman length is approximately δ 0 = -0.38 Å at low temperatures. For more strongly non-spherical substances and for temperatures approaching the critical point, however, the infinite diameter Tolman lengths δ 0 turn positive. For mixtures, even if they contain similar molecules, the extrapolated Tolman length behaves strongly non-ideal, implying a qualitative change of the curvature behavior of the surface tension of the mixture.
Length-dependent thermal transport in one-dimensional self-assembly of planar π-conjugated molecules
NASA Astrophysics Data System (ADS)
Tang, Hao; Xiong, Yucheng; Zu, Fengshuo; Zhao, Yang; Wang, Xiaomeng; Fu, Qiang; Jie, Jiansheng; Yang, Juekuan; Xu, Dongyan
2016-06-01
This work reports a thermal transport study in quasi-one-dimensional organic nanostructures self-assembled from conjugated planar molecules via π-π interactions. Thermal resistances of single crystalline copper phthalocyanine (CuPc) and perylenetetracarboxylic diimide (PTCDI) nanoribbons are measured via a suspended thermal bridge method. We experimentally observed the deviation from the linear length dependence for the thermal resistance of single crystalline β-phase CuPc nanoribbons, indicating possible subdiffusion thermal transport. Interestingly, a gradual transition to the linear length dependence is observed with the increase of the lateral dimensions of CuPc nanoribbons. The measured thermal resistance of single crystalline CuPc nanoribbons shows an increasing trend with temperature. However, the trend of temperature dependence of thermal resistance is reversed after electron irradiation, i.e., decreasing with temperature, indicating that the single crystalline CuPc nanoribbons become `amorphous'. Similar behavior is also observed for PTCDI nanoribbons after electron irradiation, proving that the electron beam can induce amorphization of single crystalline self-assembled nanostructures of planar π-conjugated molecules. The measured thermal resistance of the `amorphous' CuPc nanoribbon demonstrates a roughly linear dependence on the nanoribbon length, suggesting that normal diffusion dominates thermal transport.This work reports a thermal transport study in quasi-one-dimensional organic nanostructures self-assembled from conjugated planar molecules via π-π interactions. Thermal resistances of single crystalline copper phthalocyanine (CuPc) and perylenetetracarboxylic diimide (PTCDI) nanoribbons are measured via a suspended thermal bridge method. We experimentally observed the deviation from the linear length dependence for the thermal resistance of single crystalline β-phase CuPc nanoribbons, indicating possible subdiffusion thermal transport. Interestingly, a gradual transition to the linear length dependence is observed with the increase of the lateral dimensions of CuPc nanoribbons. The measured thermal resistance of single crystalline CuPc nanoribbons shows an increasing trend with temperature. However, the trend of temperature dependence of thermal resistance is reversed after electron irradiation, i.e., decreasing with temperature, indicating that the single crystalline CuPc nanoribbons become `amorphous'. Similar behavior is also observed for PTCDI nanoribbons after electron irradiation, proving that the electron beam can induce amorphization of single crystalline self-assembled nanostructures of planar π-conjugated molecules. The measured thermal resistance of the `amorphous' CuPc nanoribbon demonstrates a roughly linear dependence on the nanoribbon length, suggesting that normal diffusion dominates thermal transport. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr09043a
AZTEC. Parallel Iterative method Software for Solving Linear Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, S.; Shadid, J.; Tuminaro, R.
1995-07-01
AZTEC is an interactive library that greatly simplifies the parrallelization process when solving the linear systems of equations Ax=b where A is a user supplied n X n sparse matrix, b is a user supplied vector of length n and x is a vector of length n to be computed. AZTEC is intended as a software tool for users who want to avoid cumbersome parallel programming details but who have large sparse linear systems which require an efficiently utilized parallel processing system. A collection of data transformation tools are provided that allow for easy creation of distributed sparse unstructured matricesmore » for parallel solutions.« less
Mathematical modelling of the growth of human fetus anatomical structures.
Dudek, Krzysztof; Kędzia, Wojciech; Kędzia, Emilia; Kędzia, Alicja; Derkowski, Wojciech
2017-09-01
The goal of this study was to present a procedure that would enable mathematical analysis of the increase of linear sizes of human anatomical structures, estimate mathematical model parameters and evaluate their adequacy. Section material consisted of 67 foetuses-rectus abdominis muscle and 75 foetuses- biceps femoris muscle. The following methods were incorporated to the study: preparation and anthropologic methods, image digital acquisition, Image J computer system measurements and statistical analysis method. We used an anthropologic method based on age determination with the use of crown-rump length-CRL (V-TUB) by Scammon and Calkins. The choice of mathematical function should be based on a real course of the curve presenting growth of anatomical structure linear size Ύ in subsequent weeks t of pregnancy. Size changes can be described with a segmental-linear model or one-function model with accuracy adequate enough for clinical purposes. The interdependence of size-age is described with many functions. However, the following functions are most often considered: linear, polynomial, spline, logarithmic, power, exponential, power-exponential, log-logistic I and II, Gompertz's I and II and von Bertalanffy's function. With the use of the procedures described above, mathematical models parameters were assessed for V-PL (the total length of body) and CRL body length increases, rectus abdominis total length h, its segments hI, hII, hIII, hIV, as well as biceps femoris length and width of long head (LHL and LHW) and of short head (SHL and SHW). The best adjustments to measurement results were observed in the exponential and Gompertz's models.
Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.
2016-01-01
In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/ startup. These models are based on derived steady state operating points with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability, and transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior as compared to generic sub-idle engine performance that allows the engine to operate continuously and stably from shutdown to full power.
Extending the Operational Envelope of a Turbofan Engine Simulation into the Sub-Idle Region
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes Walter; Hamley, Andrew J.; Guo, Ten-Huei; Litt, Jonathan S.
2016-01-01
In many non-linear gas turbine simulations, operation in the sub-idle region can lead to model instability. This paper lays out a method for extending the operational envelope of a map based gas turbine simulation to include the sub-idle region. This method develops a multi-simulation solution where the baseline component maps are extrapolated below the idle level and an alternate model is developed to serve as a safety net when the baseline model becomes unstable or unreliable. Sub-idle model development takes place in two distinct operational areas, windmilling/shutdown and purge/cranking/startup. These models are based on derived steady state operating points with transient values extrapolated between initial (known) and final (assumed) states. Model transitioning logic is developed to predict baseline model sub-idle instability, and transition smoothly and stably to the backup sub-idle model. Results from the simulation show a realistic approximation of sub-idle behavior as compared to generic sub-idle engine performance that allows the engine to operate continuously and stably from shutdown to full power.
Swenberg, J A; Richardson, F C; Boucheron, J A; Deal, F H; Belinsky, S A; Charbonneau, M; Short, B G
1987-12-01
Recent investigations on mechanism of carcinogenesis have demonstrated important quantitative relationships between the induction of neoplasia, the molecular dose of promutagenic DNA adducts and their efficiency for causing base-pair mismatch, and the extent of cell proliferation in target organ. These factors are involved in the multistage process of carcinogenesis, including initiation, promotion, and progression. The molecular dose of DNA adducts can exhibit supralinear, linear, or sublinear relationships to external dose due to differences in absorption, biotransformation, and DNA repair at high versus low doses. In contrast, increased cell proliferation is a common phenomena that is associated with exposures to relatively high doses of toxic chemicals. As such, it enhances the carcinogenic response at high doses, but has little effect at low doses. Since data on cell proliferation can be obtained for any exposure scenario and molecular dosimetry studies are beginning to emerge on selected chemical carcinogens, methods are needed so that these critical factors can be utilized in extrapolation from high to low doses and across species. The use of such information may provide a scientific basis for quantitative risk assessment.
Swenberg, J A; Richardson, F C; Boucheron, J A; Deal, F H; Belinsky, S A; Charbonneau, M; Short, B G
1987-01-01
Recent investigations on mechanism of carcinogenesis have demonstrated important quantitative relationships between the induction of neoplasia, the molecular dose of promutagenic DNA adducts and their efficiency for causing base-pair mismatch, and the extent of cell proliferation in target organ. These factors are involved in the multistage process of carcinogenesis, including initiation, promotion, and progression. The molecular dose of DNA adducts can exhibit supralinear, linear, or sublinear relationships to external dose due to differences in absorption, biotransformation, and DNA repair at high versus low doses. In contrast, increased cell proliferation is a common phenomena that is associated with exposures to relatively high doses of toxic chemicals. As such, it enhances the carcinogenic response at high doses, but has little effect at low doses. Since data on cell proliferation can be obtained for any exposure scenario and molecular dosimetry studies are beginning to emerge on selected chemical carcinogens, methods are needed so that these critical factors can be utilized in extrapolation from high to low doses and across species. The use of such information may provide a scientific basis for quantitative risk assessment. PMID:3447904
Goliomytis, Michael; Tsipouzian, Theofania; Hager-Theodorides, Ariadne L
2015-09-01
Pre-incubation egg storage is a necessity for the poultry industry. This study evaluated the effects of pre-incubation storage length of broiler eggs on hatchability, 1-day-old chick quality, subsequent performance, and immunocompetence. To this end, a total of 360 hatching eggs were stored for 4, 12, or 16 d prior to incubation. Hatchability and chick quality were assessed at hatch, and growth performance and immunocompetence parameters were assessed during a 35 d rearing period. Hatchability of set and fertile eggs, and embryonic mortality, were not affected by egg storage. On the contrary, 1-day-old chick BW and length were linearly negatively correlated with egg storage length (P-linear<0.05). Nevertheless, BW corrected for egg weight prior to setting was unaffected, and corrected chick length was positively affected by storage length. One-day-old chick Tona score, navel quality, and post-hatch growth performance (BW at 7 and 35 d, cumulative feed intake, and feed conversion ratio at 35 d) were unaffected by egg storage (P, P-linear>0.05). Lymphoid organ weights at 2 and 35 d, the titre of maternal anti-NDV antibodies, most of the thymocyte subpopulations defined by CD3, CD4, and CD8 cell surface expression in the thymus of 2-d-old chicks, cellular responses to the PHA skin test, humoral responses to primary SRBC, and NDV immunizations were also not influenced by length of storage (P, P-linear>0.05). On the contrary, the length of egg storage was found to negatively influence the abundance of CD3+CD4-CD8- thymocytes that represent the majority of γδ-T cells in the thymus of 2-day-old chicks, as well as the humoral response to booster NDV immunization of the birds. In brief, pre-incubation storage of broiler hatching eggs for up to 16 d did not affect most developmental and growth parameters investigated, except for BW and length at hatch. Egg storage was found to suppress some aspects of the immunocompetence of the birds, particularly aspects of acquired immunity. © 2015 Poultry Science Association Inc.
Effect of a Dielectric Overlay on a Linearly Tapered Slot Antenna Excited by a Coplanar Waveguide
NASA Technical Reports Server (NTRS)
Simons, Rainee N.; Lee, Richard Q.; Perl, Thomas D.; Silvestro, John
1993-01-01
The effect of a dielectric overlay on a linearly tapered slot antenna (LTSA) is studied. The LTSA under study has very wide bandwidth and excellent radiation patterns. A dielectric overlay improves the patterns and directivity of the antenna by increasing the electrical length and effective aperture of the antenna. A dielectric overlay can also be used to reduce the physical length of the antenna without compromising the pattern quality.
NASA Astrophysics Data System (ADS)
Malekzadeh Moghani, Mahdy; Khomami, Bamin
2016-01-01
Macromolecules with ionizable groups are ubiquitous in biological and synthetic systems. Due to the complex interaction between chain and electrostatic decorrelation lengths, both equilibrium properties and micro-mechanical response of dilute solutions of polyelectrolytes (PEs) are more complex than their neutral counterparts. In this work, the bead-rod micromechanical description of a chain is used to perform hi-fidelity Brownian dynamics simulation of dilute PE solutions to ascertain the self-similar equilibrium behavior of PE chains with various linear charge densities, scaling of the Kuhn step length (lE) with salt concentration cs and the force-extension behavior of the PE chain. In accord with earlier theoretical predictions, our results indicate that for a chain with n Kuhn segments, lE ˜ cs-0.5 as linear charge density approaches 1/n. Moreover, the constant force ensemble simulation results accurately predict the initial non-linear force-extension region of PE chain recently measured via single chain experiments. Finally, inspired by Cohen's extraction of Warner's force law from the inverse Langevin force law, a novel numerical scheme is developed to extract a new elastic force law for real chains from our discrete set of force-extension data similar to Padè expansion, which accurately depicts the initial non-linear region where the total Kuhn length is less than the thermal screening length.
Malekzadeh Moghani, Mahdy; Khomami, Bamin
2016-01-14
Macromolecules with ionizable groups are ubiquitous in biological and synthetic systems. Due to the complex interaction between chain and electrostatic decorrelation lengths, both equilibrium properties and micro-mechanical response of dilute solutions of polyelectrolytes (PEs) are more complex than their neutral counterparts. In this work, the bead-rod micromechanical description of a chain is used to perform hi-fidelity Brownian dynamics simulation of dilute PE solutions to ascertain the self-similar equilibrium behavior of PE chains with various linear charge densities, scaling of the Kuhn step length (lE) with salt concentration cs and the force-extension behavior of the PE chain. In accord with earlier theoretical predictions, our results indicate that for a chain with n Kuhn segments, lE ∼ cs (-0.5) as linear charge density approaches 1/n. Moreover, the constant force ensemble simulation results accurately predict the initial non-linear force-extension region of PE chain recently measured via single chain experiments. Finally, inspired by Cohen's extraction of Warner's force law from the inverse Langevin force law, a novel numerical scheme is developed to extract a new elastic force law for real chains from our discrete set of force-extension data similar to Padè expansion, which accurately depicts the initial non-linear region where the total Kuhn length is less than the thermal screening length.
Halsey, Lewis G
2013-06-01
The slope of the typically linear relationship between metabolic rate and walking speed represents the net cost of transport (NCOT). The extrapolated y-intercept is often greater than resting metabolic rate, thus representing a fixed cost associated with pedestrian transport including body maintenance costs. The full cause of the elevated y-intercept remains elusive and it could simply represent experimental stresses. The present literature-based study compares the mass-independent energetic cost of pedestrian locomotion in birds (excluding those with an upright posture, i.e. penguins), represented by the y-intercept, to a known predictor of cost of transport, hip height. Both phylogenetically informed and non-phylogenetically informed analyses were undertaken to determine if patterns of association between hip height, body mass, and the y-intercept are robust with respect to the method of analysis. Body mass and hip height were significant predictors of the y-intercept in the best phylogenetically-informed and non-phylogenetically informed models. Thus there is evidence that, in birds at least, the elevated y-intercept is a legitimate component of locomotion energy expenditure. Hip height is probably a good proxy of effective limb length and thus perhaps birds with greater hip heights have lower y-intercepts because their longer legs more efficiently accommodate body motion and/or because their limbs are more aligned with the ground reaction forces. Copyright © 2013 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Everett, Samantha
2010-10-01
A transmission curve experiment was carried out to measure the range of beta particles in aluminum in the health physics laboratory located on the campus of Texas Southern University. The transmission count rate through aluminum for varying radiation lengths was measured using beta particles emitted from a low activity (˜1 μCi) Sr-90 source. The count rate intensity was recorded using a Geiger Mueller tube (SGC N210/BNC) with an active volume of 61 cm^3 within a systematic detection accuracy of a few percent. We compared these data with a realistic simulation of the experimental setup using the Geant4 Monte Carlo toolkit (version 9.3). The purpose of this study was to benchmark our Monte Carlo for future experiments as part of a more comprehensive research program. Transmission curves were simulated based on the standard and low-energy electromagnetic physics models, and using the radioactive decay module for the electrons primary energy distribution. To ensure the validity of our measurements, linear extrapolation techniques were employed to determine the in-medium beta particle range from the measured data and was found to be 1.87 g/cm^2 (˜0.693 cm), in agreement with literature values. We found that the general shape of the measured data and simulated curves were comparable; however, a discrepancy in the relative count rates was observed. The origin of this disagreement is still under investigation.
A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques
NASA Astrophysics Data System (ADS)
Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.
Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural networks were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Servive (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of the least-squares (LS) extrapolation with a stochastic predition method, and 2) the combination of the discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of the LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs, and for the combined forecast methods, the mean prediction errors for 1 to about 70 days in the future are of the same order as those of the method used by the IERS RS/PC.
Cohen-Krausz, Sara; Cabahug, Pamela C; Trachtenberg, Shlomo
2011-07-08
Spiroplasmas belong to the class Mollicutes, representing the minimal, free-living, and self-replicating forms of life. Spiroplasmas are helical wall-less bacteria and the only ones known to swim by means of a linear motor (rather than the near-universal rotary bacterial motor). The linear motor follows the shortest path along the cell's helical membranal tube. The motor is composed of a flat monolayered ribbon of seven parallel fibrils and is believed to function in controlling cell helicity and motility through dynamic, coordinated, differential length changes in the fibrils. The latter cause local perturbations of helical symmetry, which are essential for net directional displacement in environments with a low Reynolds number. The underlying fibrils' core building block is a circular tetramer of the 59-kDa protein Fib. The fibrils' differential length changes are believed to be driven by molecular switching of Fib, leading consequently to axial ratio and length changes in tetrameric rings. Using cryo electron microscopy, diffractometry, single-particle analysis of isolated ribbons, and sequence analyses of Fib, we determined the overall molecular organization of the Fib monomer, tetramer, fibril, and linear motor of Spiroplasma melliferum BC3 that underlies cell geometry and motility. Fib appears to be a bidomained molecule, of which the N-terminal half is apparently a globular phosphorylase. By a combination of reversible rotation and diagonal shift of Fib monomers, the tetramer adopts either a cross-like nonhanded conformation or a ring-like handed conformation. The sense of Fib rotation may determine the handedness of the linear motor and, eventually, of the cell. A further change in the axial ratio of the ring-like tetramers controls fibril lengths and the consequent helical geometry. Analysis of tetramer quadrants from adjacent fibrils clearly demonstrates local differential fibril lengths. Copyright © 2011 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiegelmann, T.; Solanki, S. K.; Barthol, P.
Magneto-static models may overcome some of the issues facing force-free magnetic field extrapolations. So far they have seen limited use and have faced problems when applied to quiet-Sun data. Here we present a first application to an active region. We use solar vector magnetic field measurements gathered by the IMaX polarimeter during the flight of the Sunrise balloon-borne solar observatory in 2013 June as boundary conditions for a magneto-static model of the higher solar atmosphere above an active region. The IMaX data are embedded in active region vector magnetograms observed with SDO /HMI. This work continues our magneto-static extrapolation approach,more » which was applied earlier to a quiet-Sun region observed with Sunrise I. In an active region the signal-to-noise-ratio in the measured Stokes parameters is considerably higher than in the quiet-Sun and consequently the IMaX measurements of the horizontal photospheric magnetic field allow us to specify the free parameters of the model in a special class of linear magneto-static equilibria. The high spatial resolution of IMaX (110–130 km, pixel size 40 km) enables us to model the non-force-free layer between the photosphere and the mid-chromosphere vertically by about 50 grid points. In our approach we can incorporate some aspects of the mixed beta layer of photosphere and chromosphere, e.g., taking a finite Lorentz force into account, which was not possible with lower-resolution photospheric measurements in the past. The linear model does not, however, permit us to model intrinsic nonlinear structures like strongly localized electric currents.« less
The Electrostatic Instability for Realistic Pair Distributions in Blazar/EBL Cascades
NASA Astrophysics Data System (ADS)
Vafin, S.; Rafighi, I.; Pohl, M.; Niemiec, J.
2018-04-01
This work revisits the electrostatic instability for blazar-induced pair beams propagating through the intergalactic medium (IGM) using linear analysis and PIC simulations. We study the impact of the realistic distribution function of pairs resulting from the interaction of high-energy gamma-rays with the extragalactic background light. We present analytical and numerical calculations of the linear growth rate of the instability for the arbitrary orientation of wave vectors. Our results explicitly demonstrate that the finite angular spread of the beam dramatically affects the growth rate of the waves, leading to the fastest growth for wave vectors quasi-parallel to the beam direction and a growth rate at oblique directions that is only a factor of 2–4 smaller compared to the maximum. To study the nonlinear beam relaxation, we performed PIC simulations that take into account a realistic wide-energy distribution of beam particles. The parameters of the simulated beam-plasma system provide an adequate physical picture that can be extrapolated to realistic blazar-induced pairs. In our simulations, the beam looses only 1% of its energy, and we analytically estimate that the beam would lose its total energy over about 100 simulation times. An analytical scaling is then used to extrapolate the parameters of realistic blazar-induced pair beams. We find that they can dissipate their energy slightly faster by the electrostatic instability than through inverse-Compton scattering. The uncertainties arising from, e.g., details of the primary gamma-ray spectrum are too large to make firm statements for individual blazars, and an analysis based on their specific properties is required.
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interferences and false detection. Barker codes are the ones that satisfy these requirements and they have lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal or less than 13) while many applications including low probability of intercept radar, and spread spectrum communication, require much higher code lengths. The conventional techniques of finding binary phased codes in literatures include exhaust search, neural network, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to find binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to find binary codes. Experiments show that the proposed method is able to find long low sidelobe binary phased codes (code length >500) with reasonable computational cost.
NASA Astrophysics Data System (ADS)
Burant, Alex; Antonacci, Michael; McCallister, Drew; Zhang, Le; Branca, Rosa Tamara
2018-06-01
SuperParamagnetic Iron Oxide Nanoparticles (SPIONs) are often used in magnetic resonance imaging experiments to enhance Magnetic Resonance (MR) sensitivity and specificity. While the effect of SPIONs on the longitudinal and transverse relaxation time of 1H spins has been well characterized, their effect on highly diffusive spins, like those of hyperpolarized gases, has not. For spins diffusing in linear magnetic field gradients, the behavior of the magnetization is characterized by the relative size of three length scales: the diffusion length, the structural length, and the dephasing length. However, for spins diffusing in non-linear gradients, such as those generated by iron oxide nanoparticles, that is no longer the case, particularly if the diffusing spins experience the non-linearity of the gradient. To this end, 3D Monte Carlo simulations are used to simulate the signal decay and the resulting image contrast of hyperpolarized xenon gas near SPIONs. These simulations reveal that signal loss near SPIONs is dominated by transverse relaxation, with little contribution from T1 relaxation, while simulated image contrast and experiments show that diffusion provides no appreciable sensitivity enhancement to SPIONs.
Statistical properties of the radiation from SASE FEL operating in the linear regime
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-02-01
The paper presents comprehensive analysis of statistical properties of the radiation from self amplified spontaneous emission (SASE) free electron laser operating in linear mode. The investigation has been performed in a one-dimensional approximation, assuming the electron pulse length to be much larger than a coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied: field correlations, distribution of the radiation energy after monochromator installed at the FEL amplifier exit and photoelectric counting statistics of SASE FEL radiation. It is shown that the radiation from SASE FEL operating in linear regime possesses all the features corresponding to completely chaotic polarized radiation.
NASA Astrophysics Data System (ADS)
Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria
2018-03-01
Polymer optical fibers (POFs) are suitable for applications such as curvature sensors, strain, temperature, liquid level, among others. However, for enhancing sensitivity, many polymer optical fiber curvature sensors based on intensity variation require a lateral section. Lateral section length, depth, and surface roughness have great influence on the sensor sensitivity, hysteresis, and linearity. Moreover, the sensor curvature radius increase the stress on the fiber, which leads on variation of the sensor behavior. This paper presents the analysis relating the curvature radius and lateral section length, depth and surface roughness with the sensor sensitivity, hysteresis and linearity for a POF curvature sensor. Results show a strong correlation between the decision parameters behavior and the performance for sensor applications based on intensity variation. Furthermore, there is a trade-off among the sensitive zone length, depth, surface roughness, and curvature radius with the sensor desired performance parameters, which are minimum hysteresis, maximum sensitivity, and maximum linearity. The optimization of these parameters is applied to obtain a sensor with sensitivity of 20.9 mV/°, linearity of 0.9992 and hysteresis below 1%, which represent a better performance of the sensor when compared with the sensor without the optimization.
2011-04-01
this limitation the length of the windows needs to be shortened. It is also leads to a narrower confidence interval, see Figure 2.9. 82 The " big ...least one event will occur within the window. The windows are then grouped in sets of two and the process is reapeated for a window size twice as big ...0 505 T. Fu 1 506 D. Walden 1 508 J. Brown 1 55 T.Applebee 0 55 M. Dipper 1 551 T. Smith I 551 C. Bassler 3 3 551 V. Belenky 1 551 W. Belknap
NASA Astrophysics Data System (ADS)
Ono, Hiroshi; Kawatsuki, Nobuhiro
1995-03-01
The relationship between the saponification rate of poly(vinyl alcohol) (PVA), and the electrooptical properties and morphology of the PVA/liquid crystal (LC) composite films was investigated. Light transmission clazing and the LC droplet size were varied by changing the saponification rate or the blend ratio of two kinds of PVA with different saponification rates because the refractive index and surface tension could be controlled by the saponification rate of PVA. The threshold voltage decreased with increasing saponification rate though the extrapolation length was decreased. It was suggested that the electrooptical properties were strongly dependent on the droplet size.
Low-energy pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Ai, Li; Kaufmann, W. B.
1998-02-01
An analysis of low-energy charged pion-nucleon data from recent π+/-p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f2=0.0756+/-0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided.
Nonlinear dynamics support a linear population code in a retinal target-tracking circuit.
Leonardo, Anthony; Meister, Markus
2013-10-23
A basic task faced by the visual system of many organisms is to accurately track the position of moving prey. The retina is the first stage in the processing of such stimuli; the nature of the transformation here, from photons to spike trains, constrains not only the ultimate fidelity of the tracking signal but also the ease with which it can be extracted by other brain regions. Here we demonstrate that a population of fast-OFF ganglion cells in the salamander retina, whose dynamics are governed by a nonlinear circuit, serve to compute the future position of the target over hundreds of milliseconds. The extrapolated position of the target is not found by stimulus reconstruction but is instead computed by a weighted sum of ganglion cell outputs, the population vector average (PVA). The magnitude of PVA extrapolation varies systematically with target size, speed, and acceleration, such that large targets are tracked most accurately at high speeds, and small targets at low speeds, just as is seen in the motion of real prey. Tracking precision reaches the resolution of single photoreceptors, and the PVA algorithm performs more robustly than several alternative algorithms. If the salamander brain uses the fast-OFF cell circuit for target extrapolation as we suggest, the circuit dynamics should leave a microstructure on the behavior that may be measured in future experiments. Our analysis highlights the utility of simple computations that, while not globally optimal, are efficiently implemented and have close to optimal performance over a limited but ethologically relevant range of stimuli.
Zhou, Qing-he; Xiao, Wang-pin; Shen, Ying-yan
2014-07-01
The spread of spinal anesthesia is highly unpredictable. In patients with increased abdominal girth and short stature, a greater cephalad spread after a fixed amount of subarachnoidally administered plain bupivacaine is often observed. We hypothesized that there is a strong correlation between abdominal girth/vertebral column length and cephalad spread. Age, weight, height, body mass index, abdominal girth, and vertebral column length were recorded for 114 patients. The L3-L4 interspace was entered, and 3 mL of 0.5% plain bupivacaine was injected into the subarachnoid space. The cephalad spread (loss of temperature sensation and loss of pinprick discrimination) was assessed 30 minutes after intrathecal injection. Linear regression analysis was performed for age, weight, height, body mass index, abdominal girth, vertebral column length, and the spread of spinal anesthesia, and the combined linear contribution of age up to 55 years, weight, height, abdominal girth, and vertebral column length was tested by multiple regression analysis. Linear regression analysis showed that there was a significant univariate correlation among all 6 patient characteristics evaluated and the spread of spinal anesthesia (all P < 0.039) except for age and loss of temperature sensation (P > 0.068). Multiple regression analysis showed that abdominal girth and the vertebral column length were the key determinants for spinal anesthesia spread (both P < 0.0001), whereas age, weight, and height could be omitted without changing the results (all P > 0.059, all 95% confidence limits < 0.372). Multiple regression analysis revealed that the combination of a patient's 5 general characteristics, especially abdominal girth and vertebral column length, had a high predictive value for the spread of spinal anesthesia after a given dose of plain bupivacaine.
Long-Period Tidal Variations in the Length of Day
NASA Technical Reports Server (NTRS)
Ray, Richard D.; Erofeeva, Svetlana Y.
2014-01-01
A new model of long-period tidal variations in length of day is developed. The model comprises 80 spectral lines with periods between 18.6 years and 4.7 days, and it consistently includes effects of mantle anelasticity and dynamic ocean tides for all lines. The anelastic properties followWahr and Bergen; experimental confirmation for their results now exists at the fortnightly period, but there remains uncertainty when extrapolating to the longest periods. The ocean modeling builds on recent work with the fortnightly constituent, which suggests that oceanic tidal angular momentum can be reliably predicted at these periods without data assimilation. This is a critical property when modeling most long-period tides, for which little observational data exist. Dynamic ocean effects are quite pronounced at shortest periods as out-of-phase rotation components become nearly as large as in-phase components. The model is tested against a 20 year time series of space geodetic measurements of length of day. The current international standard model is shown to leave significant residual tidal energy, and the new model is found to mostly eliminate that energy, with especially large variance reduction for constituents Sa, Ssa, Mf, and Mt.
Turbulence closure for mixing length theories
NASA Astrophysics Data System (ADS)
Jermyn, Adam S.; Lesaffre, Pierre; Tout, Christopher A.; Chitre, Shashikumar M.
2018-05-01
We present an approach to turbulence closure based on mixing length theory with three-dimensional fluctuations against a two-dimensional background. This model is intended to be rapidly computable for implementation in stellar evolution software and to capture a wide range of relevant phenomena with just a single free parameter, namely the mixing length. We incorporate magnetic, rotational, baroclinic, and buoyancy effects exactly within the formalism of linear growth theories with non-linear decay. We treat differential rotation effects perturbatively in the corotating frame using a novel controlled approximation, which matches the time evolution of the reference frame to arbitrary order. We then implement this model in an efficient open source code and discuss the resulting turbulent stresses and transport coefficients. We demonstrate that this model exhibits convective, baroclinic, and shear instabilities as well as the magnetorotational instability. It also exhibits non-linear saturation behaviour, and we use this to extract the asymptotic scaling of various transport coefficients in physically interesting limits.
Effects of energy chirp on bunch length measurement in linear accelerator beams
NASA Astrophysics Data System (ADS)
Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.
2017-08-01
The effects of assumptions about bunch properties on the accuracy of the measurement method of the bunch length based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particle), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study on this effect in the electron LINAC of a gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using an electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is proved to increase linearly with the RFD phase offset. In particular, for an offset of {{7}\\circ} , corresponding to a vertical centroid offset at a screen of about 1 mm, the relative error is 4.5%.
Kim, Tae-Woo; Kim, Woojae; Park, Kyu Hyung; Kim, Pyosang; Cho, Jae-Won; Shimizu, Hideyuki; Iyoda, Masahiko; Kim, Dongho
2016-02-04
Exciton dynamics in π-conjugated molecular systems is highly susceptible to conformational disorder. Using time-resolved and single-molecule spectroscopic techniques, the effect of chain length on the exciton dynamics in a series of linear oligothiophenes, for which the conformational disorder increased with increasing chain length, was investigated. As a result, extraordinary features of the exciton dynamics in longer-chain oligothiophene were revealed. Ultrafast fluorescence depolarization processes were observed due to exciton self-trapping in longer and bent chains. Increase in exciton delocalization during dynamic planarization processes was also observed in the linear oligothiophenes via time-resolved fluorescence spectra but was restricted in L-10T because of its considerable conformational disorder. Exciton delocalization was also unexpectedly observed in a bent chain using single-molecule fluorescence spectroscopy. Such delocalization modulates the fluorescence spectral shape by attenuating the 0-0 peak intensity. Collectively, these results provide significant insights into the exciton dynamics in conjugated polymers.
Can we detect a nonlinear response to temperature in European plant phenology?
NASA Astrophysics Data System (ADS)
Jochner, Susanne; Sparks, Tim H.; Laube, Julia; Menzel, Annette
2016-10-01
Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C-1 (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ˜14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
Separated-orbit bisected energy-recovered linear accelerator
Douglas, David R.
2015-09-01
A separated-orbit bisected energy-recovered linear accelerator apparatus and method. The accelerator includes a first linac, a second linac, and a plurality of arcs of differing path lengths, including a plurality of up arcs, a plurality of downgoing arcs, and a full energy arc providing a path independent of the up arcs and downgoing arcs. The up arcs have a path length that is substantially a multiple of the RF wavelength and the full energy arc includes a path length that is substantially an odd half-integer multiple of the RF wavelength. Operation of the accelerator includes accelerating the beam utilizing the linacs and up arcs until the beam is at full energy, at full energy executing a full recirculation to the second linac using a path length that is substantially an odd half-integer of the RF wavelength, and then decelerating the beam using the linacs and downgoing arcs.
The all-fiber cladding-pumped Yb-doped gain-switched laser.
Larsen, C; Hansen, K P; Mattsson, K E; Bang, O
2014-01-27
Gain-switching is an alternative pulsing technique of fiber lasers, which is power scalable and has a low complexity. From a linear stability analysis of rate equations the relaxation oscillation period is derived and from it, the pulse duration is defined. Good agreement between the measured pulse duration and the theoretical prediction is found over a wide range of parameters. In particular we investigate the influence of an often present length of passive fiber in the cavity and show that it introduces a finite minimum in the achievable pulse duration. This minimum pulse duration is shown to occur at longer active fibers length with increased passive length of fiber in the cavity. The peak power is observed to depend linearly on the absorbed pump power and be independent of the passive fiber length. Given these conclusions, the pulse energy, duration, and peak power can be estimated with good precision.
Diffusion of isolated DNA molecules: dependence on length and topology.
Robertson, Rae M; Laib, Stephan; Smith, Douglas E
2006-05-09
The conformation and dynamics of circular polymers is a subject of considerable theoretical and experimental interest. DNA is an important example because it occurs naturally in different topological states, including linear, relaxed circular, and supercoiled circular forms. A fundamental question is how the diffusion coefficients of isolated polymers scale with molecular length and how they vary for different topologies. Here, diffusion coefficients D for relaxed circular, supercoiled, and linear DNA molecules of length L ranging from approximately 6 to 290 kbp were measured by tracking the Brownian motion of single molecules. A topology-independent scaling law D approximately L(-nu) was observed with nu(L) = 0.571 +/- 0.014, nu(C) = 0.589 +/- 0.018, and nu(S) = 0.571 +/- 0.057 for linear, relaxed circular, and supercoiled DNA, respectively, in good agreement with the scaling exponent of nu congruent with 0.588 predicted by renormalization group theory for polymers with significant excluded volume interactions. Our findings thus provide evidence in support of several theories that predict an effective diameter of DNA much greater than the Debye screening length. In addition, the measured ratio D(Circular)/D(Linear) = 1.32 +/- 0.014 was closer to the value of 1.45 predicted by using renormalization group theory than the value of 1.18 predicted by classical Kirkwood hydrodynamic theory and agreed well with a value of 1.31 predicted when incorporating a recently proposed expression for the radius of gyration of circular polymers into the Zimm model.
Two-dimensional linear and nonlinear Talbot effect from rogue waves.
Zhang, Yiqi; Belić, Milivoj R; Petrović, Milan S; Zheng, Huaibin; Chen, Haixia; Li, Changbiao; Lu, Keqing; Zhang, Yanpeng
2015-03-01
We introduce two-dimensional (2D) linear and nonlinear Talbot effects. They are produced by propagating periodic 2D diffraction patterns and can be visualized as 3D stacks of Talbot carpets. The nonlinear Talbot effect originates from 2D rogue waves and forms in a bulk 3D nonlinear medium. The recurrences of an input rogue wave are observed at the Talbot length and at the half-Talbot length, with a π phase shift; no other recurrences are observed. Differing from the nonlinear Talbot effect, the linear effect displays the usual fractional Talbot images as well. We also find that the smaller the period of incident rogue waves, the shorter the Talbot length. Increasing the beam intensity increases the Talbot length, but above a threshold this leads to a catastrophic self-focusing phenomenon which destroys the effect. We also find that the Talbot recurrence can be viewed as a self-Fourier transform of the initial periodic beam that is automatically performed during propagation. In particular, linear Talbot effect can be viewed as a fractional self-Fourier transform, whereas the nonlinear Talbot effect can be viewed as the regular self-Fourier transform. Numerical simulations demonstrate that the rogue-wave initial condition is sufficient but not necessary for the observation of the effect. It may also be observed from other periodic inputs, provided they are set on a finite background. The 2D effect may find utility in the production of 3D photonic crystals.
Canal, Nelson A.; Hernández-Ortiz, Vicente; Salas, Juan O. Tigrero; Selivon, Denise
2015-01-01
Abstract The occurrence of cryptic species among economically important fruit flies strongly affects the development of management tactics for these pests. Tools for studying cryptic species not only facilitate evolutionary and systematic studies, but they also provide support for fruit fly management and quarantine activities. Previous studies have shown that the South American fruit fly, Anastrepha fraterculus, is a complex of cryptic species, but few studies have been performed on the morphology of its immature stages. An analysis of mandible shape and linear morphometric variability was applied to third-instar larvae of five morphotypes of the Anastrepha fraterculus complex: Mexican, Andean, Ecuadorian, Peruvian and Brazilian-1. Outline geometric morphometry was used to study the mouth hook shape and linear morphometry analysis was performed using 24 linear measurements of the body, cephalopharyngeal skeleton, mouth hook and hypopharyngeal sclerite. Different morphotypes were grouped accurately using canonical discriminant analyses of both the geometric and linear morphometry. The shape of the mandible differed among the morphotypes, and the anterior spiracle length, number of tubules of the anterior spiracle, length and height of the mouth hook and length of the cephalopharyngeal skeleton were the most significant variables in the linear morphometric analysis. Third-instar larvae provide useful characters for studies of cryptic species in the Anastrepha fraterculus complex. PMID:26798253
Vandevijvere, Stefanie; Mackenzie, Tara; Mhurchu, Cliona Ni
2017-04-26
In-store availability of healthy and unhealthy foods may influence consumer purchases. Methods used to measure food availability, however, vary widely. A simple, valid, and reliable indicator to collect comparable data on in-store food availability is needed. Cumulative linear shelf length of and variety within 22 healthy and 28 unhealthy food groups, determined based on a comparison of three nutrient profiling systems, were measured in 15 New Zealand supermarkets. Inter-rater reliability was tested in one supermarket by a second researcher. The construct validity of five simple indicators of relative availability of healthy versus unhealthy foods was assessed against this 'gold standard'. Cumulative linear shelf length was a more sensitive and feasible measure of food availability than variety. Four out of five shelf length ratio indicators were significantly associated with the gold standard (ρ = 0.70-0.75). Based on a non-significant difference from the 'gold standard' (d = 0.053 ± 0.040) and feasibility, the ratio of cumulative linear shelf length of fresh and frozen fruits and vegetables versus soft and energy drinks, crisps and snacks, sweet biscuits and confectionery performed best for use in New Zealand supermarkets. Four out of the five shelf length ratio indicators of the relative availability of healthy versus unhealthy foods in-store tested could be used for future research and monitoring, but additional validation studies in other settings and countries are recommended. Consistent use of those shelf length ratio indicators could enhance comparability of supermarket food availability between studies, and help inform policies to create healthy consumer food retail environments.
Cephalometric features in isolated growth hormone deficiency.
Oliveira-Neto, Luiz Alves; Melo, Mariade de Fátima B; Franco, Alexandre A; Oliveira, Alaíde H A; Souza, Anita H O; Valença, Eugênia H O; Britto, Isabela M P A; Salvatori, Roberto; Aguiar-Oliveira, Manuel H
2011-07-01
To analyze cephalometric features in adults with isolated growth hormone (GH) deficiency (IGHD). Nine adult IGHD individuals (7 males and 2 females; mean age, 37.8 ± 13.8 years) underwent a cross-sectional cephalometric study, including 9 linear and 5 angular measurements. Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were calculated. To pool cephalometric measurements in both genders, results were normalized by standard deviation scores (SDS), using the population means from an atlas of the normal Brazilian population. All linear measurements were reduced in IGHD subjects. Total maxillary length was the most reduced parameter (-6.5 ± 1.7), followed by a cluster of six measurements: posterior cranial base length (-4.9 ± 1.1), total mandibular length (-4.4 ± 0.7), total posterior facial height (-4.4 ± 1.1), total anterior facial height (-4.3 ± 0.9), mandibular corpus length (-4.2 ± 0.8), and anterior cranial base length (-4.1 ± 1.7). Less affected measurements were lower-anterior facial height (-2.7 ± 0.7) and mandibular ramus height (-2.5 ± 1.5). SDS angular measurements were in the normal range, except for increased gonial angle (+2.5 ± 1.1). Posterior facial height/anterior facial height and lower-anterior facial height/anterior facial height ratios were not different from those of the reference group. Congenital, untreated IGHD causes reduction of all linear measurements of craniofacial growth, particularly total maxillary length. Angular measurements and facial height ratios are less affected, suggesting that lGHD causes proportional blunting of craniofacial growth.
Landsat Thematic Mapper monitoring of turbid inland water quality
NASA Technical Reports Server (NTRS)
Lathrop, Richard G., Jr.
1992-01-01
This study reports on an investigation of water quality calibration algorithms under turbid inland water conditions using Landsat Thematic Mapper (TM) multispectral digital data. TM data and water quality observations (total suspended solids and Secchi disk depth) were obtained near-simultaneously and related using linear regression techniques. The relationships between reflectance and water quality for Green Bay and Lake Michigan were compared with results for Yellowstone and Jackson Lakes, Wyoming. Results show similarities in the water quality-reflectance relationships, however, the algorithms derived for Green Bay - Lake Michigan cannot be extrapolated to Yellowstone and Jackson Lake conditions.
NASA Astrophysics Data System (ADS)
Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.
2018-01-01
The Milne problem is studied in one speed neutron transport theory using the linearly anisotropic scattering kernel which combines forward and backward scatterings (extremely anisotropic scattering) for a non-absorbing medium with specular and diffuse reflection boundary conditions. In order to calculate the extrapolated endpoint for the Milne problem, Legendre polynomial approximation (PN method) is applied and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with the existing results in literature.
Controlled experiments in cosmological gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
A systematic study is conducted of gravitational instability in 3D on the basis of power-law initial spectra with and without spectral cutoff, emphasizing nonlinear effects and measures of nonlinearity; effects due to short and long waves in the initial conditions are separated. The existence of second-general pancakes is confirmed, and it is noted that while these are inhomogeneous, they generate a visually strong signal of filamentarity. An explicit comparison of smoothed initial conditions with smoothed envelope models also reconfirms the need to smooth over a scale larger than any nonlinearity, in order to extrapolate directly by linear theory from Gaussian initial conditions.
Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.
2016-01-21
Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
Measurements and predictions of the 6s6p{sup 1,3}P{sub 1} lifetimes in the Hg isoelectronic sequence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L. J.; Irving, R. E.; Henderson, M.
2001-04-01
Experimental and theoretical values for the lifetimes of the 6s6p{sup 1}P{sub 1} and {sup 3}P{sub 1} levels in the Hg isoelectronic sequence are examined in the context of a data-based isoelectronic systematization. New beam-foil measurements for lifetimes in Pb III and Bi IV are reported and included in a critical evaluation of the available database. These results are combined with ab initio theoretical calculations and linearizing parametrizations to make predictive extrapolations for ions with 84{<=}Z{le}92.
Sputtering of cobalt and chromium by argon and xenon ions near the threshold energy region
NASA Technical Reports Server (NTRS)
Handoo, A. K.; Ray, P. K.
1993-01-01
Sputtering yields of cobalt and chromium by argon and xenon ions with energies below 50 eV are reported. The targets were electroplated on copper substrates. Measurable sputtering yields were obtained from cobalt with ion energies as low as 10 eV. The ion beams were produced by an ion gun. A radioactive tracer technique was used for the quantitative measurement of the sputtering yield. Co-57 and Cr-51 were used as tracers. The yield-energy curves are observed to be concave, which brings into question the practice of finding threshold energies by linear extrapolation.
Galvão, B R L; Rodrigues, S P J; Varandas, A J C
2008-07-28
A global ab initio potential energy surface is proposed for the water molecule by energy-switching/merging a highly accurate isotope-dependent local potential function reported by Polyansky et al. [Science 299, 539 (2003)] with a global form of the many-body expansion type suitably adapted to account explicitly for the dynamical correlation and parametrized from extensive accurate multireference configuration interaction energies extrapolated to the complete basis set limit. The new function mimics also the complicated Sigma/Pi crossing that arises at linear geometries of the water molecule.
Tidal evolution of the Galilean satellites - A linearized theory
NASA Technical Reports Server (NTRS)
Greenberg, R.
1981-01-01
The Laplace resonance among the Galilean satellites Io, Europa, and Ganymede is traditionally reduced to a pendulum-like dynamical problem by neglecting short-period variations of several orbital elements. However, some of these variations that can now be neglected may once have had longer periods, comparable to the 'pendulum' period, if the system was formerly in deep resonance (pairs of periods even closer to the ratio 2:1 than they are now). In that case, the dynamical system cannot be reduced to fewer than nine dimensions. The nine-dimensional system is linearized here in order to study small variations about equilibrium. When tidal effects are included, the resulting evolution is substantially the same as was indicated by the pendulum approach, except that evolution out of deep resonance is found to be somewhat slower than suggested by extrapolation of the pendulum results. This slower rate helps support the hypothesis that the system may have evolved from deep resonance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haustein, P.E.; Brenner, D.S.; Casten, R.F.
1987-12-10
A new semi-empirical method, based on the use of the P-factor (P = N/sub p/N/sub n//(N/sub p/+N/sub n/)), is shown to simplify significantly the systematics of atomic masses. Its uses is illustrated for actinide nuclei where complicated patterns of mass systematics seen in traditional plots versus Z, N, or isospin are consolidated and transformed into linear ones extending over long isotopic and isotonic sequences. The linearization of the systematics by this procedure provides a simple basis for mass prediction. For many unmeasured nuclei beyond the known mass surface, the P-factor method operates by interpolation among data for known nuclei rathermore » than by extrapolation, as is common in other mass models.« less
Hysteresis between coral reef calcification and the seawater aragonite saturation state
NASA Astrophysics Data System (ADS)
McMahon, Ashly; Santos, Isaac R.; Cyronak, Tyler; Eyre, Bradley D.
2013-09-01
predictions of how ocean acidification (OA) will affect coral reefs assume a linear functional relationship between the ambient seawater aragonite saturation state (Ωa) and net ecosystem calcification (NEC). We quantified NEC in a healthy coral reef lagoon in the Great Barrier Reef during different times of the day. Our observations revealed a diel hysteresis pattern in the NEC versus Ωa relationship, with peak NEC rates occurring before the Ωa peak and relatively steady nighttime NEC in spite of variable Ωa. Net ecosystem production had stronger correlations with NEC than light, temperature, nutrients, pH, and Ωa. The observed hysteresis may represent an overlooked challenge for predicting the effects of OA on coral reefs. If widespread, the hysteresis could prevent the use of a linear extrapolation to determine critical Ωa threshold levels required to shift coral reefs from a net calcifying to a net dissolving state.
[Medical and biological consequences of nuclear disasters].
Stalpers, Lukas J A; van Dullemen, Simon; Franken, N A P Klaas
2012-01-01
Medical risks of radiation exaggerated; psychological risks underestimated. The discussion about atomic energy has become topical again following the nuclear accident in Fukushima. There is some argument about the gravity of medical and biological consequences of prolonged exposure to radiation. The risk of cancer following a low dose of radiation is usually estimated by linear extrapolation of the incidence of cancer among survivors of the atomic bombs dropped on Hiroshima and Nagasaki in 1945. The radiobiological linear-quadratic model (LQ-model) gives a more accurate description of observed data, is radiobiologically more plausible and is better supported by experimental and clinical data. On the basis of this model there is less risk of cancer being induced following radiation exposure. The gravest consequence of Chernobyl and Fukushima is not the medical and biological damage, but the psychological and economical impact on rescue workers and former inhabitants.
A Kalman filter for a two-dimensional shallow-water model
NASA Technical Reports Server (NTRS)
Parrish, D. F.; Cohn, S. E.
1985-01-01
A two-dimensional Kalman filter is described for data assimilation for making weather forecasts. The filter is regarded as superior to the optimal interpolation method because the filter determines the forecast error covariance matrix exactly instead of using an approximation. A generalized time step is defined which includes expressions for one time step of the forecast model, the error covariance matrix, the gain matrix, and the evolution of the covariance matrix. Subsequent time steps are achieved by quantifying the forecast variables or employing a linear extrapolation from a current variable set, assuming the forecast dynamics are linear. Calculations for the evolution of the error covariance matrix are banded, i.e., are performed only with the elements significantly different from zero. Experimental results are provided from an application of the filter to a shallow-water simulation covering a 6000 x 6000 km grid.
Applying Occam's Razor To The Proton Radius Puzzle
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas
2016-09-01
Over the past five decades, ever more complex mathematical functions have been used to extract the radius of the proton from electron scattering data. For example, in 1963 the proton radius was extracted with linear and quadratic fits of low Q2 data (< 3 fm-2) and by 2014 a non-linear regression of two tenth order power series functions with thirty-one normalization parameters and data out to 25 fm-2 was used. But for electron scattering, the radius of the proton is determined by extracting the slope of the charge form factor at a Q2 of zero. By using higher precision data than was available in 1963 and focusing on the low Q2 data from 1974 to today, we find extrapolating functions consistently produce a proton radius of around 0.84 fm. A result that is in agreement with modern Lamb shift measurements.
NASA Astrophysics Data System (ADS)
Green, Jonathan; Schmitz, Oliver; Severn, Greg; van Ruremonde, Lars; Winters, Victoria
2017-10-01
The MARIA device at the UW-Madison is used primarily to investigate the dynamics and fueling of neutral particles in helicon discharges. A new systematic method is in development to measure key plasma and neutral particle parameters by spectroscopic methods. The setup relies on spectroscopic line ratios for investigating basic plasma parameters and extrapolation to other states using a collisional radiative model. Active pumping using a Nd:YAG pumped dye laser is used to benchmark and correct the underlying atomic data for the collisional radiative model. First results show a matching linear dependence between electron density and laser induced fluorescence on the magnetic field above 500G. This linear dependence agrees with the helicon dispersion relation and implies MARIA can reliably support the helicon mode and support future measurements. This work was funded by the NSF CAREER award PHY-1455210.
Image enhancement by non-linear extrapolation in frequency space
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)
1998-01-01
An input image is enhanced to include spatial frequency components having frequencies higher than those in an input image. To this end, an edge map is generated from the input image using a high band pass filtering technique. An enhancing map is subsequently generated from the edge map, with the enhanced map having spatial frequencies exceeding an initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
Interpretation guidelines of a standard Y-chromosome STR 17-plex PCR-CE assay for crime casework.
Roewer, Lutz; Geppert, Maria
2012-01-01
Y-STR analysis is an invaluable tool to examine evidence in sexual assault cases and in other forensic casework. Unambiguous detection of the male component in DNA mixtures with a high female background is still the main field of application of forensic Y-STR haplotyping. In the last years, powerful technologies including a 17-locus multiplex PCR assay have been introduced in the forensic laboratories. At the same time, statistical methods have been developed and adapted for interpretation of a nonrecombining, linear marker as the Y-chromosome which shows a strongly clustered geographical distribution due to the linear inheritance and the patrilocality of ancestral groups. Large population databases, namely the Y-STR Haplotype Reference Database (YHRD), have been established to assess the evidentiary value of Y-STR matches by means of frequency estimation methods (counting and extrapolation).
Visual memory transformations in dyslexia.
Barnes, James; Hinkley, Lisa; Masters, Stuart; Boubert, Laura
2007-06-01
Representational Momentum refers to observers' distortion of recognition memory for pictures that imply motion because of an automatic mental process which extrapolates along the implied trajectory of the picture. Neuroimaging evidence suggests that activity in the magnocellular visual pathway is necessary for representational momentum to occur. It has been proposed that individuals with dyslexia have a magnocellular deficit, so it was hypothesised that these individuals would show reduced or absent representational momentum. In this study, 30 adults with dyslexia and 30 age-matched controls were compared on two tasks, one linear and one rotation, which had previously elicited the representational momentum effect. Analysis indicated significant differences in the performance of the two groups, with the dyslexia group having a reduced susceptibility to representational momentum in both linear and rotational directions. The findings highlight that deficits in temporal spatial processing may contribute to the perceptual profile of dyslexia.
Studies on the Parametric Effects of Plasma Arc Welding of 2205 Duplex Stainless Steel
NASA Astrophysics Data System (ADS)
Selva Bharathi, R.; Siva Shanmugam, N.; Murali Kannan, R.; Arungalai Vendan, S.
2018-03-01
This research study attempts to create an optimized parametric window by employing Taguchi algorithm for Plasma Arc Welding (PAW) of 2 mm thick 2205 duplex stainless steel. The parameters considered for experimentation and optimization are the welding current, welding speed and pilot arc length respectively. The experimentation involves the parameters variation and subsequently recording the depth of penetration and bead width. Welding current of 60-70 A, welding speed of 250-300 mm/min and pilot arc length of 1-2 mm are the range between which the parameters are varied. Design of experiments is used for the experimental trials. Back propagation neural network, Genetic algorithm and Taguchi techniques are used for predicting the bead width, depth of penetration and validated with experimentally achieved results which were in good agreement. Additionally, micro-structural characterizations are carried out to examine the weld quality. The extrapolation of these optimized parametric values yield enhanced weld strength with cost and time reduction.
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes
Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; ...
2015-11-17
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simplemore » mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stressdependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.« less
In Silico Measurements of Twist and Bend Moduli for β-Solenoid Protein Self-Assembly Units.
Heinz, Leonard P; Ravikumar, Krishnakumar M; Cox, Daniel L
2015-05-13
We compute potentials of mean force for bend and twist deformations via force pulling and umbrella sampling experiments for four β-solenoid proteins (BSPs) that show promise in nanotechnology applications. In all cases, we find quasi-Hooke's law behavior until the point of rupture. Bending moduli show modest anisotropy for two-sided and three-sided BSPs, and little anisotropy for a four-sided BSP. There is a slight clockwise/counterclockwise asymmetry in the twist potential of mean force, showing greater stiffness when the applied twist follows the intrinsic twist. When we extrapolate to beam theory appropriate for amyloid fibrils of the BSPs, we find bend/twist moduli which are somewhat smaller than those in the literature for other amyloid fibrils. Twist persistence lengths are on the order of a micron, and bend persistence lengths are several microns. Provided the intrinsic twist can be reversed, these results support the usage of BSPs in biomaterials applications.
Effects of the internal friction and the solvent quality on the dynamics of a polymer chain closure.
Yu, Wancheng; Luo, Kaifu
2015-03-28
Using 3D Langevin dynamics simulations, we investigate the effects of the internal friction and the solvent quality on the dynamics of a polymer chain closure. We show that the chain closure in good solvents is a purely diffusive process. By extrapolation to zero solvent viscosity, we find that the internal friction of a chain plays a non-ignorable role in the dynamics of the chain closure. When the solvent quality changes from good to poor, the mean closure time τc decreases by about 1 order of magnitude for the chain length 20 ≤ N ≤ 100. Furthermore, τc has a minimum as a function of the solvent quality. With increasing the chain length N, the minimum of τc occurs at a better solvent. Finally, the single exponential distributions of the closure time in poor solvents suggest that the negative excluded volume of segments does not alter the nearly Poisson statistical characteristics of the process of the chain closure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veres, P.; Dermer, C. D.; Dhuga, K. S.
The magnetic field in intergalactic space gives important information about magnetogenesis in the early universe. The properties of this field can be probed by searching for radiation of secondary e {sup +} e {sup −} pairs created by TeV photons that produce GeV range radiation by Compton-scattering cosmic microwave background photons. The arrival times of the GeV “echo” photons depend strongly on the magnetic field strength and coherence length. A Monte Carlo code that accurately treats pair creation is developed to simulate the spectrum and time-dependence of the echo radiation. The extrapolation of the spectrum of powerful gamma-ray bursts (GRBs)more » like GRB 130427A to TeV energies is used to demonstrate how the intergalactic magnetic field can be constrained if it falls in the 10{sup −21}–10{sup −17} G range for a 1 Mpc coherence length.« less
Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan
2012-12-01
A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for them using Classification And Regression Trees (CART). In this work, Ant Colony Optimization is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods have been applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental, i.e. extrapolated to a mobile phase consisting of pure water, and predicted logarithms of the retention factors of the drugs (logk(w)). The overall best model was the SVM one built using descriptors selected by ACO. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Tyagi, Chetna; Yadav, Preeti; Sharma, Ambika
2018-05-01
The present work reveals the optical study of Se82Te15Bi1.0Sn2.0/polyvinylpyrrolidone (PVP) nanocomposites. Bulk glasses of chalcogenide was prepared by well-known melt quenching technique. Wet chemical technique is proposed for making the composite of Se82Te15Bi1.0Sn2.0 and PVP polymer as it is easy to handle and cost effective. The composites films were made on glass slide from the solution of Se-Te-Bi-Sn and PVP polymer using spin coating technique. The transmission as well as absorbance is recorded by using UV-Vis-NIR spectrophotometer in the spectral range 350-700 nm. The linear refractive index (n) of polymer nanocomposites are calculated by Swanepoel approach. The linear refractive index (n) PVP doped Se82Te15Bi1.0Sn2.0 chalcogenide is found to be 1.7. The optical band gap has been evaluated by means of Tauc extrapolation method. Tichy and Ticha model was utilized for the characterization of nonlinear refractive index (n2).
Bilinear modeling and nonlinear estimation
NASA Technical Reports Server (NTRS)
Dwyer, Thomas A. W., III; Karray, Fakhreddine; Bennett, William H.
1989-01-01
New methods are illustrated for online nonlinear estimation applied to the lateral deflection of an elastic beam on board measurements of angular rates and angular accelerations. The development of the filter equations, together with practical issues of their numerical solution as developed from global linearization by nonlinear output injection are contrasted with the usual method of the extended Kalman filter (EKF). It is shown how nonlinear estimation due to gyroscopic coupling can be implemented as an adaptive covariance filter using off-the-shelf Kalman filter algorithms. The effect of the global linearization by nonlinear output injection is to introduce a change of coordinates in which only the process noise covariance is to be updated in online implementation. This is in contrast to the computational approach which arises in EKF methods arising by local linearization with respect to the current conditional mean. Processing refinements for nonlinear estimation based on optimal, nonlinear interpolation between observations are also highlighted. In these methods the extrapolation of the process dynamics between measurement updates is obtained by replacing a transition matrix with an operator spline that is optimized off-line from responses to selected test inputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, R.B.; Aydogan, B.
In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, andmore » (4) using a combination of the mass and time methods. The residence time ratios were found to be log normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiologic time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone. 63 refs., 4 figs., 3 tabs.« less
Combination of Eight Alleles at Four Quantitative Trait Loci Determines Grain Length in Rice
Zeng, Yuxiang; Ji, Zhijuan; Wen, Zhihua; Liang, Yan; Yang, Changdeng
2016-01-01
Grain length is an important quantitative trait in rice (Oryza sativa L.) that influences both grain yield and exterior quality. Although many quantitative trait loci (QTLs) for grain length have been identified, it is still unclear how different alleles from different QTLs regulate grain length coordinately. To explore the mechanisms of QTL combination in the determination of grain length, five mapping populations, including two F2 populations, an F3 population, an F7 recombinant inbred line (RIL) population, and an F8 RIL population, were developed from the cross between the U.S. tropical japonica variety ‘Lemont’ and the Chinese indica variety ‘Yangdao 4’ and grown under different environmental conditions. Four QTLs (qGL-3-1, qGL-3-2, qGL-4, and qGL-7) for grain length were detected using both composite interval mapping and multiple interval mapping methods in the mapping populations. In each locus, there was an allele from one parent that increased grain length and another allele from another parent that decreased it. The eight alleles in the four QTLs were analyzed to determine whether these alleles act additively across loci, and lead to a linear relationship between the predicted breeding value of QTLs and phenotype. Linear regression analysis suggested that the combination of eight alleles determined grain length. Plants carrying more grain length-increasing alleles had longer grain length than those carrying more grain length-decreasing alleles. This trend was consistent in all five mapping populations and demonstrated the regulation of grain length by the four QTLs. Thus, these QTLs are ideal resources for modifying grain length in rice. PMID:26942914
NASA Astrophysics Data System (ADS)
Berdnikov, Y.; Zhiglinsky, A. A.; Rylkova, M. V.; Dubrovskii, V. G.
2017-11-01
We present a model for kinetic broadening effects on the length distributions of Au-catalyzed III-V nanowires obtained in the growth regime with adatom diffusion from the substrate and the nanowire sidewalls to the top. We observe three different regimes for the length distribution evolution with time. For short growth times, the length distribution is sub-Poissonian, converting to broader than Poissonian with increasing the mean length above a certain threshold value. After the diffusion flux from the nanowire sidewalls has stabilized, the length distribution variance increases linearly with the mean length, as in the Poissonian process.
Synthesis and Electronic Properties of Length-Defined 9,10-Anthrylene-Butadiynylene Oligomers.
Nagaoka, Maiko; Tsurumaki, Eiji; Nishiuchi, Mai; Iwanaga, Tetsuo; Toyota, Shinji
2018-05-18
Linear π-conjugated oligomers consisting of anthracene and diacetylene units were readily synthesized by a one-pot process that involved desilylation and oxidative coupling from appropriate building units. We were able to isolate length-defined oligomers ranging from 2-mer to 6-mer as stable and soluble solids. The bathochromic shifts in the UV-vis spectra suggested that the π-conjugation was extended with elongation of the linear chain. Cyclic voltammetric measurements showed characteristic reversible oxidation waves that were dependent on the number of anthracene units.
The forecast for RAC extrapolation: mostly cloudy.
Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie
2011-09-01
The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation forecast, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.
Guo, Zongxia; Wang, Kun; Yu, Ping; Wang, Xiangnan; Lan, Shusha; Sun, Kai; Yi, Yuanping; Li, Zhibo
2017-11-02
The effect of the length of linear alkyl chains substituted at imine positions on the assembly of tetrachlorinated perylene bisimides (1: PBI with -C 6 H 13 ; 2: PBI with -C 12 H 25 ) has been investigated. Solvent-induced assembly was performed in solutions of THF and methanol with varying volume ratios. Morphological (SEM, AFM, and TEM) and spectral (UV/Vis, fluorescence, FTIR, and XRD) methods were used to characterize the assembled nanostructures and the molecular arrangement in the aggregates. It was found that uniform structures could be obtained for both molecules in solutions with a high ratio of methanol. PBI 1 formed rigid nanosheets, whereas 2 assembled into longer nanostripes with a high ratio of length to width. On combining the morphological data with the spectral data, it was suggested that π-π stacking predominated in assemblies of 1, and the synergetic effect of van der Waals interactions from the long alkyl chains and π-π stacking between neighboring building blocks facilitated the growth of the long-range-ordered nanostructures of 2. By changing the linear chain length, the hierarchical assembly of PBIs modified on bay positions could be manipulated effectively. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Parabolic tapers for overmoded waveguides
Doane, J.L.
1983-11-25
A waveguide taper with a parabolic profile, in which the distance along the taper axis varies as the square of the tapered dimension, provides less mode conversion than equal length linear tapers and is easier to fabricate than other non-linear tapers.
Pei, Jiquan; Han, Steve; Liao, Haijun; Li, Tao
2014-01-22
A highly efficient and simple-to-implement Monte Carlo algorithm is proposed for the evaluation of the Rényi entanglement entropy (REE) of the quantum dimer model (QDM) at the Rokhsar-Kivelson (RK) point. It makes possible the evaluation of REE at the RK point to the thermodynamic limit for a general QDM. We apply the algorithm to a QDM defined on the triangular and the square lattice in two dimensions and the simple and the face centered cubic (fcc) lattice in three dimensions. We find the REE on all these lattices follows perfect linear scaling in the thermodynamic limit, apart from an even-odd oscillation in the case of the square lattice. We also evaluate the topological entanglement entropy (TEE) with both a subtraction and an extrapolation procedure. We find the QDMs on both the triangular and the fcc lattice exhibit robust Z2 topological order. The expected TEE of ln2 is clearly demonstrated in both cases. Our large scale simulation also proves the recently proposed extrapolation procedure in cylindrical geometry to be a highly reliable way to extract the TEE of a topologically ordered system.
Dielectric relaxation spectrum of undiluted poly(4-chlorostyrene), T≳Tg
NASA Astrophysics Data System (ADS)
Yoshihara, M.; Work, R. N.
1980-06-01
Dielectric relaxation characteristics of undiluted, atactic poly(4-chlorostyrene), P4CS, have been determined at temperatures 406 K⩽T⩽446 K from measurements made at frequencies 0.2 Hz⩽f⩽0.2 MHz. After effects of electrical conductivity are subtracted, it is found that the normalized complex dielectric constant K*=K'-i K″ can be represented quantitatively by the Havriliak-Negami (H-N) equation K*=[1+(iωτ0)1-α]-β, 0⩽α, β⩽1, except for a small, high frequency tail that appears in measurements made near the glass transition temperature, Tg. The parameter β is nearly constant, and α depends linearly on log τ0, where τ0 is a characteristic relaxation time. The parameters α and β extrapolate through values obtained from published data from P4CS solutions, and extrapolation to α=0 yields a value of τ0 which compares favorably with a published value for crankshaft motions of an equivalent isolated chain segment. These observations suggest that β may characterize effects of chain connectivity and α may describe effects of interactions of the surroundings with the chain. Experimental results are compared with alternative empirical and model-based representations of dielectric relaxation in polymers.
NASA Astrophysics Data System (ADS)
Sasaki, K.; Kikuchi, S.
2014-10-01
In this work, we compared the sticking probabilities of Cu, Zn, and Sn atoms in magnetron sputtering deposition of CZTS films. The evaluations of the sticking probabilities were based on the temporal decays of the Cu, Zn, and Sn densities in the afterglow, which were measured by laser-induced fluorescence spectroscopy. Linear relationships were found between the discharge pressure and the lifetimes of the atom densities. According to Chantry, the sticking probability is evaluated from the extrapolated lifetime at the zero pressure, which is given by 2l0 (2 - α) / (v α) with α, l0, and v being the sticking probability, the ratio between the volume and the surface area of the chamber, and the mean velocity, respectively. The ratio of the extrapolated lifetimes observed experimentally was τCu :τSn :τZn = 1 : 1 . 3 : 1 . This ratio coincides well with the ratio of the reciprocals of their mean velocities (1 /vCu : 1 /vSn : 1 /vZn = 1 . 00 : 1 . 37 : 1 . 01). Therefore, the present experimental result suggests that the sticking probabilities of Cu, Sn, and Zn are roughly the same.
Petit, Caroline; Samson, Adeline; Morita, Satoshi; Ursino, Moreno; Guedj, Jérémie; Jullien, Vincent; Comets, Emmanuelle; Zohar, Sarah
2018-06-01
The number of trials conducted and the number of patients per trial are typically small in paediatric clinical studies. This is due to ethical constraints and the complexity of the medical process for treating children. While incorporating prior knowledge from adults may be extremely valuable, this must be done carefully. In this paper, we propose a unified method for designing and analysing dose-finding trials in paediatrics, while bridging information from adults. The dose-range is calculated under three extrapolation options, linear, allometry and maturation adjustment, using adult pharmacokinetic data. To do this, it is assumed that target exposures are the same in both populations. The working model and prior distribution parameters of the dose-toxicity and dose-efficacy relationships are obtained using early-phase adult toxicity and efficacy data at several dose levels. Priors are integrated into the dose-finding process through Bayesian model selection or adaptive priors. This calibrates the model to adjust for misspecification, if the adult and pediatric data are very different. We performed a simulation study which indicates that incorporating prior adult information in this way may improve dose selection in children.
Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data
NASA Astrophysics Data System (ADS)
Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.
2017-12-01
We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board the NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms allow to improve the range resolution roughly by a factor of 3 and the Signal to Noise Ratio (SNR) by a several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in the resolution by the transmitted signal bandwidth, analogous to the Rayleigh's criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs the radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, the EMIs (ElectroMagnetic Interferences) are detected and the spectra is interpolated in order to reconstruct an interference free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture processing (SAR). We apply the proposed algorithm to simulated as well as to real radar data. We will demonstrate the effective enhancement on vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 2007, 317, 1715-1718 [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
Shang, Chao; Rice, James A.; Eberl, Dennis D.; Lin, Jar-Shyong
2003-01-01
It has been suggested that interstratified illite-smectite (I-S) minerals are composed of aggregates of fundamental particles. Many attempts have been made to measure the thickness of such fundamental particles, but each of the methods used suffers from its own limitations and uncertainties. Small-angle X-ray scattering (SAXS) can be used to measure the thickness of particles that scatter X-rays coherently. We used SAXS to study suspensions of Na-rectorite and other illites with varying proportions of smectite. The scattering intensity (I) was recorded as a function of the scattering vector, q = (4 /) sin(/2), where is the X-ray wavelength and is the scattering angle. The experimental data were treated with a direct Fourier transform to obtain the pair distance distribution function (PDDF) that was then used to determine the thickness of illite particles. The Guinier and Porod extrapolations were used to obtain the scattering intensity beyond the experimental q, and the effects of such extrapolations on the PDDF were examined. The thickness of independent rectorite particles (used as a reference mineral) is 18.3 Å. The SAXS results are compared with those obtained by X-ray diffraction peak broadening methods. It was found that the power-law exponent (α) obtained by fitting the data in the region of q = 0.1-0.6 nm-1 to the power law (I = I0q-α) is a linear function of illite particle thickness. Therefore, illite particle thickness could be predicted by the linear relationship as long as the thickness is within the limit where α <4.0.
Daily air temperature interpolated at high spatial resolution over a large mountainous region
Dodson, R.; Marks, D.
1997-01-01
Two methods are investigated for interpolating daily minimum and maximum air temperatures (Tmin and Tmax) at a 1 km spatial resolution over a large mountainous region (830 000 km2) in the U.S. Pacific Northwest. The methods were selected because of their ability to (1) account for the effect of elevation on temperature and (2) efficiently handle large volumes of data. The first method, the neutral stability algorithm (NSA), used the hydrostatic and potential temperature equations to convert measured temperatures and elevations to sea-level potential temperatures. The potential temperatures were spatially interpolated using an inverse-squared-distance algorithm and then mapped to the elevation surface of a digital elevation model (DEM). The second method, linear lapse rate adjustment (LLRA), involved the same basic procedure as the NSA, but used a constant linear lapse rate instead of the potential temperature equation. Cross-validation analyses were performed using the NSA and LLRA methods to interpolate Tmin and Tmax each day for the 1990 water year, and the methods were evaluated based on mean annual interpolation error (IE). The NSA method showed considerable bias for sites associated with vertical extrapolation. A correction based on climate station/grid cell elevation differences was developed and found to successfully remove the bias. The LLRA method was tested using 3 lapse rates, none of which produced a serious extrapolation bias. The bias-adjusted NSA and the 3 LLRA methods produced almost identical levels of accuracy (mean absolute errors between 1.2 and 1.3??C), and produced very similar temperature surfaces based on image difference statistics. In terms of accuracy, speed, and ease of implementation, LLRA was chosen as the best of the methods tested.
Improving CTIPe neutral density response and recovery during geomagnetic storms
NASA Astrophysics Data System (ADS)
Fedrizzi, M.; Fuller-Rowell, T. J.; Codrescu, M.; Mlynczak, M. G.; Marsh, D. R.
2013-12-01
The temperature of the Earth's thermosphere can be substantially increased during geomagnetic storms mainly due to high-latitude Joule heating induced by magnetospheric convection and auroral particle precipitation. Thermospheric heating increases atmospheric density and the drag on low-Earth orbiting satellites. The main cooling mechanism controlling the recovery of neutral temperature and density following geomagnetic activity is infrared emission from nitric oxide (NO) at 5.3 micrometers. NO is produced by both solar and auroral activity, the first due to solar EUV and X-rays the second due to dissociation of N2 by particle precipitation, and has a typical lifetime of 12 to 24 hours in the mid and lower thermosphere. NO cooling in the thermosphere peaks between 150 and 200 km altitude. In this study, a global, three-dimensional, time-dependent, non-linear coupled model of the thermosphere, ionosphere, plasmasphere, and electrodynamics (CTIPe) is used to simulate the response and recovery timescales of the upper atmosphere following geomagnetic activity. CTIPe uses time-dependent estimates of NO obtained from Marsh et al. [2004] empirical model based on Student Nitric Oxide Explorer (SNOE) satellite data rather than solving for minor species photochemistry self-consistently. This empirical model is based solely on SNOE observations, when Kp rarely exceeded 5. During conditions between Kp 5 and 9, a linear extrapolation has been used. In order to improve the accuracy of the extrapolation algorithm, CTIPe model estimates of global NO cooling have been compared with the NASA TIMED/SABER satellite measurements of radiative power at 5.3 micrometers. The comparisons have enabled improvement in the timescale for neutral density response and recovery during geomagnetic storms. CTIPe neutral density response and recovery rates are verified by comparison CHAMP satellite observations.
Murphy, A B
2004-01-01
A number of assessments of electron temperatures in atmospheric-pressure arc plasmas using Thomson scattering of laser light have recently been published. However, in this method, the electron temperature is perturbed due to strong heating of the electrons by the incident laser beam. This heating was taken into account by measuring the electron temperature as a function of the laser pulse energy, and linearly extrapolating the results to zero pulse energy to obtain an unperturbed electron temperature. In the present paper, calculations show that the laser heating process has a highly nonlinear dependence on laser power, and that the usual linear extrapolation leads to an overestimate of the electron temperature, typically by 5000 K. The nonlinearity occurs due to the strong dependence on electron temperature of the absorption of laser energy and of the collisional and radiative cooling of the heated electrons. There are further problems in deriving accurate electron temperatures from laser scattering due to necessary averages that have to be made over the duration of the laser pulse and over the finite volume from which laser light is scattered. These problems are particularly acute in measurements in which the laser beam is defocused in order to minimize laser heating; this can lead to the derivation of electron temperatures that are significantly greater than those existing anywhere in the scattering volume. It was concluded from the earlier Thomson scattering measurements that there were significant deviations from equilibrium between the electron and heavy-particle temperatures at the center of arc plasmas of industrial interest. The present calculations indicate that such deviations are only of the order of 1000 K in 20 000 K, so that the usual approximation that arc plasmas are approximately in local thermodynamic equilibrium still applies.
Statistical security for Social Security.
Soneji, Samir; King, Gary
2012-08-01
The financial viability of Social Security, the single largest U.S. government program, depends on accurate forecasts of the solvency of its intergenerational trust fund. We begin by detailing information necessary for replicating the Social Security Administration's (SSA's) forecasting procedures, which until now has been unavailable in the public domain. We then offer a way to improve the quality of these procedures via age- and sex-specific mortality forecasts. The most recent SSA mortality forecasts were based on the best available technology at the time, which was a combination of linear extrapolation and qualitative judgments. Unfortunately, linear extrapolation excludes known risk factors and is inconsistent with long-standing demographic patterns, such as the smoothness of age profiles. Modern statistical methods typically outperform even the best qualitative judgments in these contexts. We show how to use such methods, enabling researchers to forecast using far more information, such as the known risk factors of smoking and obesity and known demographic patterns. Including this extra information makes a substantial difference. For example, by improving only mortality forecasting methods, we predict three fewer years of net surplus, $730 billion less in Social Security Trust Funds, and program costs that are 0.66% greater for projected taxable payroll by 2031 compared with SSA projections. More important than specific numerical estimates are the advantages of transparency, replicability, reduction of uncertainty, and what may be the resulting lower vulnerability to the politicization of program forecasts. In addition, by offering with this article software and detailed replication information, we hope to marshal the efforts of the research community to include ever more informative inputs and to continue to reduce uncertainties in Social Security forecasts.
Spacing and length of passing sidings and the incremental capacity of single track.
DOT National Transportation Integrated Search
2016-02-18
The objective of this study is to evaluate the effect of initial siding spacing and distribution of siding length on the incremental capacity of infrastructure investments on single-track railway lines. Previous research has shown a linear reduction ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Xiaoxiao; Tian, Jingxuan; Wen, Weijia, E-mail: phwen@ust.hk
2016-04-18
We report a metasurface for focusing reflected ultrasonic waves over a wide frequency band of 0.45–0.55 MHz. The broadband focusing effect of the reflective metasurface is studied numerically and then confirmed experimentally using near-field scanning techniques. The focusing mechanism can be attributed to the hyperboloidal reflection phase profile imposed by different depths of concentric grooves on the metasurface. In particular, the focal lengths of the reflective metasurface are extracted from simulations and experiments, and both exhibit good linear dependence on frequency over the considered frequency band. The proposed broadband reflective metasurface with tunable focal length has potential applications in the broadmore » field of ultrasonics, such as ultrasonic tomographic imaging, high intensity focused ultrasound treatment, etc.« less
A proof of the Woodward-Lawson sampling method for a finite linear array
NASA Technical Reports Server (NTRS)
Somers, Gary A.
1993-01-01
An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
NASA Technical Reports Server (NTRS)
Holloway, Sidney E., III; Crossley, Edward A.; Miller, James B.; Jones, Irby W.; Davis, C. Calvin; Behun, Vaughn D.; Goodrich, Lewis R., Sr.
1995-01-01
Linear proof-mass actuator (LPMA) is friction-driven linear mass actuator capable of applying controlled force to structure in outer space to damp out oscillations. Capable of high accelerations and provides smooth, bidirectional travel of mass. Design eliminates gears and belts. LPMA strong enough to be used terrestrially where linear actuators needed to excite or damp out oscillations. High flexibility designed into LPMA by varying size of motors, mass, and length of stroke, and by modifying control software.
DNA Sequencing Using capillary Electrophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. Barry Karger
2011-05-09
The overall goal of this program was to develop capillary electrophoresis as the tool to be used to sequence for the first time the Human Genome. Our program was part of the Human Genome Project. In this work, we were highly successful and the replaceable polymer we developed, linear polyacrylamide, was used by the DOE sequencing lab in California to sequence a significant portion of the human genome using the MegaBase multiple capillary array electrophoresis instrument. In this final report, we summarize our efforts and success. We began our work by separating by capillary electrophoresis double strand oligonucleotides using cross-linkedmore » polyacrylamide gels in fused silica capillaries. This work showed the potential of the methodology. However, preparation of such cross-linked gel capillaries was difficult with poor reproducibility, and even more important, the columns were not very stable. We improved stability by using non-cross linked linear polyacrylamide. Here, the entangled linear chains could move when osmotic pressure (e.g. sample injection) was imposed on the polymer matrix. This relaxation of the polymer dissipated the stress in the column. Our next advance was to use significantly lower concentrations of the linear polyacrylamide that the polymer could be automatically blown out after each run and replaced with fresh linear polymer solution. In this way, a new column was available for each analytical run. Finally, while testing many linear polymers, we selected linear polyacrylamide as the best matrix as it was the most hydrophilic polymer available. Under our DOE program, we demonstrated initially the success of the linear polyacrylamide to separate double strand DNA. We note that the method is used even today to assay purity of double stranded DNA fragments. Our focus, of course, was on the separation of single stranded DNA for sequencing purposes. In one paper, we demonstrated the success of our approach in sequencing up to 500 bases. Other application papers of sequencing up to this level were also published in the mid 1990's. A major interest of the sequencing community has always been read length. The longer the sequence read per run the more efficient the process as well as the ability to read repeat sequences. We therefore devoted a great deal of time to studying the factors influencing read length in capillary electrophoresis, including polymer type and molecule weight, capillary column temperature, applied electric field, etc. In our initial optimization, we were able to demonstrate, for the first time, the sequencing of over 1000 bases with 90% accuracy. The run required 80 minutes for separation. Sequencing of 1000 bases per column was next demonstrated on a multiple capillary instrument. Our studies revealed that linear polyacrylamide produced the longest read lengths because the hydrophilic single strand DNA had minimal interaction with the very hydrophilic linear polyacrylamide. Any interaction of the DNA with the polymer would lead to broader peaks and lower read length. Another important parameter was the molecular weight of the linear chains. High molecular weight (> 1 MDA) was important to allow the long single strand DNA to reptate through the entangled polymer matrix. In an important paper, we showed an inverse emulsion method to prepare reproducibility linear polyacrylamide polymer with an average MWT of 9MDa. This approach was used in the polymer for sequencing the human genome. Another critical factor in the successful use of capillary electrophoresis for sequencing was the sample preparation method. In the Sanger sequencing reaction, high concentration of salts and dideoxynucleotide remained. Since the sample was introduced to the capillary column by electrokinetic injection, these salt ions would be favorably injected into the column over the sequencing fragments, thus reducing the signal for longer fragments and hence reading read length. In two papers, we examined the role of individual components from the sequencing reaction and then developed a protocol to reduce the deleterious salts. We demonstrated a robust method for achieving long read length DNA sequencing. Continuing our advances, we next demonstrated the achievement of over 1000 bases in less than one hour with a base calling accuracy of between 98 and 99%. In this work, we implemented energy transfer dyes which allowed for cleaner differentiation of the 4 dye labeled terminal nucleotides. In addition, we developed improved base calling software to help read sequencing when the separation was only minimal as occurs at long read lengths. Another critical parameter we studied was column temperature. We demonstrated that read lengths improved as the column temperature was increased from room temperature to 60 C or 70 C. The higher temperature relaxed the DNA chains under the influence of the high electric field.« less
16 CFR 500.12 - Measurement of commodities by length and width, how expressed.
Code of Federal Regulations, 2013 CFR
2013-01-01
... square foot (929 cm2) be expressed in terms of length and width in linear measure. The customary inch... of 1 square foot (929 cm2) or more, but less than 4 square feet (37.1 dm2), be expressed in terms of... in square inches with length and width expressed in the largest whole unit (yard or foot) with any...
16 CFR 500.12 - Measurement of commodities by length and width, how expressed.
Code of Federal Regulations, 2011 CFR
2011-01-01
... square foot (929 cm2) be expressed in terms of length and width in linear measure. The customary inch... of 1 square foot (929 cm2) or more, but less than 4 square feet (37.1 dm2), be expressed in terms of... in square inches with length and width expressed in the largest whole unit (yard or foot) with any...
16 CFR 500.12 - Measurement of commodities by length and width, how expressed.
Code of Federal Regulations, 2014 CFR
2014-01-01
... square foot (929 cm2) be expressed in terms of length and width in linear measure. The customary inch... of 1 square foot (929 cm2) or more, but less than 4 square feet (37.1 dm2), be expressed in terms of... in square inches with length and width expressed in the largest whole unit (yard or foot) with any...
16 CFR 500.12 - Measurement of commodities by length and width, how expressed.
Code of Federal Regulations, 2012 CFR
2012-01-01
... square foot (929 cm2) be expressed in terms of length and width in linear measure. The customary inch... of 1 square foot (929 cm2) or more, but less than 4 square feet (37.1 dm2), be expressed in terms of... in square inches with length and width expressed in the largest whole unit (yard or foot) with any...
Linear growth trajectories in Zimbabwean infants12
Gough, Ethan K; Moodie, Erica EM; Prendergast, Andrew J; Ntozini, Robert; Moulton, Lawrence H; Humphrey, Jean H; Manges, Amee R
2016-01-01
Background: Undernutrition in early life underlies 45% of child deaths globally. Stunting malnutrition (suboptimal linear growth) also has long-term negative effects on childhood development. Linear growth deficits accrue in the first 1000 d of life. Understanding the patterns and timing of linear growth faltering or recovery during this period is critical to inform interventions to improve infant nutritional status. Objective: We aimed to identify the pattern and determinants of linear growth trajectories from birth through 24 mo of age in a cohort of Zimbabwean infants. Design: We performed a secondary analysis of longitudinal data from a subset of 3338 HIV-unexposed infants in the Zimbabwe Vitamin A for Mothers and Babies trial. We used k-means clustering for longitudinal data to identify linear growth trajectories and multinomial logistic regression to identify covariates that were associated with each trajectory group. Results: For the entire population, the mean length-for-age z score declined from −0.6 to −1.4 between birth and 24 mo of age. Within the population, 4 growth patterns were identified that were each characterized by worsening linear growth restriction but varied in the timing and severity of growth declines. In our multivariable model, 1-U increments in maternal height and education and infant birth weight and length were associated with greater relative odds of membership in the least–growth restricted groups (A and B) and reduced odds of membership in the more–growth restricted groups (C and D). Male infant sex was associated with reduced odds of membership in groups A and B but with increased odds of membership in groups C and D. Conclusion: In this population, all children were experiencing growth restriction but differences in magnitude were influenced by maternal height and education and infant sex, birth weight, and birth length, which suggest that key determinants of linear growth may already be established by the time of birth. This trial was registered at clinicaltrials.gov as NCT00198718. PMID:27806980
NASA Astrophysics Data System (ADS)
Eleftheriou, E.; Karatasos, K.
2012-10-01
Models of mixtures of peripherally charged dendrimers with oppositely charged linear polyelectrolytes in the presence of explicit solvent are studied by means of molecular dynamics simulations. Under the influence of varying strength of electrostatic interactions, these systems appear to form dynamically arrested film-like interconnected structures in the polymer-rich phase. Acting like a pseudo-thermodynamic inverse temperature, the increase of the strength of the Coulombic interactions drive the polymeric constituents of the mixture to a gradual dynamic freezing-in. The timescale of the average density fluctuations of the formed complexes initially increases in the weak electrostatic regime reaching a finite limit as the strength of electrostatic interactions grow. Although the models are overall electrically neutral, during this process the dendrimer/linear complexes develop a polar character with an excess charge mainly close to the periphery of the dendrimers. The morphological characteristics of the resulted pattern are found to depend on the size of the polymer chains on account of the distinct conformational features assumed by the complexed linear polyelectrolytes of different length. In addition, the length of the polymer chain appears to affect the dynamics of the counterions, thus affecting the ionic transport properties of the system. It appears, therefore, that the strength of electrostatic interactions together with the length of the linear polyelectrolytes are parameters to which these systems are particularly responsive, offering thus the possibility for a better control of the resulted structure and the electric properties of these soft-colloidal systems.
Jürimäe, Jaak; Haljaste, Kaja; Cicchella, Antonio; Lätt, Evelin; Purge, Priit; Leppik, Aire; Jürimäe, Toivo
2007-02-01
The purpose of this study was to examine the influence of the energy cost of swimming, body composition, and technical parameters on swimming performance in young swimmers. Twenty-nine swimmers, 15 prepubertal (11.9 +/- 0.3 years; Tanner Stages 1-2) and 14 pubertal (14.3 +/- 1.4 years; Tanner Stages 3-4) boys participated in the study. The energy cost of swimming (Cs) and stroking parameters were assessed over maximal 400-m front-crawl swimming in a 25-m swimming pool. The backward extrapolation technique was used to evaluate peak oxygen consumption (VO2peak). A stroke index (SI; m2 . s(-1) . cycles(-1)) was calculated by multiplying the swimming speed by the stroke length. VO2peak results were compared with VO2peak test in the laboratory (bicycle, 2.86 +/- 0.74 L/min, vs. in water, 2.53 +/- 0.50 L/min; R2 = .713; p = .0001). Stepwise-regression analyses revealed that SI (R2 = .898), in-water VO2peak (R2 = .358), and arm span (R2 = .454) were the best predictors of swimming performance. The backward-extrapolation method could be used to assess VO2peak in young swimmers. SI, arm span, and VO2peak appear to be the major determinants of front-crawl swimming performance in young swimmers.
Halo effective field theory constrains the solar 7Be + p → 8B + γ rate
Zhang, Xilin; Nollett, Kenneth M.; Phillips, D. R.
2015-11-06
In this study, we report an improved low-energy extrapolation of the cross section for the process 7Be(p,γ) 8B, which determines the 8B neutrino flux from the Sun. Our extrapolant is derived from Halo Effective Field Theory (EFT) at next-to-leading order. We apply Bayesian methods to determine the EFT parameters and the low-energy S-factor, using measured cross sections and scattering lengths as inputs. Asymptotic normalization coefficients of 8B are tightly constrained by existing radiative capture data, and contributions to the cross section beyond external direct capture are detected in the data at E < 0.5 MeV. Most importantly, the S-factor atmore » zero energy is constrained to be S(0) = 21.3 ± 0.7 eV b, which is an uncertainty smaller by a factor of two than previously recommended. That recommendation was based on the full range for S(0) obtained among a discrete set of models judged to be reasonable. In contrast, Halo EFT subsumes all models into a controlled low-energy approximant, where they are characterized by nine parameters at next-to-leading order. These are fit to data, and marginalized over via Monte Carlo integration to produce the improved prediction for S(E).« less
Large-scale exact diagonalizations reveal low-momentum scales of nuclei
NASA Astrophysics Data System (ADS)
Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.
2018-03-01
Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 1010 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for 6Li in model spaces up to Nmax=22 and to reveal the 4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that is not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
Properties of knotted ring polymers. I. Equilibrium dimensions.
Mansfield, Marc L; Douglas, Jack F
2010-07-28
We report calculations on three classes of knotted ring polymers: (1) simple-cubic lattice self-avoiding rings (SARs), (2) "true" theta-state rings, i.e., SARs generated on the simple-cubic lattice with an attractive nearest-neighbor contact potential (theta-SARs), and (3) ideal, Gaussian rings. Extrapolations to large polymerization index N imply knot localization in all three classes of chains. Extrapolations of our data are also consistent with conjectures found in the literature which state that (1) R(g)-->AN(nu) asymptotically for ensembles of random knots restricted to any particular knot state, including the unknot; (2) A is universal across knot types for any given class of flexible chains; and (3) nu is equal to the standard self-avoiding walk (SAW) exponent (congruent with 0.588) for all three classes of chains (SARs, theta-SARs, and ideal rings). However, current computer technology is inadequate to directly sample the asymptotic domain, so that we remain in a crossover scaling regime for all accessible values of N. We also observe that R(g) approximately p(-0.27), where p is the "rope length" of the maximally inflated knot. This scaling relation holds in the crossover regime, but we argue that it is unlikely to extend into the asymptotic scaling regime where knots become localized.
Braga, Marina Vianna; Pinto, Zeneida Teixeira; de Carvalho Queiroz, Margareth Maria; Matsumoto, Nana; Blomquist, Gary James
2013-01-01
The external surface of all insects is covered by a species-specific complex mixture of highly stable, very long chain cuticular hydrocarbons (CHCs). Gas chromatography coupled to mass spectrometry was used to identify CHCs from four species of Sarcophagidae, Peckia (Peckia) chrysostoma, Peckia (Pattonella) intermutans, Sarcophaga (Liopygia) ruficornis and Sarcodexia lambens. The identified CHCs were mostly a mixture of n-alkanes, monomethylalkanes and dimethylalkanes with linear chain lengths varying from 23 to 33 carbons. Only two alkenes were found in all four species. S. lambens had a composition of CHCs with linear chain lengths varying from C23 to C33, while the other three species linear chain lengths from 24 to 31 carbons. n-Heptacosane, n-nonacosane and 3-methylnonacosane, n-triacontane and n-hentriacontane occurred in all four species. The results show that these hydrocarbon profiles may be used for the taxonomic differentiation of insect species and are a useful additional tool for taxonomic classification, especially when only parts of the insect specimen are available. PMID:23932943
Function approximation and documentation of sampling data using artificial neural networks.
Zhang, Wenjun; Barrion, Albert
2006-11-01
Biodiversity studies in ecology often begin with the fitting and documentation of sampling data. This study is conducted to make function approximation on sampling data and to document the sampling information using artificial neural network algorithms, based on the invertebrate data sampled in the irrigated rice field. Three types of sampling data, i.e., the curve species richness vs. the sample size, the curve rarefaction, and the curve mean abundance of newly sampled species vs.the sample size, are fitted and documented using BP (Backpropagation) network and RBF (Radial Basis Function) network. As the comparisons, The Arrhenius model, and rarefaction model, and power function are tested for their ability to fit these data. The results show that the BP network and RBF network fit the data better than these models with smaller errors. BP network and RBF network can fit non-linear functions (sampling data) with specified accuracy and don't require mathematical assumptions. In addition to the interpolation, BP network is used to extrapolate the functions and the asymptote of the sampling data can be drawn. BP network cost a longer time to train the network and the results are always less stable compared to the RBF network. RBF network require more neurons to fit functions and generally it may not be used to extrapolate the functions. The mathematical function for sampling data can be exactly fitted using artificial neural network algorithms by adjusting the desired accuracy and maximum iterations. The total numbers of functional species of invertebrates in the tropical irrigated rice field are extrapolated as 140 to 149 using trained BP network, which are similar to the observed richness.
Photoelectron spectroscopy of color centers in negatively charged cesium iodide nanocrystals
NASA Astrophysics Data System (ADS)
Sarkas, Harry W.; Kidder, Linda H.; Bowen, Kit H.
1995-01-01
We present the photoelectron spectra of negatively charged cesium iodide nanocrystals recorded using 2.540 eV photons. The species examined were produced using an inert gas condensation cluster ion source, and they ranged in size from (CsI)-n=13 to nanocrystal anions comprised of 330 atoms. Nanocrystals showing two distinct types of photoemission behavior were observed. For (CsI)-n=13 and (CsI)-n=36-165, a plot of cluster anion photodetachment threshold energies vs n-1/3 gives a straight line extrapolating (at n-1/3=0, i.e., n=∞) to 2.2 eV, the photoelectric threshold energy for F centers in bulk cesium iodide. The linear extrapolation of the cluster anion data to the corresponding bulk property implies that the electron localization in these gas-phase nanocrystals is qualitatively similar to that of F centers in extended alkali halide crystals. These negatively charged cesium iodide nanocrystals are thus shown to support embryonic forms of F centers, which mature with increasing cluster size toward condensed phase impurity centers. Under an alternative set of source conditions, nanocrystals were produced which showed significantly lower photodetachment thresholds than the aforementioned F-center cluster anions. For these species, containing 83-131 atoms, a plot of their cluster anion photodetachment threshold energies versus n-1/3 gives a straight line which extrapolates to 1.4 eV. This value is in accord with the expected photoelectric threshold energy for F' centers in bulk cesium iodide, i.e., color centers with two excess electrons in a single defect site. These nanocrystals are interpreted to be the embryonic F'-center containing species, Cs(CsI)-n=41-65.
Apparent-Strain Correction for Combined Thermal and Mechanical Testing
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; O'Neil, Teresa L.
2007-01-01
Combined thermal and mechanical testing requires that the total strain be corrected for the coefficient of thermal expansion mismatch between the strain gage and the specimen or apparent strain when the temperature varies while a mechanical load is being applied. Collecting data for an apparent strain test becomes problematic as the specimen size increases. If the test specimen cannot be placed in a variable temperature test chamber to generate apparent strain data with no mechanical loads, coupons can be used to generate the required data. The coupons, however, must have the same strain gage type, coefficient of thermal expansion, and constraints as the specimen to be useful. Obtaining apparent-strain data at temperatures lower than -320 F is challenging due to the difficulty to maintain steady-state and uniform temperatures on a given specimen. Equations to correct for apparent strain in a real-time fashion and data from apparent-strain tests for composite and metallic specimens over a temperature range from -450 F to +250 F are presented in this paper. Three approaches to extrapolate apparent-strain data from -320 F to -430 F are presented and compared to the measured apparent-strain data. The first two approaches use a subset of the apparent-strain curves between -320 F and 100 F to extrapolate to -430 F, while the third approach extrapolates the apparent-strain curve over the temperature range of -320 F to +250 F to -430 F. The first two approaches are superior to the third approach but the use of either of the first two approaches is contingent upon the degree of non-linearity of the apparent-strain curve.
Lampón, Natalia; Tutor-Crespo, María J; Romero, Rafael; Tutor, José C
2011-07-01
Recently, the use of the truncated area under the curve from 0 to 2 h (AUC(0-2)) of mycophenolic acid (MPA) has been proposed for therapeutic monitoring in liver transplant recipients. The aim of our study was the evaluation of the clinical usefulness of truncated AUC(0-2) in kidney transplant patients. Plasma MPA was measured in samples taken before the morning dose of mycophenolate mofetil, and one-half and 2 h post-dose, completing 63 MPA concentration-time profiles from 40 adult kidney transplant recipients. The AUC from 0 to 12 h (AUC(0-12)) was calculated using the validated algorithm of Pawinski et al. The truncated AUC(0-2) was calculated using the linear trapezoidal rule, and extrapolated to 0-12 h (trapezoidal extrapolated AUC(0-12)) as previously described. Algorithm calculated and trapezoidal extrapolated AUC(0-12) values showed high correlation (r=0.995) and acceptable dispersion (ma68=0.71 μg·h/mL), median prediction error (6.6%) and median absolute prediction error (12.6%). The truncated AUC(0-2) had acceptable diagnostic efficiency (87%) in the classification of subtherapeutic, therapeutic or supratherapeutic values with respect to AUC(0-12). However, due to the high inter-individual variation of the drug absorption-rate, the dispersion between both pharmacokinetic variables (ma68=6.9 μg·h/mL) was unacceptable. The substantial dispersion between truncated AUC(0-2) and AUC(0-12) values may be a serious objection for the routine use of MPA AUC(0-2) in clinical practice.
Solid H2 in the interstellar medium
NASA Astrophysics Data System (ADS)
Füglistaler, A.; Pfenniger, D.
2018-06-01
Context. Condensation of H2 in the interstellar medium (ISM) has long been seen as a possibility, either by deposition on dust grains or thanks to a phase transition combined with self-gravity. H2 condensation might explain the observed low efficiency of star formation and might help to hide baryons in spiral galaxies. Aims: Our aim is to quantify the solid fraction of H2 in the ISM due to a phase transition including self-gravity for different densities and temperatures in order to use the results in more complex simulations of the ISM as subgrid physics. Methods: We used molecular dynamics simulations of fluids at different temperatures and densities to study the formation of solids. Once the simulations reached a steady state, we calculated the solid mass fraction, energy increase, and timescales. By determining the power laws measured over several orders of magnitude, we extrapolated to lower densities the higher density fluids that can be simulated with current computers. Results: The solid fraction and energy increase of fluids in a phase transition are above 0.1 and do not follow a power law. Fluids out of a phase transition are still forming a small amount of solids due to chance encounters of molecules. The solid mass fraction and energy increase of these fluids are linearly dependent on density and can easily be extrapolated. The timescale is below one second, the condensation can be considered instantaneous. Conclusions: The presence of solid H2 grains has important dynamic implications on the ISM as they may be the building blocks for larger solid bodies when gravity is included. We provide the solid mass fraction, energy increase, and timescales for high density fluids and extrapolation laws for lower densities.
Can we detect a nonlinear response to temperature in European plant phenology?
Jochner, Susanne; Sparks, Tim H; Laube, Julia; Menzel, Annette
2016-10-01
Over a large temperature range, the statistical association between spring phenology and temperature is often regarded and treated as a linear function. There are suggestions that a sigmoidal relationship with definite upper and lower limits to leaf unfolding and flowering onset dates might be more realistic. We utilised European plant phenological records provided by the European phenology database PEP725 and gridded monthly mean temperature data for 1951-2012 calculated from the ENSEMBLES data set E-OBS (version 7.0). We analysed 568,456 observations of ten spring flowering or leafing phenophases derived from 3657 stations in 22 European countries in order to detect possible nonlinear responses to temperature. Linear response rates averaged for all stations ranged between -7.7 (flowering of hazel) and -2.7 days °C -1 (leaf unfolding of beech and oak). A lower sensitivity at the cooler end of the temperature range was detected for most phenophases. However, a similar lower sensitivity at the warmer end was not that evident. For only ∼14 % of the station time series (where a comparison between linear and nonlinear model was possible), nonlinear models described the relationship significantly better than linear models. Although in most cases simple linear models might be still sufficient to predict future changes, this linear relationship between phenology and temperature might not be appropriate when incorporating phenological data of very cold (and possibly very warm) environments. For these cases, extrapolations on the basis of linear models would introduce uncertainty in expected ecosystem changes.
The column strength of aluminum alloy 75S-T extruded shapes
NASA Technical Reports Server (NTRS)
Holt, Marshall; Leary, J R
1946-01-01
Because the tensile strength and tensile yield strength of alloy 75S-T are appreciably higher than those of the materials used in the tests leading to the use of the straight-line column curve, it appeared advisable to establish the curve of column strength by test rather than by extrapolation of relations determined empirically in the earlier tests. The object of this investigation was to determine the curve of column strength for extruded aluminum alloy 75S-T. In addition to three extruded shapes, a rolled-and-drawn round rod was included. Specimens of various lengths covering the range of effective slenderness ratios up to about 100 were tested.
NASA Technical Reports Server (NTRS)
Arenstorf, Norbert S.; Jordan, Harry F.
1987-01-01
A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth are presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree structured barriers show good performance when synchronizing fixed length work, while linear self-scheduled barriers show better performance when synchronizing fixed length work with an imbedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen processor Flex/32 shared memory multiprocessor, that support these conclusions are detailed.
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as “multiple object tracking,” observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position, and one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations compared to a model that does not at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
Tilt/Tip/Piston Manipulator with Base-Mounted Actuators
NASA Technical Reports Server (NTRS)
Tahmasebi, Farhad
2006-01-01
A proposed three-degree-of-freedom (tilt/tip/piston) manipulator, suitable for aligning an optical or mechanical component, would offer several advantages over prior such manipulators: Unlike in some other manipulators, no actuator would support the weight of another actuator: All of the actuators would be mounted on a base. Hence, there would be less manipulated weight. The basic geometry of the manipulator would afford mechanical advantage: that is, actuator motions would be larger than the motions they produce in the manipulated object. Mechanical advantage inherently increases the accuracy and resolution of manipulation. Unlike in some other manipulators, it would not be necessary to route power and/or data lines through manipulator joints. The proposed manipulator (see figure) would include three prismatic actuators (T1N1, T2N2, and T3N3) mounted on the base and operating in the same plane. Examples of suitable prismatic actuators include lead-screw mechanisms, linear hydraulic motors, piezoelectric linear drives, inchworm-movement linear stepping motors, and linear flexure drives. The actuators would control the lengths of links R1T1, R2T2, and R3T3. Three spherical joints (P1, P2, and P3) would be located at the corners of an equilateral triangle of side length q on the platform holding the object to be manipulated. Three inextensible limbs (R1P1, R2P2, and R3P3) having length r would connect the spherical joints on the platform to revolute joints (R1, R2, and R3) at the ends of the actuator-controlled links R1T1, R2T2, and R3T3. By varying the lengths of these links, one could control the tilt, tip, and piston coordinates of the platform. Closed-form equations for direct or forward kinematics of the manipulator (given the lengths of the variable links, find the tilt, tip, and piston coordinates) have been derived. The equations of inverse kinematics (find the variable link lengths needed to obtain the desired tilt, tip, and piston coordinates) have also been derived.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L{sup α} two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems(e.g.it is slower for molecules with polar bonds and/ormore » second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol{sup –1}. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol{sup –1}.« less
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems(e.g.it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
NASA Astrophysics Data System (ADS)
Santos, J. T.; Holz, T.; Fernandes, A. J. S.; Costa, F. M.; Chu, V.; Conde, J. P.
2015-02-01
Diamond-based microelectromechanical resonators have the potential of enhanced performance due to the chemical inertness of the diamond structural layer and its high Young’s modulus, high wear resistance, low thermal expansion coefficient, and very high thermal conductivity. In this work, the resonance frequency and quality factor of MEMS resonators based on nanocrystalline diamond films are characterized under different air pressures. The dynamic behavior of 50-300 μm long linear bridges and double ended tuning forks, with resonance frequencies between 0.5 and 15 MHz and quality factors as high as 50 000 are described as a function of measurement pressure from high vacuum(~10 mTorr) up to atmospheric conditions. The resonance frequencies and quality factors in vacuum show good agreement with the theoretical models including anchor and thermoelastic dissipation (TED). The Young’s moduli for nanocrystalline diamond films extrapolated from experimental data are between 840-920 GPa. The critical pressure values, at which the quality factor starts decreasing due to dissipation in air, are dependent on the resonator length. Longer structures, with quality factors limited by TED and lower resonance frequencies, have low critical pressures, of the order of 1-10 Torr and go from an intrinsic dissipation, to a molecular dissipation regime and finally to a region of viscous dissipation. Shorter resonators, with higher resonance frequencies and quality factors limited by anchor losses, have higher critical pressures, some higher than atmospheric pressure, and enter directly into the viscous dissipation regime from the intrinsic region.
Marn, Nina; Klanjscek, Tin; Stokes, Lesley; Jusup, Marko
2015-01-01
Sea turtles face threats globally and are protected by national and international laws. Allometry and scaling models greatly aid sea turtle conservation and research, and help to better understand the biology of sea turtles. Scaling, however, may differ between regions and/or life stages. We analyze differences between (i) two different regional subsets and (ii) three different life stage subsets of the western North Atlantic loggerhead turtles by comparing the relative growth of body width and depth in relation to body length, and discuss the implications. Results suggest that the differences between scaling relationships of different regional subsets are negligible, and models fitted on data from one region of the western North Atlantic can safely be used on data for the same life stage from another North Atlantic region. On the other hand, using models fitted on data for one life stage to describe other life stages is not recommended if accuracy is of paramount importance. In particular, young loggerhead turtles that have not recruited to neritic habitats should be studied and modeled separately whenever practical, while neritic juveniles and adults can be modeled together as one group. Even though morphometric scaling varies among life stages, a common model for all life stages can be used as a general description of scaling, and assuming isometric growth as a simplification is justified. In addition to linear models traditionally used for scaling on log-log axes, we test the performance of a saturating (curvilinear) model. The saturating model is statistically preferred in some cases, but the accuracy gained by the saturating model is marginal.
Johnson, Zachary C.; Snyder, Craig D.; Hitt, Nathaniel P.
2017-01-01
Headwater stream responses to climate change will depend in part on groundwater‐surface water exchanges. We used linear modeling techniques to partition likely effects of shallow groundwater seepage and air temperature on stream temperatures for 79 sites in nine focal watersheds using hourly air and water temperature measurements collected during summer months from 2012 to 2015 in Shenandoah National Park, Virginia, USA. Shallow groundwater effects exhibited more variation within watersheds than between them, indicating the importance of reach‐scale assessments and the limited capacity to extrapolate upstream groundwater influences from downstream measurements. Boosted regression tree (BRT) models revealed intricate interactions among geomorphological landform features (stream slope, elevation, network length, contributing area, and channel confinement) and seasonal precipitation patterns (winter, spring, and summer months) that together were robust predictors of spatial and temporal variation in groundwater influence on stream temperatures. The final BRT model performed well for training data and cross‐validated samples (correlation = 0.984 and 0.760, respectively). Geomorphological and precipitation predictors of groundwater influence varied in their importance between watersheds, suggesting differences in spatial and temporal controls of recharge dynamics and the depth of the groundwater source. We demonstrate an application of the final BRT model to predict groundwater effects from landform and precipitation covariates at 1075 new sites distributed at 100 m increments within focal watersheds. Our study provides a framework to estimate effects of groundwater seepage on stream temperature in unsampled locations. We discuss applications for climate change research to account for groundwater‐surface water interactions when projecting future thermal thresholds for stream biota.
NASA Astrophysics Data System (ADS)
Johnson, Zachary C.; Snyder, Craig D.; Hitt, Nathaniel P.
2017-07-01
Headwater stream responses to climate change will depend in part on groundwater-surface water exchanges. We used linear modeling techniques to partition likely effects of shallow groundwater seepage and air temperature on stream temperatures for 79 sites in nine focal watersheds using hourly air and water temperature measurements collected during summer months from 2012 to 2015 in Shenandoah National Park, Virginia, USA. Shallow groundwater effects exhibited more variation within watersheds than between them, indicating the importance of reach-scale assessments and the limited capacity to extrapolate upstream groundwater influences from downstream measurements. Boosted regression tree (BRT) models revealed intricate interactions among geomorphological landform features (stream slope, elevation, network length, contributing area, and channel confinement) and seasonal precipitation patterns (winter, spring, and summer months) that together were robust predictors of spatial and temporal variation in groundwater influence on stream temperatures. The final BRT model performed well for training data and cross-validated samples (correlation = 0.984 and 0.760, respectively). Geomorphological and precipitation predictors of groundwater influence varied in their importance between watersheds, suggesting differences in spatial and temporal controls of recharge dynamics and the depth of the groundwater source. We demonstrate an application of the final BRT model to predict groundwater effects from landform and precipitation covariates at 1075 new sites distributed at 100 m increments within focal watersheds. Our study provides a framework to estimate effects of groundwater seepage on stream temperature in unsampled locations. We discuss applications for climate change research to account for groundwater-surface water interactions when projecting future thermal thresholds for stream biota.
Is there a correlation between right bronchus length and diameter with age?
Otoch, José Pinhata; Minamoto, Hélio; Perini, Marcos; Carneiro, Fred Olavo; de Almeida Artifon, Everson Luiz
2013-06-01
Right main bronchial anatomy knowledge is essential to guide endoscopic stent placement in modern era. The aim is to describe right bronchial anatomy, cross-area and its relation with the right pulmonary artery and patient's age. One hundred thirty four cadaveric specimens were studied after approval by the Research and Ethics Committee at the University of São Paulo Medical School and Medical Forensic Institute of São Paulo. All necropsies were performed in natura after 24 hours of death and patients with previous pulmonary disease were excluded. Landmarks to start measurement were the first tracheal ring, vertex of carina, first right bronchial ring, and right pulmonary artery area over the right main bronchus. After mobilization, the specimens were measured using a caliper and measurement of distances was recorded in centimeters at landmarks points. All the measures (distances, cross sectional area and planes) were performed by three independent observers and recorded as mean, standard error and ranges. Student t test was used to compare means and linear regression was applied to correlate the measurements. From 134 specimens studied, 34 were excluded (10 with previous history of pulmonary diseases, surgery or deformities and 24 of female gender). Linear regression showed proportionality between tracheal length and right bronchus length; with the area at first tracheal ring and carina and also between the cross sectional area at these points. Linear regression analysis between tracheal length and age (R=0.593 P<0.005), right bronchus length and age (R=0.523, P<0.005), area of contact between right bronchus and right pulmonary artery and age (R=0.35, P<0.005). We can conclude that large airways grow progressively with increasing age in male gender. There was a direct correlation between age and tracheal length; as has age and right bronchus length. There was a direct correlation between age and the area of the right bronchus covered by the right pulmonary artery.
Slip accumulation and lateral propagation of active normal faults in Afar
NASA Astrophysics Data System (ADS)
Manighetti, I.; King, G. C. P.; Gaudemer, Y.; Scholz, C. H.; Doubre, C.
2001-01-01
We investigate fault growth in Afar, where normal fault systems are known to be currently growing fast and most are propagating to the northwest. Using digital elevation models, we have examined the cumulative slip distribution along 255 faults with lengths ranging from 0.3 to 60 km. Faults exhibiting the elliptical or "bell-shaped" slip profiles predicted by simple linear elastic fracture mechanics or elastic-plastic theories are rare. Most slip profiles are roughly linear for more than half of their length, with overall slopes always <0.035. For the dominant population of NW striking faults and fault systems longer than 2 km, the slip profiles are asymmetric, with slip being maximum near the eastern ends of the profiles where it drops abruptly to zero, whereas slip decreases roughly linearly and tapers in the direction of overall Aden rift propagation. At a more detailed level, most faults appear to be composed of distinct, shorter subfaults or segments, whose slip profiles, while different from one to the next, combine to produce the roughly linear overall slip decrease along the entire fault. On a larger scale, faults cluster into kinematically coupled systems, along which the slip on any scale individual fault or fault system complements that of its neighbors, so that the total slip of the whole system is roughly linearly related to its length, with an average slope again <0.035. We discuss the origin of these quasilinear, asymmetric profiles in terms of "initiation points" where slip starts, and "barriers" where fault propagation is arrested. In the absence of a barrier, slip apparently extends with a roughly linear profile, tapered in the direction of fault propagation.
The Dynamics of Entangled DNA Networks using Single-Molecule Methods
NASA Astrophysics Data System (ADS)
Chapman, Cole David
Single molecule experiments were performed on DNA, a model polymer, and entangled DNA networks to explore diffusion within complex polymeric fluids and their linear and non-linear viscoelasticity. DNA molecules of varying length and topology were prepared using biological methods. An ensemble of individual molecules were then fluorescently labeled and tracked in blends of entangled linear and circular DNA to examine the dependence of diffusion on polymer length, topology, and blend ratio. Diffusion was revealed to possess a non-monotonic dependence on the blend ratio, which we believe to be due to a second-order effect where the threading of circular polymers by their linear counterparts greatly slows the mobility of the system. Similar methods were used to examine the diffusive and conformational behavior of DNA within highly crowded environments, comparable to that experienced within the cell. A previously unseen gamma distributed elongation of the DNA in the presence of crowders, proposed to be due to entropic effects and crowder mobility, was observed. Additionally, linear viscoelastic properties of entangled DNA networks were explored using active microrheology. Plateau moduli values verified for the first time the predicted independence from polymer length. However, a clear bead-size dependence was observed for bead radii less than ~3x the tube radius, a newly discovered limit, above which microrheology results are within the continuum limit and may access the bulk properties of the fluid. Furthermore, the viscoelastic properties of entangled DNA in the non-linear regime, where the driven beads actively deform the network, were also examined. By rapidly driving a bead through the network utilizing optical tweezers, then removing the trap and tracking the bead's subsequent motion we are able to model the system as an over-damped harmonic oscillator and find the elasticity to be dominated by stress-dependent entanglements.
Large structures and tethers working group
NASA Technical Reports Server (NTRS)
Murphy, G.; Garrett, H.; Samir, U.; Barnett, A.; Raitt, J.; Sullivan, J.; Katz, I.
1986-01-01
The Large Structures and Tethers Working Group sought to clarify the meaning of large structures and tethers as they related to space systems. Large was assumed to mean that the characteristic length of the structure was greater than one of such relevant plasma characteristics as ion gyroradius or debey length. Typically, anything greater than or equal to the Shuttle dimensions was considered large. It was agreed that most large space systems that the tether could be better categorized as extended length, area, or volume structures. The key environmental interactions were then identified in terms of these three categories. In the following Working Group summary, these categories and the related interactions are defined in detail. The emphasis is on how increases in each of the three spatial dimensions uniquely determine the interactions with the near-Earth space environment. Interactions with the environments around the other planets and the solar wind were assumed to be similar or capable of being extrapolated from the near-Earth results. It should be remembered in the following that the effects on large systems do not just affect specific technologies but will quite likely impact whole missions. Finally, the possible effects of large systems on the plasma environment, although only briefly discussed, were felt to be of potentially great concern.
1 Kw Arc-Jet Engine: Experiments With Argon
2004-06-23
3 s- 6 ) R + R ( non-linear) FLAME STABILITY CHAMBER PRESSURE 1.0 - 1.625 atm VACUUM PRESSURE 30 – 30 mmHg FLAME LENGTH 28 – 33 mm CHAMBER...PRESSURE 2.25 – 2.875 atm VACUUM PRESSURE 30 – 40 mmHg FLAME LENGTH 36 – 42 mm CHAMBER PRESSURE 3.0 – 3.0 atm VACUUM PRESSURE 60 – 36 mmHg FLAME LENGTH 18
Long Coherence Length 193 nm Laser for High-Resolution Nano-Fabrication
2008-06-27
in the non-linear optical up-converter, as well as specifying their interaction lengths, phase -matching angles, coatings, temperatures of operation...when optical path differences between interfering beams become comparable to the temporal coherence length of the source, the fringe contrast diminishes...switched, intracavity frequency doubled Nd:YAG laser drives an optical parametric oscillator (OPO) running at 710 nm. A portion of the 532 nm light
Improved Method for Linear B-Cell Epitope Prediction Using Antigen’s Primary Sequence
Raghava, Gajendra P. S.
2013-01-01
One of the major challenges in designing a peptide-based vaccine is the identification of antigenic regions in an antigen that can stimulate B-cell’s response, also called B-cell epitopes. In the past, several methods have been developed for the prediction of conformational and linear (or continuous) B-cell epitopes. However, the existing methods for predicting linear B-cell epitopes are far from perfection. In this study, an attempt has been made to develop an improved method for predicting linear B-cell epitopes. We have retrieved experimentally validated B-cell epitopes as well as non B-cell epitopes from Immune Epitope Database and derived two types of datasets called Lbtope_Variable and Lbtope_Fixed length datasets. The Lbtope_Variable dataset contains 14876 B-cell epitope and 23321 non-epitopes of variable length where as Lbtope_Fixed length dataset contains 12063 B-cell epitopes and 20589 non-epitopes of fixed length. We also evaluated the performance of models on above datasets after removing highly identical peptides from the datasets. In addition, we have derived third dataset Lbtope_Confirm having 1042 epitopes and 1795 non-epitopes where each epitope or non-epitope has been experimentally validated in at least two studies. A number of models have been developed to discriminate epitopes and non-epitopes using different machine-learning techniques like Support Vector Machine, and K-Nearest Neighbor. We achieved accuracy from ∼54% to 86% using diverse s features like binary profile, dipeptide composition, AAP (amino acid pair) profile. In this study, for the first time experimentally validated non B-cell epitopes have been used for developing method for predicting linear B-cell epitopes. In previous studies, random peptides have been used as non B-cell epitopes. In order to provide service to scientific community, a web server LBtope has been developed for predicting and designing B-cell epitopes (http://crdd.osdd.net/raghava/lbtope/). PMID:23667458
Fulfer, K D; Kuroda, D G
2017-09-20
The structure and dynamics of electrolytes composed of lithium hexafluorophosphate (LiPF 6 ) in dimethyl carbonate, ethyl methyl carbonate, and diethyl carbonate were investigated using a combination of linear and two-dimensional infrared spectroscopies. The solutions studied here have a LiPF 6 concentration of X(LiPF 6 ) = 0.09, which is typically found in commercial lithium ion batteries. This study focuses on comparing the differences in the solvation shell structure and dynamics produced by linear organic carbonates of different alkyl chain lengths. The IR experiments show that either linear carbonate forms a tetrahedral solvation shell (coordination number of 4) around the lithium ion irrespective of whether the solvation shell has anions in close proximity to the carbonates. Moreover, analysis of the absorption cross sections via FTIR and DFT computations reveals a distortion in the angle formed by Li + -O[double bond, length as m-dash]C which decreases from the expected 180° when the alkyl chains of the carbonate are lengthened. In addition, our findings also reveal that, likely due to its asymmetric structure, ethyl methyl carbonate has a significantly more distorted tetrahedral lithium ion solvation shell than either of the other two investigated carbonates. IR photon echo studies further demonstrate that the motions of the solvation shell have a time scale of a few picoseconds for all three linear carbonates. Interestingly, a slowdown of the in place-motions of the first solvation shell is observed when the carbonate has a longer alkyl chain length irrespective of the symmetry. In addition, vibrational energy transfer with a time scale of tens of picoseconds is observed between strongly coupled modes arising from the solvation shell structure of the Li + which corroborates the modeling of these solvation shells in terms of highly coupled vibrational states. Results of this study provide new insights into the molecular structure and dynamics of the lithium ion electrolyte components as a function of solvent structure.
Ma, Qiuyun; Jiao, Yan; Ren, Yiping
2017-01-01
In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simply linear models and nine linear mixed effect models that applied the effects from regions and/or years to coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed effect model with random effects from both regions and years fit the data best, with lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917 with 95% confidence interval 2.731~2.945. Estimates for a and b with the random effects in intercept and coefficient from Region and Year, ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on parameters a and b, while the effects from years were shown to be much larger than those from regions. Except for Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker became thinner in recent years. Furthermore relative condition factors varied among months, years, regions and length. The values of a and relative condition factors decreased, when the environmental pollution became worse, therefore, length-weight relationships could be an indicator for the environment quality. Results from this study provided basic description of current condition of Yellow Croaker along the north coast of China.
Gunathilaka, P A D H N; Uduwawala, U M H U; Udayanga, N W B A L; Ranathunge, R M T B; Amarasinghe, L D; Abeyewickreme, W
2017-11-23
Larval diet quality and rearing conditions have a direct and irreversible effect on adult traits. Therefore, the current study was carried out to optimize the larval diet for mass rearing of Aedes aegypti, for Sterile Insect Technique (SIT)-based applications in Sri Lanka. Five batches of 750 first instar larvae (L 1) of Ae. aegypti were exposed to five different concentrations (2-10%) of International Atomic Energy Agency (IAEA) recommended the larval diet. Morphological development parameters of larva, pupa, and adult were detected at 24 h intervals along with selected growth parameters. Each experiment was replicated five times. General Linear Modeling along with Pearson's correlation analysis were used for statistical treatments. Significant differences (P < 0.05) among the larvae treated with different concentrations were found using General Linear Modeling in all the stages namely: total body length and the thoracic length of larvae; cephalothoracic length and width of pupae; thoracic length, thoracic width, abdominal length and the wing length of adults; along with pupation rate and success, sex ratio, adult success, fecundity and hatching rate of Ae. aegypti. The best quality adults can be produced at larval diet concentration of 10%. However, the 8% larval diet concentration was most suitable for adult male survival.
Head-on collisions of unequal mass black holes in D=5 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witek, Helvi; Cardoso, Vitor; Department of Physics and Astronomy, University of Mississippi, University, Mississippi 38677
We study head-on collisions of unequal mass black hole binaries in D=5 spacetime dimensions, with mass ratios between 1:1 and 1:4. Information about gravitational radiation is extracted by using the Kodama-Ishibashi gauge-invariant formalism and details of the apparent horizon of the final black hole. We present waveforms, total integrated energy and momentum for this process. Our results show surprisingly good agreement, within 5% or less, with those extrapolated from linearized, point-particle calculations. Our results also show that consistency with the area theorem bound requires that the same process in a large number of spacetime dimensions must display new features.
Lattice QCD results for the HVP contribution to the anomalous magnetic moments of leptons
NASA Astrophysics Data System (ADS)
2018-03-01
We present lattice QCD results by the Budapest-Marseille-Wuppertal (BMW) Collaboration for the leading-order contribution of the hadron vacuum polarization (LOHVP) to the anomalous magnetic moments of all charged leptons. Calculations are performed with u, d, s and c quarks at their physical masses, in volumes of linear extent larger than 6 fm, and at six values of the lattice spacing, allowing for controlled continuum extrapolations. All connected and disconnected contributions are calculated for not only the muon but also the electron and tau anomalous magnetic moments. Systematic uncertainties are thoroughly discussed and comparisons with other calculations and phenomenological estimates are made.
NASA Technical Reports Server (NTRS)
Stassinopoulos, E. G.
1972-01-01
Vehicle encountered electron and proton fluxes were calculated for a set of nominal UK-5 trajectories with new computational methods and new electron environment models. Temporal variations in the electron data were considered and partially accounted for. Field strength calculations were performed with an extrapolated model on the basis of linear secular variation predictions. Tabular maps for selected electron and proton energies were constructed as functions of latitude and longitude for specified altitudes. Orbital flux integration results are presented in graphical and tabular form; they are analyzed, explained, and discussed.
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line-search, for solving the {{\\ell }}0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems including CT reconstruction, image deblur, demonstrate the improvement of {{\\ell }}0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
Thyroid Patient Salivary Radioiodine Transit and Dysfunction Assessment Using Chewing Gums.
Okkalides, Demetrios
2016-11-01
Radiation-induced salivary gland dysfunction is the most frequent side-effect of I-131 thyroid therapy. Here, a novel saliva sampling method with ordinary chewing gums administered to the patients at appropriate time intervals post-treatment (TIPT) was used to relate this effect to chewing gum saliva activity (CGSA) content. Saliva samples were acquired after the oral administration of prescribed I-131 activity (radioactivity administered [RA]) to 19 differentiated thyroid cancer (DTC) and 16 hyperthyroidism patients of the radioisotope unit (RIU) during 2014 and 2015. The error of this saliva collecting process was found to be 1.2%-2.05%, and so, the method was considered satisfactory. For each patient, the CGSA was plotted against the TIPT producing a curve, R(t). On this, two functions were fitted: a linear on the first few rising data points and a gamma variate over the peak of the R(t). From these, several parameters related to the radioactivity oral transit were calculated and the total radioactivity administered (TRA) during all past treatments of each patient was obtained from RIU records. The patients were asked to report any swelling, dry mouth, taste-smell change, or pain and were graded as a morbidity score (MS) describing the quality of life of each. The peak radioactivity in the saliva samples, R max , was found to be proportional to RA and was plotted against the CGSA extrapolated at 24 and 36 hours. The linear fits produced were used to estimate the salivary glands' activity average effective half-life (16.3 hours). The MS of DTC patients was found to depend linearly both on R max and TRA (MS = 0.0032 × R max - 0.7107 and MS = 0.1862 × TRA +0.66, respectively). Both lines were used to extrapolate symptom thresholds. The measurement of R max in DTC patients proved very useful for individualized radiation protection, and the dependence of MS on TRA should be used when additional treatments are considered for repeat DTC patients.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Subeihi, Ala' A.A., E-mail: ala.alsubeihi@wur.nl; BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority; Spenkelink, Bert
2012-05-01
This study defines a physiologically based kinetic (PBK) model for methyleugenol (ME) in human based on in vitro and in silico derived parameters. With the model obtained, bioactivation and detoxification of methyleugenol (ME) at different doses levels could be investigated. The outcomes of the current model were compared with those of a previously developed PBK model for methyleugenol (ME) in male rat. The results obtained reveal that formation of 1′-hydroxymethyleugenol glucuronide (1′HMEG), a major metabolic pathway in male rat liver, appears to represent a minor metabolic pathway in human liver whereas in human liver a significantly higher formation of 1′-oxomethyleugenolmore » (1′OME) compared with male rat liver is observed. Furthermore, formation of 1′-sulfooxymethyleugenol (1′HMES), which readily undergoes desulfonation to a reactive carbonium ion (CA) that can form DNA or protein adducts (DA), is predicted to be the same in the liver of both human and male rat at oral doses of 0.0034 and 300 mg/kg bw. Altogether despite a significant difference in especially the metabolic pathways of the proximate carcinogenic metabolite 1′-hydroxymethyleugenol (1′HME) between human and male rat, the influence of species differences on the ultimate overall bioactivation of methyleugenol (ME) to 1′-sulfooxymethyleugenol (1′HMES) appears to be negligible. Moreover, the PBK model predicted the formation of 1′-sulfooxymethyleugenol (1′HMES) in the liver of human and rat to be linear from doses as high as the benchmark dose (BMD{sub 10}) down to as low as the virtual safe dose (VSD). This study shows that kinetic data do not provide a reason to argue against linear extrapolation from the rat tumor data to the human situation. -- Highlights: ► A PBK model is made for bioactivation and detoxification of methyleugenol in human. ► Comparison to the PBK model in male rat revealed species differences. ► PBK results support linear extrapolation from high to low dose and from rat to human.« less
50 CFR 29.21-2 - Application procedures.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) State of local governments or agencies or instrumentalities thereof except as to rights-of-way... schedule: (A) For linear facilities (e.g., powerlines, pipelines, roads, etc.). Length Payment Less than 5... application includes both linear and nonlinear facilities, payment will be the aggregate of amounts under...
50 CFR 29.21-2 - Application procedures.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) State of local governments or agencies or instrumentalities thereof except as to rights-of-way... schedule: (A) For linear facilities (e.g., powerlines, pipelines, roads, etc.). Length Payment Less than 5... application includes both linear and nonlinear facilities, payment will be the aggregate of amounts under...
Linear motion feed through with thin wall rubber sealing element
NASA Astrophysics Data System (ADS)
Mikhailov, V. P.; Deulin, E. A.
2017-07-01
The patented linear motion feedthrough is based on elastic thin rubber walls usage being reinforced with analeptic string fixed in the middle part of the walls. The pneumatic or hydro actuators create linear movement of stock. The length of this movement is two times more the rubber wall length. This flexible wall is a sealing element of feedthrough. The main advantage of device is negligible resistance force that is less then mentioned one in sealing bellows that leads to positioning error decreasing. Nevertheless, the thin wall rubber sealing element (TRE) of the feedthrough is the main unreliable element that was the reason of this element longevity research. The theory and experimental results help to create equation for TRE longevity calculation under vacuum or extra high pressure difference action. The equation was used for TRE longevity determination for hydraulic or vacuum equipment realization also as it helps for gas flow being leaking through the cracks in thin walls of rubber sealing element of linear motion feedthrough calculation.
[A study of magnetic shielding design for a magnetic resonance imaging linac system].
Zhang, Zheshun; Chen, Wenjing; Qiu, Yang; Zhu, Jianming
2017-12-01
One of the main technical challenges when integrating magnetic resonance imaging (MRI) systems with medical linear accelerator is the strong interference of fringe magnetic fields from the MRI system with the electron beams of linear accelerator, making the linear accelerator not to work properly. In order to minimize the interference of magnetic fields, a magnetic shielding cylinder with an open structure made of high permeability materials is designed. ANSYS Maxwell was used to simulate Helmholtz coil which generate uniform magnetic field instead of the fringe magnetic fields which affect accelerator gun. The parameters of shielding tube, such as permeability, radius, length, side thickness, bottom thickness and fringe magnetic fields strength are simulated, and the data is processed by MATLAB to compare the shielding performance. This article gives out a list of magnetic shielding effectiveness with different side thickness and bottom thickness under the optimal radius and length, which showes that this design can meet the shielding requirement for the MRI-linear accelerator system.
X -band rf driven free electron laser driver with optics linearization
Sun, Yipeng; Emma, Paul; Raubenheimer, Tor; ...
2014-11-13
In this paper, a compact hard X-ray free electron lasers (FEL) design is proposed with all X-band rf acceleration and two stage bunch compression. It eliminates the need of a harmonic rf linearization section by employing optics linearization in its first stage bunch compression. Quadrupoles and sextupoles are employed in a bunch compressor one (BC1) design, in such a way that second order longitudinal dispersion of BC1 cancels the second order energy correlation in the electron beam. Start-to-end 6-D simulations are performed with all the collective effects included. Emittance growth in the horizontal plane due to coherent synchrotron radiation ismore » investigated and minimized, to be on a similar level with the successfully operating Linac coherent light source (LCLS). At a FEL radiation wavelength of 0.15 nm, a saturation length of 40 meters can be achieved by employing an undulator with a period of 1.5 cm. Without tapering, a FEL radiation power above 10 GW is achieved with a photon pulse length of 50 fs, which is LCLS-like performance. The overall length of the accelerator plus undulator is around 250 meters which is much shorter than the LCLS length of 1230 meters. That makes it possible to build hard X-ray FEL in a laboratory with limited size.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing
2016-06-28
Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) that is based on the Kohn–Sham density functional theory. Going against the intuition that the higher order of extrapolation possesses a better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy firstly increases and then decreases with respect to the order, and an optimal extrapolation order in terms of minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps ormore » more strict SCF convergence criteria. By example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices on the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, thus the implementation of either of them does not lead to essential difference in the extrapolation accuracy.« less
Identification of Piecewise Linear Uniform Motion Blur
NASA Astrophysics Data System (ADS)
Patanukhom, Karn; Nishihara, Akinori
A motion blur identification scheme is proposed for nonlinear uniform motion blurs approximated by piecewise linear models which consist of more than one linear motion component. The proposed scheme includes three modules that are a motion direction estimator, a motion length estimator and a motion combination selector. In order to identify the motion directions, the proposed scheme is based on a trial restoration by using directional forward ramp motion blurs along different directions and an analysis of directional information via frequency domain by using a Radon transform. Autocorrelation functions of image derivatives along several directions are employed for estimation of the motion lengths. A proper motion combination is identified by analyzing local autocorrelation functions of non-flat component of trial restored results. Experimental examples of simulated and real world blurred images are given to demonstrate a promising performance of the proposed scheme.
Linear Distributed GaN MMIC Power Amplifier with Improved Power-added Efficiency
2017-03-01
Laboratories, 3011 Malibu Canyon Road, Malibu, CA 90265 Abstract: We report on a multi-octave (100 MHz ‒ 8 GHz), linear nonuniform distributed...amplifier (NDPA) in a MMIC architecture using scaled 120-nm short-gate- length GaN HEMTs. The linear NDPAs were built with six sections in a nonuniform ...MHz ‒ 8 GHz) GaN MMIC nonuniform distributed amplifier (NDPA) with built-in linearization and a gm3 cancellation method in class A and class C
Radar orthogonality and radar length in Finsler and metric spacetime geometry
NASA Astrophysics Data System (ADS)
Pfeifer, Christian
2014-09-01
The radar experiment connects the geometry of spacetime with an observers measurement of spatial length. We investigate the radar experiment on Finsler spacetimes which leads to a general definition of radar orthogonality and radar length. The directions radar orthogonal to an observer form the spatial equal time surface an observer experiences and the radar length is the physical length the observer associates to spatial objects. We demonstrate these concepts on a forth order polynomial Finsler spacetime geometry which may emerge from area metric or premetric linear electrodynamics or in quantum gravity phenomenology. In an explicit generalization of Minkowski spacetime geometry we derive the deviation from the Euclidean spatial length measure in an observers rest frame explicitly.
Chemical Reactions in Turbulent Mixing Flows.
1986-06-15
length from Reynolds and Schmidt numbers at high Reynolds number, 2. the linear dependence of flame length on the stoichiometric mixture ratio, and, 3...processes are unsteady and the observed large scale flame length fluctuations are the best evidence of the individual cascade. A more detailed examination...Damk~hler number. When the same ideas are used in a model of fuel jets burning in air, it explains (Broadwell 1982): 1. the independence of flame
Chemical Reactions in Turbulent Mixing Flows
1992-07-01
Chemically-Reacting, Gas-Phase Turbulent Jets (Gilbrech 1991), that explored Reynolds number effects on turbulent flame length and the influence of...and asymptotes to a constant value beyond the flame tip. The main result of the work is that the flame length , as estimated from the temperature...8217. Specifically, the normalized flame length Lf/d* displays a linear dependence on the stoichiometric mixture ratio 0, with a slope that decreases from Re "• 1.0
NASA Technical Reports Server (NTRS)
Hada, Megumi; George, Kerry A.; Cucinotta, F. A.
2011-01-01
The relationship between biological effects and low doses of absorbed radiation is still uncertain, especially for high LET radiation exposure. Estimates of risks from low-dose and low-dose-rates are often extrapolated using data from Japanese atomic bomb survivor with either linear or linear quadratic models of fit. In this study, chromosome aberrations were measured in human peripheral blood lymphocytes and normal skin fibroblasts cells after exposure to very low dose (.01 - 0.2 Gy) of 170 MeV/u Si-28-ions or 600 MeV/u Fe-56-ions. Chromosomes were analyzed using the whole chromosome fluorescence in situ hybridization (FISH) technique during the first cell division after irradiation, and chromosome aberrations were identified as either simple exchanges (translocations and dicentrics) or complex exchanges (involving >2 breaks in 2 or more chromosomes). The curves for doses above 0.1 Gy were more than one ion traverses a cell showed linear dose responses. However, for doses less than 0.1 Gy, Si-28-ions showed no dose response, suggesting a non-targeted effect when less than one ion traversal occurs. Additional findings for Fe-56 will be discussed.
The solution of the point kinetics equations via converged accelerated Taylor series (CATS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.; Picca, P.; Previti, A.
This paper deals with finding accurate solutions of the point kinetics equations including non-linear feedback, in a fast, efficient and straightforward way. A truncated Taylor series is coupled to continuous analytical continuation to provide the recurrence relations to solve the ordinary differential equations of point kinetics. Non-linear (Wynn-epsilon) and linear (Romberg) convergence accelerations are employed to provide highly accurate results for the evaluation of Taylor series expansions and extrapolated values of neutron and precursor densities at desired edits. The proposed Converged Accelerated Taylor Series, or CATS, algorithm automatically performs successive mesh refinements until the desired accuracy is obtained, making usemore » of the intermediate results for converged initial values at each interval. Numerical performance is evaluated using case studies available from the literature. Nearly perfect agreement is found with the literature results generally considered most accurate. Benchmark quality results are reported for several cases of interest including step, ramp, zigzag and sinusoidal prescribed insertions and insertions with adiabatic Doppler feedback. A larger than usual (9) number of digits is included to encourage honest benchmarking. The benchmark is then applied to the enhanced piecewise constant algorithm (EPCA) currently being developed by the second author. (authors)« less
Sonic Boom Prediction and Minimization of the Douglas Reference OPT5 Configuration
NASA Technical Reports Server (NTRS)
Siclari, Michael J.
1999-01-01
Conventional CFD methods and grids do not yield adequate resolution of the complex shock flow pattern generated by a real aircraft geometry. As a result, a unique grid topology and supersonic flow solver was developed at Northrop Grumman based on the characteristic behavior of supersonic wave patterns emanating from the aircraft. Using this approach, it was possible to compute flow fields with adequate resolution several body lengths below the aircraft. In this region, three-dimensional effects are diminished and conventional two-dimensional modified linear theory (MLT) can be applied to estimate ground pressure signatures or sonic booms. To accommodate real aircraft geometries and alleviate the burdensome grid generation task, an implicit marching multi-block, multi-grid finite-volume Euler code was developed as the basis for the sonic boom prediction methodology. The Thomas two-dimensional extrapolation method is built into the Euler code so that ground signatures can be obtained quickly and efficiently with minimum computational effort suitable to the aircraft design environment. The loudness levels of these signatures can then be determined using a NASA generated noise code. Since the Euler code is a three-dimensional flow field solver, the complete circumferential region below the aircraft is computed. The extrapolation of all this field data from a cylinder of constant radius leads to the definition of the entire boom corridor occurring directly below and off to the side of the aircraft's flight path yielding an estimate for the entire noise "annoyance" corridor in miles as well as its magnitude. An automated multidisciplinary sonic boom design optimization software system was developed during the latter part of HSR Phase 1. Using this system, it was found that sonic boom signatures could be reduced through optimization of a variety of geometric aircraft parameters. This system uses a gradient based nonlinear optimizer as the driver in conjunction with a computationally efficient Euler CFD solver (NIIM3DSB) for computing the three-dimensional near-field characteristics of the aircraft. The intent of the design system is to identify and optimize geometric design variables that have a beneficial impact on the ground sonic boom. The system uses a simple wave drag data format to specify the aircraft geometry. The geometry is internally enhanced and analytic methods are used to generate marching grids suitable for the multi-block Euler solver. The Thomas extrapolation method is integrated into this system, and hence, the aircraft's centerline ground sonic boom signature is also automatically computed for a specified cruise altitude and yields the parameters necessary to evaluate the design function. The entire design system has been automated since the gradient based optimization software requires many flow analyses in order to obtain the required sensitivity derivatives for each design variable in order to converge on an optimal solution. Hence, once the problem is defined which includes defining the objective function and geometric and aerodynamic constraints, the system will automatically regenerate the perturbed geometry, the necessary grids, the Euler solution, and finally the ground sonic boom signature at the request of the optimizer.
Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...
Stasinopoulos, I; Weichselbaumer, S; Bauer, A; Waizner, J; Berger, H; Garst, M; Pfleiderer, C; Grundler, D
2017-08-01
Linear dichroism - the polarization dependent absorption of electromagnetic waves- is routinely exploited in applications as diverse as structure determination of DNA or polarization filters in optical technologies. Here filamentary absorbers with a large length-to-width ratio are a prerequisite. For magnetization dynamics in the few GHz frequency regime strictly linear dichroism was not observed for more than eight decades. Here, we show that the bulk chiral magnet Cu 2 OSeO 3 exhibits linearly polarized magnetization dynamics at an unexpectedly small frequency of about 2 GHz at zero magnetic field. Unlike optical filters that are assembled from filamentary absorbers, the magnet is shown to provide linear polarization as a bulk material for an extremely wide range of length-to-width ratios. In addition, the polarization plane of a given mode can be switched by 90° via a small variation in width. Our findings shed a new light on magnetization dynamics in that ferrimagnetic ordering combined with antisymmetric exchange interaction offers strictly linear polarization and cross-polarized modes for a broad spectrum of sample shapes at zero field. The discovery allows for novel design rules and optimization of microwave-to-magnon transduction in emerging microwave technologies.
Rayleigh scattering of linear alkylbenzene in large liquid scintillator detectors.
Zhou, Xiang; Liu, Qian; Wurm, Michael; Zhang, Qingmin; Ding, Yayun; Zhang, Zhenyu; Zheng, Yangheng; Zhou, Li; Cao, Jun; Wang, Yifang
2015-07-01
Rayleigh scattering poses an intrinsic limit for the transparency of organic liquid scintillators. This work focuses on the Rayleigh scattering length of linear alkylbenzene (LAB), which will be used as the solvent of the liquid scintillator in the central detector of the Jiangmen Underground Neutrino Observatory. We investigate the anisotropy of the Rayleigh scattering in LAB, showing that the resulting Rayleigh scattering length will be significantly shorter than reported before. Given the same overall light attenuation, this will result in a more efficient transmission of photons through the scintillator, increasing the amount of light collected by the photosensors and thereby the energy resolution of the detector.
Determination of surface layer parameters at the edge of a suburban area
NASA Astrophysics Data System (ADS)
Likso, T.; Pandžić, K.
2012-05-01
Vertical wind and air temperature profile related parameters in the surface layer at the edge of suburban area of Zagreb (Croatia) have been considered. For that purpose, adopted Monin-Obukhov similarity theory and a set of observations of wind and air temperature at 2 and 10 m above ground, recorded in 2005, have been used. The root mean square differences (errors) principle has been used as a tool to estimate the effective roughness length as well as standard deviations of wind speed and wind gusts. The results of estimation are effective roughness lengths dependent on eight wind direction sectors unknown before. Gratefully to that achievement, representativeness of wind data at standard 10-m height can be clarified more deeply for an area of at least about 1 km in upwind direction from the observation site. Extrapolation of wind data for lower or higher levels from standard 10-m height are thus properly representative for a wider inhomogeneous suburban area and can be used as such in numerical models, flux and wind energy estimation, civil engineering, air pollution and climatological applications.
Maximal liquid bridges between horizontal cylinders
NASA Astrophysics Data System (ADS)
Cooray, Himantha; Huppert, Herbert E.; Neufeld, Jerome A.
2016-08-01
We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.
Conical Pendulum--Linearization Analyses
ERIC Educational Resources Information Center
Dean, Kevin; Mathew, Jyothi
2016-01-01
A theoretical analysis is presented, showing the derivations of seven different linearization equations for the conical pendulum period "T", as a function of radial and angular parameters. Experimental data obtained over a large range of fixed conical pendulum lengths (0.435 m-2.130 m) are plotted with the theoretical lines and…
Fit Point-Wise AB Initio Calculation Potential Energies to a Multi-Dimension Long-Range Model
NASA Astrophysics Data System (ADS)
Zhai, Yu; Li, Hui; Le Roy, Robert J.
2016-06-01
A potential energy surface (PES) is a fundamental tool and source of understanding for theoretical spectroscopy and for dynamical simulations. Making correct assignments for high-resolution rovibrational spectra of floppy polyatomic and van der Waals molecules often relies heavily on predictions generated from a high quality ab initio potential energy surface. Moreover, having an effective analytic model to represent such surfaces can be as important as the ab initio results themselves. For the one-dimensional potentials of diatomic molecules, the most successful such model to date is arguably the ``Morse/Long-Range'' (MLR) function developed by R. J. Le Roy and coworkers. It is very flexible, is everywhere differentiable to all orders. It incorporates correct predicted long-range behaviour, extrapolates sensibly at both large and small distances, and two of its defining parameters are always the physically meaningful well depth {D}_e and equilibrium distance r_e. Extensions of this model, called the Multi-Dimension Morse/Long-Range (MD-MLR) function, linear molecule-linear molecule systems and atom-non-linear molecule system. have been applied successfully to atom-plus-linear molecule, linear molecule-linear molecule and atom-non-linear molecule systems. However, there are several technical challenges faced in modelling the interactions of general molecule-molecule systems, such as the absence of radial minima for some relative alignments, difficulties in fitting short-range potential energies, and challenges in determining relative-orientation dependent long-range coefficients. This talk will illustrate some of these challenges and describe our ongoing work in addressing them. Mol. Phys. 105, 663 (2007); J. Chem. Phys. 131, 204309 (2009); Mol. Phys. 109, 435 (2011). Phys. Chem. Chem. Phys. 10, 4128 (2008); J. Chem. Phys. 130, 144305 (2009) J. Chem. Phys. 132, 214309 (2010) J. Chem. Phys. 140, 214309 (2010)
Superresolution SAR Imaging Algorithm Based on Mvm and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach, which uses minimum weighted norm constraint and minimum variance spectrum estimation, for improving synthetic aperture radar (SAR) resolution. Minimum variance method is a robust high resolution method to estimate spectrum. Based on the theory of SAR imaging, the signal model of SAR imagery is analyzed to be feasible for using data extrapolation methods to improve the resolution of SAR image. The method is used to extrapolate the efficient bandwidth in phase history field and better results are obtained compared with adaptive weighted norm extrapolation (AWNE) method and traditional imaging method using simulated data and actual measured data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edstrom Jr., D.; et al.
The low-energy section of the photoinjector-based electron linear accelerator at the Fermilab Accelerator Science & Technology (FAST) facility was recently commissioned to an energy of 50 MeV. This linear accelerator relies primarily upon pulsed SRF acceleration and an optional bunch compressor to produce a stable beam within a large operational regime in terms of bunch charge, total average charge, bunch length, and beam energy. Various instrumentation was used to characterize fundamental properties of the electron beam including the intensity, stability, emittance, and bunch length. While much of this instrumentation was commissioned in a 20 MeV running period prior, some (includingmore » a new Martin- Puplett interferometer) was in development or pending installation at that time. All instrumentation has since been recommissioned over the wide operational range of beam energies up to 50 MeV, intensities up to 4 nC/pulse, and bunch structures from ~1 ps to more than 50 ps in length.« less
Alkorta, Ibon; Popelier, Paul L A
2015-02-02
Remarkably simple yet effective linear free energy relationships were discovered between a single ab initio computed bond length in the gas phase and experimental pKa values in aqueous solution. The formation of these relationships is driven by chemical features such as functional groups, meta/para substitution and tautomerism. The high structural content of the ab initio bond length makes a given data set essentially divide itself into high correlation subsets (HCSs). Surprisingly, all molecules in a given high correlation subset share the same conformation in the gas phase. Here we show that accurate pKa values can be predicted from such HCSs. This is achieved within an accuracy of 0.2 pKa units for 5 drug molecules. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Relating Cohesive Zone Model to Linear Elastic Fracture Mechanics
NASA Technical Reports Server (NTRS)
Wang, John T.
2010-01-01
The conditions required for a cohesive zone model (CZM) to predict a failure load of a cracked structure similar to that obtained by a linear elastic fracture mechanics (LEFM) analysis are investigated in this paper. This study clarifies why many different phenomenological cohesive laws can produce similar fracture predictions. Analytical results for five cohesive zone models are obtained, using five different cohesive laws that have the same cohesive work rate (CWR-area under the traction-separation curve) but different maximum tractions. The effect of the maximum traction on the predicted cohesive zone length and the remote applied load at fracture is presented. Similar to the small scale yielding condition for an LEFM analysis to be valid. the cohesive zone length also needs to be much smaller than the crack length. This is a necessary condition for a CZM to obtain a fracture prediction equivalent to an LEFM result.
The P-factor and atomic mass systematics: Application to medium mass nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brenner, D.S.; Haustein, P.E.; Casten, R.F.
1988-01-01
The P formalism was applied to atomic mass systematics for medium and heavy nuclei. The P-factor linearizes the structure-dependent part of the nuclear mass in those regions which are free from subshell effects indicating that the attractive quadrupole p-n force plays an important role in determining the binding of valence nucleons. Where marked non-linearities occur, the P-factor provides a means for recognizing subshell closures and/or other structural features not embodied in the simple assumptions of abrupt shell or subshell changes. These are thought to be regions where the monopole part of the p-n interaction is highly orbit dependent and altersmore » the underlying single-particle structure as a function of A, N or Z. Finally, in those regions where the systematics are smooth and subshells are absent, the P-factor provides a means for predicting masses of some nuclei far-from-stability by interpolation rather than by extrapolation. 5 figs.« less
Unveiling saturation effects from nuclear structure function measurements at the EIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia
Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows to infer the parton distributions needed in order to describe the pseudodata, which we find quitemore » different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.« less
A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND
Bessonova, O.V.; Khokhlova, V.A.; Canney, M.S.; Bailey, M.R.; Crum, L.A.
2010-01-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue. PMID:20582159
A derating method for therapeutic applications of high intensity focused ultrasound
NASA Astrophysics Data System (ADS)
Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.
2010-05-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal wave-forms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
A DERATING METHOD FOR THERAPEUTIC APPLICATIONS OF HIGH INTENSITY FOCUSED ULTRASOUND.
Bessonova, O V; Khokhlova, V A; Canney, M S; Bailey, M R; Crum, L A
2010-01-01
Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. In this work, a new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates, that for the high gain sources which are typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulation in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, L.J.
1986-02-01
The 5s/sup 2/ /sup 1/S/sub 0/-5s5p/sup 1,3/P/sub J/ energy intervals in the Cd isoelectronic sequence have been investigated through a semiempirical systematization of recent measurements and through the performance of ab initio multiconfiguration Dirac-Fock calculations. Screening-parameter reductions of the spin-orbit and exchange energies both for the observed data and for the theoretically computed values establish the existence of empirical linearities similar to those exploited earlier for the Be, Mg, and Zn sequences. This permits extrapolative isoelectronic predictions of the relative energies of the 5s5p levels, which can be connected to 5s/sup 2/ using intersinglet intervals obtained from empirically corrected abmore » initio calculations. These linearities have also been examined homologously for the Zn, Cd, and Hg sequences, and common relationships have been found that accurately describe all three of these sequences.« less
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and convergence rate with discretization refinement, are quantized in several error norms, by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using selective linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at nonmodest Reynolds number. The nondiagonal initial-value matrix structure introduced by the finite element theory is determined intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated to yield a consequential reduction in both computer storage and execution CPU requirements while retaining solution accuracy.
Unveiling saturation effects from nuclear structure function measurements at the EIC
Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia
2017-07-21
Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows to infer the parton distributions needed in order to describe the pseudodata, which we find quitemore » different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.« less
Evaluation of algorithms for geological thermal-inertia mapping
NASA Technical Reports Server (NTRS)
Miller, S. H.; Watson, K.
1977-01-01
The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).
An automatic multigrid method for the solution of sparse linear systems
NASA Technical Reports Server (NTRS)
Shapira, Yair; Israeli, Moshe; Sidi, Avram
1993-01-01
An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
Nonlinear effects of stretch on the flame front propagation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halter, F.; Tahtouh, T.; Mounaim-Rousselle, C.
2010-10-15
In all experimental configurations, the flames are affected by stretch (curvature and/or strain rate). To obtain the unstretched flame speed, independent of the experimental configuration, the measured flame speed needs to be corrected. Usually, a linear relationship linking the flame speed to stretch is used. However, this linear relation is the result of several assumptions, which may be incorrected. The present study aims at evaluating the error in the laminar burning speed evaluation induced by using the traditional linear methodology. Experiments were performed in a closed vessel at atmospheric pressure for two different mixtures: methane/air and iso-octane/air. The initial temperaturesmore » were respectively 300 K and 400 K for methane and iso-octane. Both methodologies (linear and nonlinear) are applied and results in terms of laminar speed and burned gas Markstein length are compared. Methane and iso-octane were chosen because they present opposite evolutions in their Markstein length when the equivalence ratio is increased. The error induced by the linear methodology is evaluated, taking the nonlinear methodology as the reference. It is observed that the use of the linear methodology starts to induce substantial errors after an equivalence ratio of 1.1 for methane/air mixtures and before an equivalence ratio of 1 for iso-octane/air mixtures. One solution to increase the accuracy of the linear methodology for these critical cases consists in reducing the number of points used in the linear methodology by increasing the initial flame radius used. (author)« less
Numerical analysis of finite Debye-length effects in induced-charge electro-osmosis.
Gregersen, Misha Marie; Andersen, Mathias Baekbo; Soni, Gaurav; Meinhart, Carl; Bruus, Henrik
2009-06-01
For a microchamber filled with a binary electrolyte and containing a flat unbiased center electrode at one wall, we employ three numerical models to study the strength of the resulting induced-charge electro-osmotic (ICEO) flow rolls: (i) a full nonlinear continuum model resolving the double layer, (ii) a linear slip-velocity model not resolving the double layer and without tangential charge transport inside this layer, and (iii) a nonlinear slip-velocity model extending the linear model by including the tangential charge transport inside the double layer. We show that, compared to the full model, the slip-velocity models significantly overestimate the ICEO flow. This provides a partial explanation of the quantitative discrepancy between observed and calculated ICEO velocities reported in the literature. The discrepancy increases significantly for increasing Debye length relative to the electrode size, i.e., for nanofluidic systems. However, even for electrode dimensions in the micrometer range, the discrepancies in velocity due to the finite Debye length can be more than 10% for an electrode of zero height and more than 100% for electrode heights comparable to the Debye length.
Kingma, J G; Martin, J; Rouleau, J R
1994-07-01
Instantaneous diastolic left coronary artery pressure-flow relations (PFR) shift during acute tamponade as pressure surrounding the heart increases. Coronary pressure at zero flow (Pf = 0) on the linear portion of the PFR is the weighted mean of the different myocardial waterfall pressures, the distribution of which varies across the left ventricular wall during diastole. However, instantaneous PFR measured in large epicardial coronary arteries cannot be used to estimate Pf = 0 in the different myocardial tissue layers. During coronary vasodilatation in a capacitance-free model, myocardial PFR differs from subendocardium to subepicardium. Therefore, we studied the effects of acute tamponade during maximal pharmacology induced coronary vasodilatation on myocardial PFR in in situ anesthetized dogs. Tamponade reduced cardiac output, aortic pressure, and coronary blood flow. Results demonstrate that different mechanisms influence distribution of myocardial blood flow during tamponade. Subepicardial vascular resistance is unchanged and the extrapolated Pf = 0 is increased, thereby shifting PFR to a higher intercept on the pressure axis. Subendocardial vascular resistance is increased while the extrapolated Pf = 0 remains unchanged. Results indicate that in the setting of acute tamponade with coronary vasodilatation different mechanisms regulate the distribution of myocardial blood flow: in the subepicardium only outflow pressure increases, whereas in the subendocardium only vascular resistance increases.
NASA Technical Reports Server (NTRS)
Furnstenau, Norbert; Ellis, Stephen R.
2015-01-01
In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high dynamic fidelity simulations of landing aircraft and decided whether aircraft would stop as if to be able to make a turnoff or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors by linear FRextrapolation of event probabilities conditional on predictions (stop, no-stop). Furthermore estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under ROC-curves) as dependent on FR. Decision errors are biased towards preference of overshoot and appear due to illusionary increase in speed at low frames rates. Both Bayes and A - extrapolations yield a framerate requirement of 35 < FRmin < 40 Hz. When comparing with published results [12] on shooter game scores the model based d'(FR)-extrapolation exhibits the best agreement and indicates even higher FRmin > 40 Hz for minimizing decision errors. Definitive recommendations require further experiments with FR > 30 Hz.
Bioaccumulation of heavy metals in fish and Ardeid at Pearl River Estuary, China.
Kwok, C K; Liang, Y; Wang, H; Dong, Y H; Leung, S Y; Wong, M H
2014-08-01
Sediment, fish (tilapia, Oreochromis mossambicus and snakehead, Channa asiatica), eggs and eggshells of Little Egrets (Egretta garzetta) and Chinese Pond Herons (Ardeola bacchus) were collected from Mai Po Ramsar site of Hong Kong, as well as from wetlands in the Gu Cheng County, Shang Hu County and Dafeng Milu National Nature Reserve of Jiangsu Province, China between 2004 and 2007 (n=3-9). Concentrations of six heavy metals were analyzed, based on inductively coupled plasma optical emission spectrometry (ICP-OES). Significant bioaccumulations of Cd (BAF: 165-1271 percent) were observed in the muscle and viscera of large tilapia and snakehead, suggesting potential health risks to the two bird species, as the fishes are the main preys of waterbirds. Significant (p<0.01) linear relationships were obtained between concentrations of Cd, Cr, Cu, Mn, Pb and Zn in the eggs and eggshells of various Ardeid species, and these regression models were used to extrapolate the heavy metal concentrations in the Ardeid eggs of Mai Po. Extrapolated concentrations are consistent with data in the available literature, and advocate the potential use of these models as a non-invasive sampling method for predicting heavy metal contamination in Ardeid eggs. Copyright © 2014 Elsevier Inc. All rights reserved.
Modeling of transitional flows
NASA Technical Reports Server (NTRS)
Lund, Thomas S.
1988-01-01
An effort directed at developing improved transitional models was initiated. The focus of this work was concentrated on the critical assessment of a popular existing transitional model developed by McDonald and Fish in 1972. The objective of this effort was to identify the shortcomings of the McDonald-Fish model and to use the insights gained to suggest modifications or alterations of the basic model. In order to evaluate the transitional model, a compressible boundary layer code was required. Accordingly, a two-dimensional compressible boundary layer code was developed. The program was based on a three-point fully implicit finite difference algorithm where the equations were solved in an uncoupled manner with second order extrapolation used to evaluate the non-linear coefficients. Iteration was offered as an option if the extrapolation error could not be tolerated. The differencing scheme was arranged to be second order in both spatial directions on an arbitrarily stretched mesh. A variety of boundary condition options were implemented including specification of an external pressure gradient, specification of a wall temperature distribution, and specification of an external temperature distribution. Overall the results of the initial phase of this work indicate that the McDonald-Fish model does a poor job at predicting the details of the turbulent flow structure during the transition region.
Rate dependent strengths of some solder joints
NASA Astrophysics Data System (ADS)
Williamson, D. M.; Field, J. E.; Palmer, S. J. P.; Siviour, C. R.
2007-08-01
The shear strengths of three lead-free solder joints have been measured over the range of loading rates 10-3 to ~105 mm min-1. Binary (SnAg), ternary (SnAgCu) and quaternary (Castin: SnAgCuSb) alloys have been compared to a conventional binary SnPb solder alloy. Results show that at loading rates from 10-3 to 102 mm min-1, all four materials exhibit a linear relationship between the shear strength and the loading rate when the data are plotted on a log-log plot. At the highest loading rate of 105 mm min-1, the strengths of the binary alloys were in agreement with extrapolations made from the lower loading rate data. In contrast, the strengths of the higher order alloys were found to be significantly lower than those predicted by extrapolation. This is explained by a change in failure mechanism on the part of the higher order alloys. Similar behaviour was found in measurements of the tensile strengths of solder joints using a novel high-rate loading tensile test. Optical and electron microscopy were used to examine the microstructures of interest in conjunction with energy dispersive x-ray analysis for elemental identification. The effect of artificial aging and reflow of the solder joints is also reported.
Li, Y Q; Varandas, A J C
2010-09-16
An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system which is suitable for dynamics and kinetics studies of the reactions of N(2D) + H2(X1Sigmag+) NH(a1Delta) + H(2S) and their isotopomeric variants. It is obtained by fitting ab initio energies calculated at the multireference configuration interaction level with the aug-cc-pVQZ basis set, after slightly correcting semiempirically the dynamical correlation using the double many-body expansion-scaled external correlation method. The function so obtained is compared in detail with a potential energy surface of the same family obtained by extrapolating the calculated raw energies to the complete basis set limit. The topographical features of the novel global potential energy surface are examined in detail and found to be in general good agreement with those calculated directly from the raw ab initio energies, as well as previous calculations available in the literature. The novel function has been built so as to become degenerate at linear geometries with the ground-state potential energy surface of A'' symmetry reported by our group, where both form a Renner-Teller pair.
Low-energy pion-nucleon scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbs, W.R.; Ai, L.; Kaufmann, W.B.
An analysis of low-energy charged pion-nucleon data from recent {pi}{sup {plus_minus}}p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f{sup 2}=0.0756{plus_minus}0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P{sub 31} and P{sub 13} partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the {Sigma} term. Off-shell amplitudes are also provided. {copyright} {italmore » 1998} {ital The American Physical Society}« less
Why didn't Box-Jenkins win (again)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pack, D.J.; Downing, D.J.
This paper focuses on the forecasting performance of the Box-Jenkins methodology applied to the 111 time series of the Makridakis competition. It considers the influence of the following factors: (1) time series length, (2) time-series information (autocorrelation) content, (3) time-series outliers or structural changes, (4) averaging results over time series, and (5) forecast time origin choice. It is found that the 111 time series contain substantial numbers of very short series, series with obvious structural change, and series whose histories are relatively uninformative. If these series are typical of those that one must face in practice, the real message ofmore » the competition is that univariate time series extrapolations will frequently fail regardless of the methodology employed to produce them.« less
Safety Assessment of Dialkyl Sulfosuccinate Salts as Used in Cosmetics.
Fiume, Monice M; Heldreth, Bart; Bergfeld, Wilma F; Belsito, Donald V; Hill, Ronald A; Klaassen, Curtis D; Liebler, Daniel C; Marks, James G; Shank, Ronald C; Slaga, Thomas J; Snyder, Paul W; Andersen, F Alan
2016-11-01
The Cosmetic Ingredient Review (CIR) Expert Panel (Panel) assessed the safety of 8 dialkyl sulfosuccinate salts for use in cosmetics, finding that these ingredients are safe in cosmetics in the present practices of use and concentration when formulated to be nonirritating. The dialkyl sulfosuccinate salts primarily function as surfactants in cosmetics. The Panel reviewed the new and existing available animal and clinical data in making its determination of safety. The Panel found it appropriate to extrapolate the data on diethylhexyl sodium sulfosuccinate to assess the safety of the entire group because all of the diesters are of a similar alkyl chain length, all are symmetrically substituted, and all have similar functions in cosmetic formulations. © The Author(s) 2016.
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?
NASA Astrophysics Data System (ADS)
Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.
2016-02-01
It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C 0 lies between the extrapolated Ae3 ( γ/ α) and Acm ( γ/ θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren Extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren Extrapolation, even if the average austenite composition is outside.
NASA Technical Reports Server (NTRS)
Goldowsky, Michael P. (Inventor)
1987-01-01
A reciprocating linear motor is formed with a pair of ring-shaped permanent magnets having opposite radial polarizations, held axially apart by a nonmagnetic yoke, which serves as an axially displaceable armature assembly. A pair of annularly wound coils having axial lengths which differ from the axial lengths of the permanent magnets are serially coupled together in mutual opposition and positioned with an outer cylindrical core in axial symmetry about the armature assembly. One embodiment includes a second pair of annularly wound coils serially coupled together in mutual opposition and an inner cylindrical core positioned in axial symmetry inside the armature radially opposite to the first pair of coils. Application of a potential difference across a serial connection of the two pairs of coils creates a current flow perpendicular to the magnetic field created by the armature magnets, thereby causing limited linear displacement of the magnets relative to the coils.
Efficient Third-Order Distributed Feedback Laser with Enhanced Beam Pattern
NASA Technical Reports Server (NTRS)
Hu, Qing (Inventor); Lee, Alan Wei Min (Inventor); Kao, Tsung-Yu (Inventor)
2015-01-01
A third-order distributed feedback laser has an active medium disposed on a substrate as a linear array of segments having a series of periodically spaced interstices therebetween and a first conductive layer disposed on a surface of the active medium on each of the segments and along a strip from each of the segments to a conductive electrical contact pad for application of current along a path including the active medium. Upon application of a current through the active medium, the active medium functions as an optical waveguide, and there is established an alternating electric field, at a THz frequency, both in the active medium and emerging from the interstices. Spacing of adjacent segments is approximately half of a wavelength of the THz frequency in free space or an odd integral multiple thereof, so that the linear array has a coherence length greater than the length of the linear array.
Non-linear wave phenomena in Josephson elements for superconducting electronics
NASA Astrophysics Data System (ADS)
Christiansen, P. L.; Parmentier, R. D.; Skovgaard, O.
1985-07-01
The long and intermediate length Josephson tunnel junction oscillator with overlap geometry of linear and circular configuration, is investigated by computational solution of the perturbed sine-Gordon equation model and by experimental measurements. The model predicts the experimental results very well. Line oscillators as well as ring oscillators are treated. For long junctions soliton perturbation methods are developed and turn out to be efficient prediction tools, also providing physical understanding of the dynamics of the oscillator. For intermediate length junctions expansions in terms of linear cavity modes reduce computational costs. The narrow linewidth of the electromagnetic radiation (typically 1 kHz of a line at 10 GHz) is demonstrated experimentally. Corresponding computer simulations requiring a relative accuracy of less than 10 to the -7th power are performed on supercomputer CRAY-1-S. The broadening of linewidth due to external microradiation and internal thermal noise is determined.
NASA Astrophysics Data System (ADS)
Viesca, R. C.; Rice, J. R.
2011-12-01
We address the nucleation of dynamic landslide rupture in response to gradual pore pressure increases. Nucleation marks the onset of acceleration of the overlying slope mass due to the suddenly rapid enlargement of a sub-surface zone of shear failure, previously deforming quasi-statically. We model that zone as a planar surface undergoing initially linear slip-weakening frictional failure within a bordering linear-elastic medium. The results are also relevant to earthquake nucleation. The sub-surface rupture zone considered runs parallel to the free surface of a uniform slope, under a 2D plane-strain deformation state. We show results for ruptures with friction coefficients following linear slip weakening (i.e., the residual friction is not yet reached). For spatially broad increases in pore pressure, the nucleation length depends on a ratio of depth to a cohesive zone length scale. In the very broad-increase limit, a direct numerical solution for nucleation lengths compares well with solutions to a corresponding eigenvalue problem (similar to Uenishi and Rice [JGR '03]), in which spatial variations in normal stress are neglected. We estimate nucleation lengths for subaerial and submarine conditions using data [e.g., Bishop et al., Géotech. '71; Stark et al., JGGE '05] from ring-shear tests on sediments (peak friction fp = 0.5, frictional slip-weakening rate within the range w = -df/d(slip) = 0.1/cm-1/cm). We assume that only pre-stresses, and not material properties, vary with depth. With such fp and w, we find for a range of subsurface depths and shear moduli μ that nucleation lengths are typically several hundred meters long for shallow undersea slopes, and up to an order of magnitude less for steeper slopes on the Earth's surface. In the submarine case, this puts nucleation lengths in a size range comparable to observed pore-pressure-generated seafloor disturbances as pockmarks [e.g., Gay et al., MG '06].
Smalø, Hans S; Astrand, Per-Olof; Jensen, Lasse
2009-07-28
The electronegativity equalization model (EEM) has been combined with a point-dipole interaction model to obtain a molecular mechanics model consisting of atomic charges, atomic dipole moments, and two-atom relay tensors to describe molecular dipole moments and molecular dipole-dipole polarizabilities. The EEM has been phrased as an atom-atom charge-transfer model allowing for a modification of the charge-transfer terms to avoid that the polarizability approaches infinity for two particles at infinite distance and for long chains. In the present work, these shortcomings have been resolved by adding an energy term for transporting charges through individual atoms. A Gaussian distribution is adopted for the atomic charge distributions, resulting in a damping of the electrostatic interactions at short distances. Assuming that an interatomic exchange term may be described as the overlap between two electronic charge distributions, the EEM has also been extended by a short-range exchange term. The result is a molecular mechanics model where the difference of charge transfer in insulating and metallic systems is modeled regarding the difference in bond length between different types of system. For example, the model is capable of modeling charge transfer in both alkanes and alkenes with alternating double bonds with the same set of carbon parameters only relying on the difference in bond length between carbon sigma- and pi-bonds. Analytical results have been obtained for the polarizability of a long linear chain. These results show that the model is capable of describing the polarizability scaling both linearly and nonlinearly with the size of the system. Similarly, a linear chain with an end atom with a high electronegativity has been analyzed analytically. The dipole moment of this model system can either be independent of the length or increase linearly with the length of the chain. In addition, the model has been parametrized for alkane and alkene chains with data from density functional theory calculations, where the polarizability behaves differently with the chain length. For the molecular dipole moment, the same two systems have been studied with an aldehyde end group. Both the molecular polarizability and the dipole moment are well described as a function of the chain length for both alkane and alkene chains demonstrating the power of the presented model.
Application of Statistical Learning Theory to Plankton Image Analysis
2006-06-01
linear distance interval from 1 to 40 pixels and two directions formula (horizontal & vertical, and diagonals), EF2 is EF with 7 ex- ponential distance...and four directions formula (horizontal, vertical and two diagonals). It is clear that exponential distance inter- val works better than the linear ...PSI - PS by Vincent, linear and pseudo opening and closing spectra, each has 40 elements, total feature length of 160. PS2 - PS modified from Mei- jster
The Gibbs free energy of homogeneous nucleation: From atomistic nuclei to the planar limit.
Cheng, Bingqing; Tribello, Gareth A; Ceriotti, Michele
2017-09-14
In this paper we discuss how the information contained in atomistic simulations of homogeneous nucleation should be used when fitting the parameters in macroscopic nucleation models. We show how the number of solid and liquid atoms in such simulations can be determined unambiguously by using a Gibbs dividing surface and how the free energy as a function of the number of solid atoms in the nucleus can thus be extracted. We then show that the parameters (the chemical potential, the interfacial free energy, and a Tolman correction) of a model based on classical nucleation theory can be fitted using the information contained in these free-energy profiles but that the parameters in such models are highly correlated. This correlation is unfortunate as it ensures that small errors in the computed free energy surface can give rise to large errors in the extrapolated properties of the fitted model. To resolve this problem we thus propose a method for fitting macroscopic nucleation models that uses simulations of planar interfaces and simulations of three-dimensional nuclei in tandem. We show that when the chemical potentials and the interface energy are pinned to their planar-interface values, more precise estimates for the Tolman length are obtained. Extrapolating the free energy profile obtained from small simulation boxes to larger nuclei is thus more reliable.
XENON100 exclusion limit without considering Leff as a nuisance parameter
NASA Astrophysics Data System (ADS)
Davis, Jonathan H.; Bœhm, Céline; Oppermann, Niels; Ensslin, Torsten; Lacroix, Thomas
2012-07-01
In 2011, the XENON100 experiment has set unprecedented constraints on dark matter-nucleon interactions, excluding dark matter candidates with masses down to 6 GeV if the corresponding cross section is larger than 10-39cm2. The dependence of the exclusion limit in terms of the scintillation efficiency (Leff) has been debated at length. To overcome possible criticisms XENON100 performed an analysis in which Leff was considered as a nuisance parameter and its uncertainties were profiled out by using a Gaussian likelihood in which the mean value corresponds to the best fit Leff value (smoothly extrapolated to 0 below 3 keVnr). Although such a method seems fairly robust, it does not account for more extreme types of extrapolation nor does it enable us to anticipate how much the exclusion limit would vary if new data were to support a flat behavior for Leff below 3 keVnr, for example. Yet, such a question is crucial for light dark matter models which are close to the published XENON100 limit. To answer this issue, we use a maximum likelihood ratio analysis, as done by the XENON100 Collaboration, but do not consider Leff as a nuisance parameter. Instead, Leff is obtained directly from the fits to the data. This enables us to define frequentist confidence intervals by marginalizing over Leff.
Mars Exploration Rover Landing Site Hectometer Slopes
NASA Astrophysics Data System (ADS)
Haldemann, A. F.; Anderson, F. S.
2002-12-01
The Mars Exploration Rover (MER) airbag landing system imposes a maximum slope of 5 degrees over 100 m length-scales. This limit avoids dangerous changes in elevation over the horizontal travel distance of the lander on its parachute between the time of the last radar altimeter detection of the surface and the time the retro-rockets fire and the bridle to the airbags is cut. Stereo imagery from the MGS MOC can provide information at this length scale, but MOC stereo coverage is sparse, even when targeted to MER landing sites. Additionally, MGS spacecraft stability issues affect the DEMs at precisely the hectometric length-scale1. The MOLA instrument provides global coverage pulse-width measurements2 over a single MOLA-pulse footprint, which is about 100 m in diameter. However, the pulse spread only provides an upper bound on the 100 m slope. We chose another approach. We sample the inter-pulse root-mean-square (RMS) height deviations for MOLA track segments restricted to pixels of 0.1 deg latitude by 0.1 deg longitude. Then, under the assumption of self-affine topography, we determine the scale-dependence of the RMS deviations and extrapolate that behavior over the range of 300 m to 1.2 km downward to the 100 m scale. Shepard et al.3 clearly summarize the statistical properties of the RMS deviation (noting that it also goes by the name structure function, variogram or Allan deviation), and we follow their nomenclature. The RMS deviation is a useful measure in that it can be directly converted to RMS-slope for a given length-scale. We map the results of this self-affine extrapolation method for each of the proposed MER landing sites as well as Viking Lander 1 (VL1) and Pathfiner (MPF). In order of decreasing average hectometer RMS-slopes, Melas (about 4.5 degrees) > Elysium EP80 > Gusev > MPF > Elysium EP78 > VL1 > Athabasca > Isidis > Hematite (about 1 degree). We also map the scaling parameter (Hurst exponent); its behavior in the MER landing site regions is interesting in how it ties together the regional behavior of kilometer slopes (directly measured with MOLA) with the decameter and meter slopes (locally derived from stereo image analysis or radar scattering). 1Kirk, R. L., E. Howington-Kraus, and B. A. Archinal, Int. Arch. Photogramm. Remote Sens., XXVIII(B4), 476 (CD-ROM), 2001; Kirk, R. L., E. Howington-Kraus, and B. A. Archinal, Lunar Planet Sci., XXXIII, abs 1988, 2002. 2Garvin, J. B., and J. J. Frawley, Lunar Planet. Sci., XXXI, abs 1884, 2000. 3Shepard, M. K., R. A. Brackett, and R. E. Arvidson, J. Geophys. Res., 100, 11709-11718, 1995.; Shepard, M. K., et al., J. Geophys. Res., 106, 32777-32796, 2001.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
Relationship between photoreceptor outer segment length and visual acuity in diabetic macular edema.
Forooghian, Farzin; Stetson, Paul F; Meyer, Scott A; Chew, Emily Y; Wong, Wai T; Cukras, Catherine; Meyerle, Catherine B; Ferris, Frederick L
2010-01-01
The purpose of this study was to quantify photoreceptor outer segment (PROS) length in 27 consecutive patients (30 eyes) with diabetic macular edema using spectral domain optical coherence tomography and to describe the correlation between PROS length and visual acuity. Three spectral domain-optical coherence tomography scans were performed on all eyes during each session using Cirrus HD-OCT. A prototype algorithm was developed for quantitative assessment of PROS length. Retinal thicknesses and PROS lengths were calculated for 3 parameters: macular grid (6 x 6 mm), central subfield (1 mm), and center foveal point (0.33 mm). Intrasession repeatability was assessed using coefficient of variation and intraclass correlation coefficient. The association between retinal thickness and PROS length with visual acuity was assessed using linear regression and Pearson correlation analyses. The main outcome measures include intrasession repeatability of macular parameters and correlation of these parameters with visual acuity. Mean retinal thickness and PROS length were 298 mum to 381 microm and 30 microm to 32 mum, respectively, for macular parameters assessed in this study. Coefficient of variation values were 0.75% to 4.13% for retinal thickness and 1.97% to 14.01% for PROS length. Intraclass correlation coefficient values were 0.96 to 0.99 and 0.73 to 0.98 for retinal thickness and PROS length, respectively. Slopes from linear regression analyses assessing the association of retinal thickness and visual acuity were not significantly different from 0 (P > 0.20), whereas the slopes of PROS length and visual acuity were significantly different from 0 (P < 0.0005). Correlation coefficients for macular thickness and visual acuity ranged from 0.13 to 0.22, whereas coefficients for PROS length and visual acuity ranged from -0.61 to -0.81. Photoreceptor outer segment length can be quantitatively assessed using Cirrus HD-OCT. Although the intrasession repeatability of PROS measurements was less than that of macular thickness measurements, the stronger correlation of PROS length with visual acuity suggests that the PROS measures may be more directly related to visual function. Photoreceptor outer segment length may be a useful physiologic outcome measure, both clinically and as a direct assessment of treatment effects.
Changes in Clavicle Length and Maturation in Americans: 1840-1980.
Langley, Natalie R; Cridlin, Sandra
2016-01-01
Secular changes refer to short-term biological changes ostensibly due to environmental factors. Two well-documented secular trends in many populations are earlier age of menarche and increasing stature. This study synthesizes data on maximum clavicle length and fusion of the medial epiphysis in 1840-1980 American birth cohorts to provide a comprehensive assessment of developmental and morphological change in the clavicle. Clavicles from the Hamann-Todd Human Osteological Collection (n = 354), McKern and Stewart Korean War males (n = 341), Forensic Anthropology Data Bank (n = 1,239), and the McCormick Clavicle Collection (n = 1,137) were used in the analysis. Transition analysis was used to evaluate fusion of the medial epiphysis (scored as unfused, fusing, or fused). Several statistical treatments were used to assess fluctuations in maximum clavicle length. First, Durbin-Watson tests were used to evaluate autocorrelation, and a local regression (LOESS) was used to identify visual shifts in the regression slope. Next, piecewise regression was used to fit linear regression models before and after the estimated breakpoints. Multiple starting parameters were tested in the range determined to contain the breakpoint, and the model with the smallest mean squared error was chosen as the best fit. The parameters from the best-fit models were then used to derive the piecewise models, which were compared with the initial simple linear regression models to determine which model provided the best fit for the secular change data. The epiphyseal union data indicate a decline in the age at onset of fusion since the early twentieth century. Fusion commences approximately four years earlier in mid- to late twentieth-century birth cohorts than in late nineteenth- and early twentieth-century birth cohorts. However, fusion is completed at roughly the same age across cohorts. The most significant decline in age at onset of epiphyseal union appears to have occurred since the mid-twentieth century. LOESS plots show a breakpoint in the clavicle length data around the mid-twentieth century in both sexes, and piecewise regression models indicate a significant decrease in clavicle length in the American population after 1940. The piecewise model provides a slightly better fit than the simple linear model. Since the model standard error is not substantially different from the piecewise model, an argument could be made to select the less complex linear model. However, we chose the piecewise model to detect changes in clavicle length that are overfitted with a linear model. The decrease in maximum clavicle length is in line with a documented narrowing of the American skeletal form, as shown by analyses of cranial and facial breadth and bi-iliac breadth of the pelvis. Environmental influences on skeletal form include increases in body mass index, health improvements, improved socioeconomic status, and elimination of infectious diseases. Secular changes in bony dimensions and skeletal maturation stipulate that medical and forensic standards used to deduce information about growth, health, and biological traits must be derived from modern populations.
The covalently bound diazo group as an infrared probe for hydrogen bonding environments.
You, Min; Liu, Liyuan; Zhang, Wenkai
2017-07-26
Covalently bound diazo groups are frequently found in biomolecular substrates. The C[double bond, length as m-dash]N[double bond, length as m-dash]N asymmetric stretching vibration (ν as ) of the diazo group has a large extinction coefficient and appears in an uncongested spectral region. To evaluate the solvatochromism of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band for studying biomolecules, we recorded the infrared (IR) spectra of a diazo model compound, 2-diazo-3-oxo-butyric acid ethyl ester, in different solvents. The width of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band was linearly dependent on the Kamlet-Taft solvent parameter, which reflects the polarizability and hydrogen bond accepting ability of the solvent. Therefore, the width of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band could be used to probe these properties for a solvent. We found that the position of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band was linearly correlated with the density of hydrogen bond donor groups in the solvent. We studied the relaxation dynamics and spectral diffusion of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band of a natural amino acid, 6-diazo-5-oxo-l-norleucine, in water using nonlinear IR spectroscopy. The relaxation and spectral diffusion time constants of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band were similar to those of the N[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band. We concluded that the position and width of the C[double bond, length as m-dash]N[double bond, length as m-dash]N ν as band of the diazo group could be used to probe the hydrogen bond donating and accepting ability of a solvent, respectively. These results suggest that the diazo group could be used as a site-specific IR probe for the local hydration environments.
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm; a two order of magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line-shapes.
Kurita, N; Ronning, F; Tokiwa, Y; Bauer, E D; Subedi, A; Singh, D J; Thompson, J D; Movshovich, R
2009-04-10
We have performed low-temperature specific heat and thermal conductivity measurements of the Ni-based superconductor BaNi2As2 (T{c}=0.7 K) in a magnetic field. In a zero field, thermal conductivity shows T-linear behavior in the normal state and exhibits a BCS-like exponential decrease below T{c}. The field dependence of the residual thermal conductivity extrapolated to zero temperature is indicative of a fully gapped superconductor. This conclusion is supported by the analysis of the specific heat data, which are well fit by the BCS temperature dependence from T{c} down to the lowest temperature of 0.1 K.
Effect of water vapor on sound absorption in nitrogen at low frequency/pressure ratios
NASA Technical Reports Server (NTRS)
Zuckerwar, A. J.; Griffin, W. A.
1981-01-01
Sound absorption measurements were made in N2-H2O binary mixtures at 297 K over the frequency/pressure range f/P of 0.1-2500 Hz/atm to investigate the vibrational relaxation peak of N2 and its location on f/P axis as a function of humidity. At low humidities the best fit to a linear relationship between the f/P(max) and humidity yields an intercept of 0.013 Hz/atm and a slope of 20,000 Hz/atm-mole fraction. The reaction rate constants derived from this model are lower than those obtained from the extrapolation of previous high-temperature data.
Kinetic limitations on the diffusional control theory of the ablation rate of carbon.
NASA Technical Reports Server (NTRS)
Maahs, H. G.
1971-01-01
It is shown that the theoretical maximum oxidation rate is limited in many cases even at temperatures much higher than 1650 deg K, not by oxygen transport, but by the kinetics of the carbon-oxygen reaction itself. Mass-loss rates have been calculated at air pressures of 0.01 atm, 1 atm, and 100 atm. It is found that at high temperatures the rate of the oxidation reaction is much slower than has generally been assumed on the basis of a simple linear extrapolation of Scala's 'fast' and 'slow' rate expressions. Accordingly it cannot be assumed that a transport limitation inevitably must be reached at high temperatures.
A far-field non-reflecting boundary condition for two-dimensional wake flows
NASA Technical Reports Server (NTRS)
Danowitz, Jeffrey S.; Abarbanel, Saul A.; Turkel, Eli
1995-01-01
Far-field boundary conditions for external flow problems have been developed based upon long-wave perturbations of linearized flow equations about a steady state far field solution. The boundary improves convergence to steady state in single-grid temporal integration schemes using both regular-time-stepping and local-time-stepping. The far-field boundary may be near the trailing edge of the body which significantly reduces the number of grid points, and therefore the computational time, in the numerical calculation. In addition the solution produced is smoother in the far-field than when using extrapolation conditions. The boundary condition maintains the convergence rate to steady state in schemes utilizing multigrid acceleration.
AXES OF EXTRAPOLATION IN RISK ASSESSMENTS
Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...
CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS
Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...
Horta, Bernardo Lessa; Victora, Cesar G; de Mola, Christian Loret; Quevedo, Luciana; Pinheiro, Ricardo Tavares; Gigante, Denise P; Motta, Janaina Vieira Dos Santos; Barros, Fernando C
2017-03-01
To assess the associations of birthweight, nutritional status and growth in childhood with IQ, years of schooling, and monthly income at 30 years of age. In 1982, the 5 maternity hospitals in Pelotas, Brazil, were visited daily and 5914 live births were identified. At 30 years of age, 3701 subjects were interviewed. IQ, years of schooling, and income were measured. On average, their IQ was 98 points, they had 11.4 years of schooling, and the mean income was 1593 reais. After controlling for several confounders, birthweight and attained weight and length/height for age at 2 and 4 years of age were associated positively with IQ, years of years of schooling, and income, except for the association between length at 2 years of age and income. Conditional growth analyses were used to disentangle linear growth from relative weight gain. Conditional length at 2 years of age ≥1 SD score above the expected value, compared with ≥1 SD below the expected, was associated with an increase in IQ (4.28 points; 95% CI, 2.66-5.90), years of schooling (1.58 years; 95% CI, 1.08-2.08), and monthly income (303 Brazilian reais; 95% CI, 44-563). Relative weight gain, above what would be expected from linear growth, was not associated with the outcomes. In a middle-income setting, promotion of linear growth in the first 1000 days of life is likely to increase adult IQ, years of schooling, and income. Weight gain in excess of what is expected from linear growth does not seem to improve human capital. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Linear variability of gait according to socioeconomic status in elderly
2016-01-01
Aim: To evaluate the linear variability of comfortable gait according to socioeconomic status in community-dwelling elderly. Method: For this cross-sectional observational study 63 self- functioning elderly were categorized according to the socioeconomic level on medium-low (n= 33, age 69.0 ± 5.0 years) and medium-high (n= 30, age 71.0 ± 6.0 years). Each participant was asked to perform comfortable gait speed for 3 min on an 40 meters elliptical circuit, recording in video five strides which were transformed into frames, determining the minimum foot clearance, maximum foot clearance and stride length. The intra-group linear variability was calculated by the coefficient of variation in percent. Results: The trajectory parameters variability is not different according to socioeconomic status with a 30% (range= 15-55%) for the minimum foot clearance and 6% (range= 3-8%) in maximum foot clearance. Meanwhile, the stride length consistently was more variable in the medium-low socioeconomic status for the overall sample (p= 0.004), female (p= 0.041) and male gender (p= 0.007), with values near 4% (range = 2.5-5.0%) in the medium-low and 2% (range = 1.5-3.5%) in the medium-high. Conclusions: The intra-group linear variability is consistently higher and within reference parameters for stride length during comfortable gait for elderly belonging to medium-low socioeconomic status. This might be indicative of greater complexity and consequent motor adaptability. PMID:27546931
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Neeraj; Büchner, Jörg; Max Planck Institute for Solar System Research, Justus-Von-Liebig-Weg-3, Göttingen
In collisionless magnetic reconnection, electron current sheets (ECS) with thickness of the order of an electron inertial length form embedded inside ion current sheets with thickness of the order of an ion inertial length. These ECS's are susceptible to a variety of instabilities which have the potential to affect the reconnection rate and/or the structure of reconnection. We carry out a three dimensional linear eigen mode stability analysis of electron shear flow driven instabilities of an electron scale current sheet using an electron-magnetohydrodynamic plasma model. The linear growth rate of the fastest unstable mode was found to drop with themore » thickness of the ECS. We show how the nature of the instability depends on the thickness of the ECS. As long as the half-thickness of the ECS is close to the electron inertial length, the fastest instability is that of a translational symmetric two-dimensional (no variations along flow direction) tearing mode. For an ECS half thickness sufficiently larger or smaller than the electron inertial length, the fastest mode is not a tearing mode any more and may have finite variations along the flow direction. Therefore, the generation of plasmoids in a nonlinear evolution of ECS is likely only when the half-thickness is close to an electron inertial length.« less
Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera
Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin
2016-01-01
This study aims to demonstrate the feasibility for classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting minimum diameter of the equatorial section and volume and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with the coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one with the coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.
2015-09-08
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear oper- ator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approxi- mation and the mean chord approximation are applied to estimate the leakagemore » frac- tion of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.« less
Adaptive convex combination approach for the identification of improper quaternion processes.
Ujang, Bukhari Che; Jahanchahi, Cyrus; Took, Clive Cheong; Mandic, Danilo P
2014-01-01
Data-adaptive optimal modeling and identification of real-world vector sensor data is provided by combining the fractional tap-length (FT) approach with model order selection in the quaternion domain. To account rigorously for the generality of such processes, both second-order circular (proper) and noncircular (improper), the proposed approach in this paper combines the FT length optimization with both the strictly linear quaternion least mean square (QLMS) and widely linear QLMS (WL-QLMS). A collaborative approach based on QLMS and WL-QLMS is shown to both identify the type of processes (proper or improper) and to track their optimal parameters in real time. Analysis shows that monitoring the evolution of the convex mixing parameter within the collaborative approach allows us to track the improperness in real time. Further insight into the properties of those algorithms is provided by establishing a relationship between the steady-state error and optimal model order. The approach is supported by simulations on model order selection and identification of both strictly linear and widely linear quaternion-valued systems, such as those routinely used in renewable energy (wind) and human-centered computing (biomechanics).
Locomotion of inchworm-inspired robot made of smart soft composite (SSC).
Wang, Wei; Lee, Jang-Yeob; Rodrigue, Hugo; Song, Sung-Hyuk; Chu, Won-Shik; Ahn, Sung-Hoon
2014-10-07
A soft-bodied robot made of smart soft composite with inchworm-inspired locomotion capable of both two-way linear and turning movement has been proposed, developed, and tested. The robot was divided into three functional parts based on the different functions of the inchworm: the body, the back foot, and the front foot. Shape memory alloy wires were embedded longitudinally in a soft polymer to imitate the longitudinal muscle fibers that control the abdominal contractions of the inchworm during locomotion. Each foot of the robot has three segments with different friction coefficients to implement the anchor and sliding movement. Then, utilizing actuation patterns between the body and feet based on the looping gait, the robot achieves a biomimetic inchworm gait. Experiments were conducted to evaluate the robot's locomotive performance for both linear locomotion and turning movement. Results show that the proposed robot's stride length was nearly one third of its body length, with a maximum linear speed of 3.6 mm s(-1), a linear locomotion efficiency of 96.4%, a maximum turning capability of 4.3 degrees per stride, and a turning locomotion efficiency of 39.7%.
Biological observations on the crocodile shark Pseudocarcharias kamoharai.
Dai, X J; Zhu, J F; Chen, X J; Xu, L X; Chen, Y
2012-04-01
Sex ratios and gravid characteristics were analysed for the crocodile shark Pseudocarcharias kamoharai from the tropical eastern Pacific Ocean. Gravid females ranged from 80 to 102 cm fork length (L(F) ). The mode litter size was four (two embryos per uterus), mean embryo length was linearly correlated with maternal length (r = 0·465, n = 32); there was no significant difference in L(F) between female and male embryos. © 2011 The Authors. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
Prediction of leaf area in individual leaves of cherrybark oak seedlings (Quercus pagoda Raf.)
Yanfei Guo; Brian Lockhart; John Hodges
1995-01-01
The prediction of leaf area for cherrybark oak (Quercus pagoda Raf.) seedlings is important for studying the physiology of the species. Linear and polynomial models involving leaf length, width, fresh weight, dry weight, and internodal length were tested independently and collectively to predict leaf area. Twenty-nine cherrybark oak seedlings were...
Random function theory revisited - Exact solutions versus the first order smoothing conjecture
NASA Technical Reports Server (NTRS)
Lerche, I.; Parker, E. N.
1975-01-01
We remark again that the mathematical conjecture known as first order smoothing or the quasi-linear approximation does not give the correct dependence on correlation length (time) in many cases, although it gives the correct limit as the correlation length (time) goes to zero. In this sense, then, the method is unreliable.
Bond Length Dependence on Quantum States as Shown by Spectroscopy
ERIC Educational Resources Information Center
Lim, Kieran F.
2005-01-01
A discussion on how a spreadsheet simulation of linear-molecular spectra could be used to explore the dependence of rotational band spacing and contours on average bond lengths in the initial and final quantum states is presented. The simulation of hydrogen chloride IR, iodine UV-vis, and nitrogen UV-vis spectra clearly show whether the average…
Using twig diameters to estimate browse utilization on three shrub species in southeastern Montana
Mark A. Rumble
1987-01-01
Browse utilization estimates based on twig length and twig weight were compared for skunkbush sumac, wax currant, and chokecherry. Linear regression analysis was valid for twig length data; twig weight equations are nonlinear. Estimates of twig weight are more accurate. Problems encountered during development of a utilization model are discussed.
Can we predict body height from segmental bone length measurements? A study of 3,647 children.
Cheng, J C; Leung, S S; Chiu, B S; Tse, P W; Lee, C W; Chan, A K; Xia, G; Leung, A K; Xu, Y Y
1998-01-01
It is well known that significant differences exist in the anthropometric data of different races and ethnic groups. This is a cross-sectional study on segmental bone length based on 3,647 Chinese children of equal sex distribution aged 3-18 years. The measurements included standing height, weight, arm span, foot length, and segmental bone length of the humerus, radius, ulna, and tibia. A normality growth chart of all the measured parameters was constructed. Statistical analysis of the results showed a very high linear correlation of height with arm span, foot length, and segmental bone lengths with a correlation coefficient of 0.96-0.99 for both sexes. No differences were found between the right and left side of all the segmental bone lengths. These Chinese children were found to have a proportional limb segmental length relative to the trunk.
Patterns of Growth after Kidney Transplantation among Children with ESRD
Franke, Doris; Thomas, Lena; Steffens, Rena; Pavičić, Leo; Gellermann, Jutta; Froede, Kerstin; Querfeld, Uwe; Haffner, Dieter
2015-01-01
Background and objectives Poor linear growth is a frequent complication of CKD. This study evaluated the effect of kidney transplantation on age-related growth of linear body segments in pediatric renal transplant recipients who were enrolled from May 1998 until August 2013 in the CKD Growth and Development observational cohort study. Design, setting, participants, & measurements Linear growth (height, sitting height, arm and leg lengths) was prospectively investigated during 1639 annual visits in a cohort of 389 pediatric renal transplant recipients ages 2–18 years with a median follow-up of 3.4 years (interquartile range, 1.9–5.9 years). Linear mixed-effects models were used to assess age-related changes and predictors of linear body segments. Results During early childhood, patients showed lower mean SD scores (SDS) for height (−1.7) and a markedly elevated sitting height index (ratio of sitting height to total body height) compared with healthy children (1.6 SDS), indicating disproportionate stunting (each P<0.001). After early childhood a sustained increase in standardized leg length and a constant decrease in standardized sitting height were noted (each P<0.001), resulting in significant catch-up growth and almost complete normalization of sitting height index by adult age (0.4 SDS; P<0.01 versus age 2–4 years). Time after transplantation, congenital renal disease, bone maturation, steroid exposure, degree of metabolic acidosis and anemia, intrauterine growth restriction, and parental height were significant predictors of linear body dimensions and body proportions (each P<0.05). Conclusions Children with ESRD present with disproportionate stunting. In pediatric renal transplant recipients, a sustained increase in standardized leg length and total body height is observed from preschool until adult age, resulting in restoration of body proportions in most patients. Reduction of steroid exposure and optimal metabolic control before and after transplantation are promising measures to further improve growth outcome. PMID:25352379
Patterns of growth after kidney transplantation among children with ESRD.
Franke, Doris; Thomas, Lena; Steffens, Rena; Pavičić, Leo; Gellermann, Jutta; Froede, Kerstin; Querfeld, Uwe; Haffner, Dieter; Živičnjak, Miroslav
2015-01-07
Poor linear growth is a frequent complication of CKD. This study evaluated the effect of kidney transplantation on age-related growth of linear body segments in pediatric renal transplant recipients who were enrolled from May 1998 until August 2013 in the CKD Growth and Development observational cohort study. Linear growth (height, sitting height, arm and leg lengths) was prospectively investigated during 1639 annual visits in a cohort of 389 pediatric renal transplant recipients ages 2-18 years with a median follow-up of 3.4 years (interquartile range, 1.9-5.9 years). Linear mixed-effects models were used to assess age-related changes and predictors of linear body segments. During early childhood, patients showed lower mean SD scores (SDS) for height (-1.7) and a markedly elevated sitting height index (ratio of sitting height to total body height) compared with healthy children (1.6 SDS), indicating disproportionate stunting (each P<0.001). After early childhood a sustained increase in standardized leg length and a constant decrease in standardized sitting height were noted (each P<0.001), resulting in significant catch-up growth and almost complete normalization of sitting height index by adult age (0.4 SDS; P<0.01 versus age 2-4 years). Time after transplantation, congenital renal disease, bone maturation, steroid exposure, degree of metabolic acidosis and anemia, intrauterine growth restriction, and parental height were significant predictors of linear body dimensions and body proportions (each P<0.05). Children with ESRD present with disproportionate stunting. In pediatric renal transplant recipients, a sustained increase in standardized leg length and total body height is observed from preschool until adult age, resulting in restoration of body proportions in most patients. Reduction of steroid exposure and optimal metabolic control before and after transplantation are promising measures to further improve growth outcome. Copyright © 2015 by the American Society of Nephrology.
Chen, Xiaoqiu; Tian, Youhua; Xu, Lin
2015-10-01
Using leaf unfolding and leaf coloration data of a widely distributed herbaceous species, Taraxacum mongolicum, we detected linear trend and temperature response of the growing season at 52 stations from 1990 to 2009. Across the research region, the mean growing season beginning date marginal significantly advanced at a rate of -2.1 days per decade, while the mean growing season end date was significantly delayed at a rate of 3.1 days per decade. The mean growing season length was significantly prolonged at a rate of 5.1 days per decade. Over the 52 stations, linear trends of the beginning date correlate negatively with linear trends of spring temperature, whereas linear trends of the end date and length correlate positively with linear trends of autumn temperature and annual mean temperature. Moreover, the growing season linear trends are also closely related to the growing season responses to temperature and geographic coordinates plus elevation. Regarding growing season responses to temperature, a 1 °C increase in regional mean spring temperature results in an advancement of 2.1 days in regional mean growing season beginning date, and a 1 °C increase in regional mean autumn temperature causes a delay of 2.3 days in regional mean growing season end date. A 1 °C increase in regional annual mean temperature induces an extension of 8.7 days in regional mean growing season length. Over the 52 stations, response of the beginning date to spring temperature depends mainly on local annual mean temperature and geographic coordinates plus elevation. Namely, a 1 °C increase in spring temperature induces a larger advancement of the beginning date at warmer locations with lower latitudes and further west longitudes than at colder locations with higher latitudes and further east longitudes, while a 1 °C increase in spring temperature causes a larger advancement of the beginning date at higher than at lower elevations.
NASA Astrophysics Data System (ADS)
Chen, Xiaoqiu; Tian, Youhua; Xu, Lin
2015-10-01
Using leaf unfolding and leaf coloration data of a widely distributed herbaceous species, Taraxacum mongolicum, we detected linear trend and temperature response of the growing season at 52 stations from 1990 to 2009. Across the research region, the mean growing season beginning date marginal significantly advanced at a rate of -2.1 days per decade, while the mean growing season end date was significantly delayed at a rate of 3.1 days per decade. The mean growing season length was significantly prolonged at a rate of 5.1 days per decade. Over the 52 stations, linear trends of the beginning date correlate negatively with linear trends of spring temperature, whereas linear trends of the end date and length correlate positively with linear trends of autumn temperature and annual mean temperature. Moreover, the growing season linear trends are also closely related to the growing season responses to temperature and geographic coordinates plus elevation. Regarding growing season responses to temperature, a 1 °C increase in regional mean spring temperature results in an advancement of 2.1 days in regional mean growing season beginning date, and a 1 °C increase in regional mean autumn temperature causes a delay of 2.3 days in regional mean growing season end date. A 1 °C increase in regional annual mean temperature induces an extension of 8.7 days in regional mean growing season length. Over the 52 stations, response of the beginning date to spring temperature depends mainly on local annual mean temperature and geographic coordinates plus elevation. Namely, a 1 °C increase in spring temperature induces a larger advancement of the beginning date at warmer locations with lower latitudes and further west longitudes than at colder locations with higher latitudes and further east longitudes, while a 1 °C increase in spring temperature causes a larger advancement of the beginning date at higher than at lower elevations.
Rayleigh scattering of linear alkylbenzene in large liquid scintillator detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Xiang, E-mail: xiangzhou@whu.edu.cn; Zhang, Zhenyu; Liu, Qian
2015-07-15
Rayleigh scattering poses an intrinsic limit for the transparency of organic liquid scintillators. This work focuses on the Rayleigh scattering length of linear alkylbenzene (LAB), which will be used as the solvent of the liquid scintillator in the central detector of the Jiangmen Underground Neutrino Observatory. We investigate the anisotropy of the Rayleigh scattering in LAB, showing that the resulting Rayleigh scattering length will be significantly shorter than reported before. Given the same overall light attenuation, this will result in a more efficient transmission of photons through the scintillator, increasing the amount of light collected by the photosensors and therebymore » the energy resolution of the detector.« less
Small Angle Neutron Scattering Observation of Chain Retraction after a Large Step Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanchard, A.; Heinrich, M.; Pyckhout-Hintzen, W.
The process of retraction in entangled linear chains after a fast nonlinear stretch was detected from time-resolved but quenched small angle neutron scattering (SANS) experiments on long, well-entangled polyisoprene chains. The statically obtained SANS data cover the relevant time regime for retraction, and they provide a direct, microscopic verification of this nonlinear process as predicted by the tube model. Clear, quantitative agreement is found with recent theories of contour length fluctuations and convective constraint release, using parameters obtained mainly from linear rheology. The theory captures the full range of scattering vectors once the crossover to fluctuations on length scales belowmore » the tube diameter is accounted for.« less
Reinforcement of drinking by running: effect of fixed ratio and reinforcement time1
Premack, David; Schaeffer, Robert W.; Hundt, Alan
1964-01-01
Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT. PMID:14120150
REINFORCEMENT OF DRINKING BY RUNNING: EFFECT OF FIXED RATIO AND REINFORCEMENT TIME.
PREMACK, D; SCHAEFFER, R W; HUNDT, A
1964-01-01
Rats were required to complete varying numbers of licks (FR), ranging from 10 to 300, in order to free an activity wheel for predetermined times (CT) ranging from 2 to 20 sec. The reinforcement of drinking by running was shown both by an increased frequency of licking, and by changes in length of the burst of licking relative to operant-level burst length. In log-log coordinates, instrumental licking tended to be a linear increasing function of FR for the range tested, a linear decreasing function of CT for the range tested. Pause time was implicated in both of the above relations, being a generally increasing function of both FR and CT.
Salt Neutrino Detector for Ultrahigh-Energy Neutrinos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiba, M.; Yasuda, O.; Kamijo, T.
2004-11-01
Rock salt and limestone are studied to determine their suitability for use as a radio-wave transmission medium in an ultrahigh energy (UHE) cosmic neutrino detector. A sensible radio wave would be emitted by the coherent Cherenkov radiation from negative excess charges inside an electromagnetic shower upon interaction of a UHE neutrino in a high-density medium (Askar'yan effect). If the attenuation length for the radio wave in the material is large, a relatively small number of radio-wave sensors could detect the interaction occurring in the massive material. We measured the complex permittivity of the rock salt and limestone by the perturbedmore » cavity resonator method at 9.4 and 1 GHz to good precision. We obtained new results of measurements at the frequency at 1.0 GHz. The measured value of the radio-wave attenuation length of synthetic rock salt samples is 1080 m. The samples from the Hockley salt mine in the United States show attenuation length of 180 m at 1 GHz, and then we estimate it by extrapolation to be as long as 900 m at 200 MHz. The results show that there is a possibility of utilizing natural massive deposits of rock salt for a UHE neutrino detector. A salt neutrino detector with a size of 2 x 2 x 2 km would detect 10 UHE neutrino/yr generated through the GZK process.« less
Simulation of two-dimensional adjustable liquid gradient refractive index (L-GRIN) microlens
NASA Astrophysics Data System (ADS)
Le, Zichun; Wu, Xiang; Sun, Yunli; Du, Ying
2017-07-01
In this paper, a two-dimensional liquid gradient refractive index (L-GRIN) microlens is designed which can be used in adjusting focusing direction and focal spot of light beam. Finite element method (FEM) is used to simulate the convection diffusion process happening in core inlet flow and cladding inlet flow. And the ray tracing method shows us the light beam focusing effect including the extrapolation of focal length and output beam spot size. When the flow rates of the core and cladding fluids are held the same between the internal and external, left and right, and upper and lower inlets, the focal length varied from 313 μm to 53.3 μm while the flow rate of liquids ranges from 500 pL/s to 10,000 pL/s. While the core flow rate is bigger than the cladding inlet flow rate, the light beam will focus on a light spot with a tunable size. By adjusting the ratio of cladding inlet flow rate including Qright/Qleft and Qup/Qdown, we get the adjustable two-dimensional focus direction rather than the one-dimensional focusing. In summary, by adjusting the flow rate of core inlet and cladding inlet, the focal length, output beam spot and focusing direction of the input light beam can be manipulated. We suppose this kind of flexible microlens can be used in integrated optics and lab-on-a-chip system.
Harari, Yaniv; Romano, Gal-Hagit; Ungar, Lior; Kupiec, Martin
2013-11-15
Telomeres are nucleoprotein structures that cap the ends of the linear eukaryotic chromosomes, thus protecting their stability and integrity. They play important roles in DNA replication and repair and are central to our understanding of aging and cancer development. In rapidly dividing cells, telomere length is maintained by the activity of telomerase. About 400 TLM (telomere length maintenance) genes have been identified in yeast, as participants of an intricate homeostasis network that keeps telomere length constant. Two papers have recently shown that despite this extremely complex control, telomere length can be manipulated by external stimuli. These results have profound implications for our understanding of cellular homeostatic systems in general and of telomere length maintenance in particular. In addition, they point to the possibility of developing aging and cancer therapies based on telomere length manipulation.
Extrapolation procedures in Mott electron polarimetry
NASA Technical Reports Server (NTRS)
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependent of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter, has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
Cross-species extrapolation of chemical effects: Challenges and new insights
One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...
Strong, James Asa; Elliott, Michael
2017-03-15
The reporting of ecological phenomena and environmental status routinely required point observations, collected with traditional sampling approaches to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, is it hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Singh, Jagmahender; Pathak, R K; Chavali, Krishnadutt H
2011-03-20
Skeletal height estimation from regression analysis of eight sternal lengths in the subjects of Chandigarh zone of Northwest India is the topic of discussion in this study. Analysis of eight sternal lengths (length of manubrium, length of mesosternum, combined length of manubrium and mesosternum, total sternal length and first four intercostals lengths of mesosternum) measured from 252 male and 91 female sternums obtained at postmortems revealed that mean cadaver stature and sternal lengths were more in North Indians and males than the South Indians and females. Except intercostal lengths, all the sternal lengths were positively correlated with stature of the deceased in both sexes (P < 0.001). The multiple regression analysis of sternal lengths was found more useful than the linear regression for stature estimation. Using multivariate regression analysis, the combined length of manubrium and mesosternum in both sexes and the length of manubrium along with 2nd and 3rd intercostal lengths of mesosternum in males were selected as best estimators of stature. Nonetheless, the stature of males can be predicted with SEE of 6.66 (R(2) = 0.16, r = 0.318) from combination of MBL+BL_3+LM+BL_2, and in females from MBL only, it can be estimated with SEE of 6.65 (R(2) = 0.10, r = 0.318), whereas from the multiple regression analysis of pooled data, stature can be known with SEE of 6.97 (R(2) = 0.387, r = 575) from the combination of MBL+LM+BL_2+TSL+BL_3. The R(2) and F-ratio were found to be statistically significant for almost all the variables in both the sexes, except 4th intercostal length in males and 2nd to 4th intercostal lengths in females. The 'major' sternal lengths were more useful than the 'minor' ones for stature estimation The universal regression analysis used by Kanchan et al. [39] when applied to sternal lengths, gave satisfactory estimates of stature for males only but female stature was comparatively better estimated from simple linear regressions. But they are not proposed for the subjects of known sex, as they underestimate the male and overestimate female stature. However, intercostal lengths were found to be the poor estimators of stature (P < 0.05). And also sternal lengths exhibit weaker correlation coefficients and higher standard errors of estimate. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Walker, K. L.; Norcross, B.
2016-02-01
The Arctic ecosystem has moved into the spotlight of scientific research in recent years due to increased climate change and oil and gas exploration. Arctic fishes and Arctic marine mammals represent key parts of this ecosystem, with fish being a common part of ice seal diets in the Arctic. Determining sizes of fish consumed by ice seals is difficult because otoliths are often the only part left of the fish after digestion. Otolith length is known to be positively related to fish length. By developing species-specific otolith-body morphometric relationships for Arctic marine fishes, fish length can be determined for fish prey found in seal stomachs. Fish were collected during ice free months in the Beaufort and Chukchi seas 2009 - 2014, and the most prevalent species captured were chosen for analysis. Otoliths from eleven fish species from seven families were measured. All species had strong linear relationships between otolith length and fish total length. Nine species had coefficient of determination values over 0.75, indicating that most of the variability in the otolith to fish length relationship was explained by the linear regression. These relationships will be applied to otoliths found in stomachs of three species of ice seals (spotted Phoca largha, ringed Pusa hispida, and bearded Erignathus barbatus) and used to estimate fish total length at time of consumption. Fish lengths can in turn be used to calculate fish weight, enabling further investigation into ice seal energetic demands. This application will aid in understanding how ice seals interact with fish communities in the US Arctic and directly contribute to diet comparisons among and within ice seal species. A better understanding of predator-prey interactions in the US Arctic will aid in predicting how ice seal and fish species will adapt to a changing Arctic.
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...
NASA Astrophysics Data System (ADS)
Tahani, Masoud; Askari, Amir R.
2014-09-01
In spite of the fact that pull-in instability of electrically actuated nano/micro-beams has been investigated by many researchers to date, no explicit formula has been presented yet which can predict pull-in voltage based on a geometrically non-linear and distributed parameter model. The objective of present paper is to introduce a simple and accurate formula to predict this value for a fully clamped electrostatically actuated nano/micro-beam. To this end, a non-linear Euler-Bernoulli beam model is employed, which accounts for the axial residual stress, geometric non-linearity of mid-plane stretching, distributed electrostatic force and the van der Waals (vdW) attraction. The non-linear boundary value governing equation of equilibrium is non-dimensionalized and solved iteratively through single-term Galerkin based reduced order model (ROM). The solutions are validated thorough direct comparison with experimental and other existing results reported in previous studies. Pull-in instability under electrical and vdW loads are also investigated using universal graphs. Based on the results of these graphs, non-dimensional pull-in and vdW parameters, which are defined in the text, vary linearly versus the other dimensionless parameters of the problem. Using this fact, some linear equations are presented to predict pull-in voltage, the maximum allowable length, the so-called detachment length, and the minimum allowable gap for a nano/micro-system. These linear equations are also reduced to a couple of universal pull-in formulas for systems with small initial gap. The accuracy of the universal pull-in formulas are also validated by comparing its results with available experimental and some previous geometric linear and closed-form findings published in the literature.
Lutterodt, M C; Rosendahl, M; Yding Andersen, C; Skouby, S O; Byskov, A G
2009-08-01
Reliable age determination of first-trimester human embryos and fetuses is an important parameter for clinical use and basic science. Age determination by ultrasound or morphometric parameters of embryos 4-6 weeks post conception (p.c.) have been questioned, and more accurate methods are required. Data on whether and how maternal smoking and alcohol consumption influence embryonic and fetal foot growth is also lacking. Embryonic tissue from 102 first-trimester legal abortions (aged 35-69 days p.c.) were collected. All women answered a questionnaire concerning smoking and drinking habits, and delivered a urine sample for cotinine analysis. Embryonic age was evaluated by vaginal ultrasound measurements and by post-termination foot length and compared with the Carnegie stages. Foot bud and foot plate were defined and measured as foot length in embryos aged 35-47 days p.c. (range 0.8-2.1 mm). In embryos and fetuses aged 41-69 days p.c., heel-toe length was measured (range 2.5-7.5 mm). We found a significant linear correlation between foot length and age. Morphology of the feet was compared visually with the Carnegie collection, and we found that the mean ages of the two collections correlated well. Foot length was independent of gender, Environmental Tobacco Smoke, maternal smoking and alcohol consumption. Foot length correlated linearly to embryonic and foetal age, and was unaffected by gender, ETS, maternal smoking and alcohol consumption.
On the normalization of the minimum free energy of RNAs by sequence length.
Trotta, Edoardo
2014-01-01
The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
On the Normalization of the Minimum Free Energy of RNAs by Sequence Length
Trotta, Edoardo
2014-01-01
The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size. PMID:25405875
The influence of thresholds on the risk assessment of carcinogens in food.
Pratt, Iona; Barlow, Susan; Kleiner, Juliane; Larsen, John Christian
2009-08-01
The risks from exposure to chemical contaminants in food must be scientifically assessed, in order to safeguard the health of consumers. Risk assessment of chemical contaminants that are both genotoxic and carcinogenic presents particular difficulties, since the effects of such substances are normally regarded as being without a threshold. No safe level can therefore be defined, and this has implications for both risk management and risk communication. Risk management of these substances in food has traditionally involved application of the ALARA (As Low as Reasonably Achievable) principle, however ALARA does not enable risk managers to assess the urgency and extent of the risk reduction measures needed. A more refined approach is needed, and several such approaches have been developed. Low-dose linear extrapolation from animal carcinogenicity studies or epidemiological studies to estimate risks for humans at low exposure levels has been applied by a number of regulatory bodies, while more recently the Margin of Exposure (MOE) approach has been applied by both the European Food Safety Authority and the Joint FAO/WHO Expert Committee on Food Additives. A further approach is the Threshold of Toxicological Concern (TTC), which establishes exposure thresholds for chemicals present in food, dependent on structure. Recent experimental evidence that genotoxic responses may be thresholded has significant implications for the risk assessment of chemicals that are both genotoxic and carcinogenic. In relation to existing approaches such as linear extrapolation, MOE and TTC, the existence of a threshold reduces the uncertainties inherent in such methodology and improves confidence in the risk assessment. However, for the foreseeable future, regulatory decisions based on the concept of thresholds for genotoxic carcinogens are likely to be taken case-by-case, based on convincing data on the Mode of Action indicating that the rate limiting variable for the development of cancer lies on a critical pathway that is thresholded.
Self-Consistent Field Theories for the Role of Large Length-Scale Architecture in Polymers
NASA Astrophysics Data System (ADS)
Wu, David
At large length-scales, the architecture of polymers can be described by a coarse-grained specification of the distribution of branch points and monomer types within a molecule. This includes molecular topology (e.g., cyclic or branched) as well as distances between branch points or chain ends. Design of large length-scale molecular architecture is appealing because it offers a universal strategy, independent of monomer chemistry, to tune properties. Non-linear analogs of linear chains differ in molecular-scale properties, such as mobility, entanglements, and surface segregation in blends that are well-known to impact rheological, dynamical, thermodynamic and surface properties including adhesion and wetting. We have used Self-Consistent Field (SCF) theories to describe a number of phenomena associated with large length-scale polymer architecture. We have predicted the surface composition profiles of non-linear chains in blends with linear chains. These predictions are in good agreement with experimental results, including from neutron scattering, on a range of well-controlled branched (star, pom-pom and end-branched) and cyclic polymer architectures. Moreover, the theory allows explanation of the segregation and conformations of branched polymers in terms of effective surface potentials acting on the end and branch groups. However, for cyclic chains, which have no end or junction points, a qualitatively different topological mechanism based on conformational entropy drives cyclic chains to a surface, consistent with recent neutron reflectivity experiments. We have also used SCF theory to calculate intramolecular and intermolecular correlations for polymer chains in the bulk, dilute solution, and trapped at a liquid-liquid interface. Predictions of chain swelling in dilute star polymer solutions compare favorably with existing PRISM theory and swelling at an interface helps explain recent measurements of chain mobility at an oil-water interface. In collaboration with: Renfeng Hu, Colorado School of Mines, and Mark Foster, University of Akron. This work was supported by NSF Grants No. CBET- 0730692 and No. CBET-0731319.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... External Review Draft of the Guidance for Applying Quantitative Data To Develop Data-Derived Extrapolation... Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies... Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies...
Chiral extrapolation of nucleon axial charge gA in effective field theory
NASA Astrophysics Data System (ADS)
Li, Hong-na; Wang, P.
2016-12-01
The extrapolation of nucleon axial charge gA is investigated within the framework of heavy baryon chiral effective field theory. The intermediate octet and decuplet baryons are included in the one loop calculation. Finite range regularization is applied to improve the convergence in the quark-mass expansion. The lattice data from three different groups are used for the extrapolation. At physical pion mass, the extrapolated gA are all smaller than the experimental value. Supported by National Natural Science Foundation of China (11475186) and Sino-German CRC 110 (NSFC 11621131001)
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifact, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with the local image feature changes. Image degradation during extrapolation is minimized by updating model parameters, where a novel time-varying energy balancer model that uses energy based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by high-order constrained interpolation profile for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
SU-E-T-91: Correction Method to Determine Surface Dose for OSL Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, T; Higgins, P
Purpose: OSL detectors are commonly used in clinic due to their numerous advantages, such as linear response, negligible energy, angle and temperature dependence in clinical range, for verification of the doses beyond the dmax. Although, due to the bulky shielding envelope, this type of detectors fails to measure skin dose, which is an important assessment of patient ability to finish the treatment on time and possibility of acute side effects. This study aims to optimize the methodology of determination of skin dose for conventional accelerators and a flattening filter free Tomotherapy. Methods: Measurements were done for x-ray beams: 6 MVmore » (Varian Clinac 2300, 10×10 cm{sup 2} open field, SSD = 100 cm) and for 5.5 MV (Tomotherapy, 15×40 cm{sup 2} field, SAD = 85 cm). The detectors were placed at the surface of the solid water phantom and at the reference depth (dref=1.7cm (Varian 2300), dref =1.0 cm (Tomotherapy)). The measurements for OSLs were related to the externally exposed OSLs measurements, and further were corrected to surface dose using an extrapolation method indexed to the baseline Attix ion chamber measurements. A consistent use of the extrapolation method involved: 1) irradiation of three OSLs stacked on top of each other on the surface of the phantom; 2) measurement of the relative dose value for each layer; and, 3) extrapolation of these values to zero thickness. Results: OSL measurements showed an overestimation of surface doses by the factor 2.31 for Varian 2300 and 2.65 for Tomotherapy. The relationships: SD{sup 2300} = 0.68 × M{sup 2300}-12.7 and SDτoμo = 0.73 × Mτoμo-13.1 were found to correct the single OSL measurements to surface doses in agreement with Attix measurements to within 0.1% for both machines. Conclusion: This work provides simple empirical relationships for surface dose measurements using single OSL detectors.« less
Self-Centering Reciprocating-Permanent-Magnet Machine
NASA Technical Reports Server (NTRS)
Bhate, Suresh; Vitale, Nick
1988-01-01
New design for monocoil reciprocating-permanent-magnet electric machine provides self-centering force. Linear permanent-magnet electrical motor includes outer stator, inner stator, and permanent-magnet plunger oscillateing axially between extreme left and right positions. Magnets arranged to produce centering force and allows use of only one coil of arbitrary axial length. Axial length of coil chosen to provide required efficiency and power output.
The correlation function for density perturbations in an expanding universe. I - Linear theory
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1977-01-01
The evolution of the two-point correlation function for adiabatic density perturbations in the early universe is studied. Analytical solutions are obtained for the evolution of linearized spherically symmetric adiabatic density perturbations and the two-point correlation function for these perturbations in the radiation-dominated portion of the early universe. The results are then extended to the regime after decoupling. It is found that: (1) adiabatic spherically symmetric perturbations comparable in scale with the maximum Jeans length would survive the radiation-dominated regime; (2) irregular fluctuations are smoothed out up to the scale of the maximum Jeans length in the radiation era, but regular fluctuations might survive on smaller scales; (3) in general, the only surviving structures for irregularly shaped adiabatic density perturbations of arbitrary but finite scale in the radiation regime are the size of or larger than the maximum Jeans length in that regime; (4) infinite plane waves with a wavelength smaller than the maximum Jeans length but larger than the critical dissipative damping scale could survive the radiation regime; and (5) black holes would also survive the radiation regime and might accrete sufficient mass after decoupling to nucleate the formation of galaxies.
NASA Astrophysics Data System (ADS)
Ren, Diandong; Karoly, David J.
2008-03-01
Observations from seven Central Asian glaciers (35-55°N; 70-95°E) are used, together with regional temperature data, to infer uncertain parameters for a simple linear model of the glacier length variations. The glacier model is based on first order glacier dynamics and requires the knowledge of reference states of forcing and glacier perturbation magnitude. An adjoint-based variational method is used to optimally determine the glacier reference states in 1900 and the uncertain glacier model parameters. The simple glacier model is then used to estimate the glacier length variations until 2060 using regional temperature projections from an ensemble of climate model simulations for a future climate change scenario (SRES A2). For the period 2000-2060, all glaciers are projected to experience substantial further shrinkage, especially those with gentle slopes (e.g., Glacier Chogo Lungma retreats ˜4 km). Although nearly one-third of the year 2000 length will be reduced for some small glaciers, the existence of the glaciers studied here is not threatened by year 2060. The differences between the individual glacier responses are large. No straightforward relationship is found between glacier size and the projected fractional change of its length.
CENP-A and topoisomerase-II antagonistically affect chromosome length.
Ladouceur, A-M; Ranjan, Rajesh; Smith, Lydia; Fadero, Tanner; Heppert, Jennifer; Goldstein, Bob; Maddox, Amy Shaub; Maddox, Paul S
2017-09-04
The size of mitotic chromosomes is coordinated with cell size in a manner dependent on nuclear trafficking. In this study, we conducted an RNA interference screen of the Caenorhabditis elegans nucleome in a strain carrying an exceptionally long chromosome and identified the centromere-specific histone H3 variant CENP-A and the DNA decatenizing enzyme topoisomerase-II (topo-II) as candidate modulators of chromosome size. In the holocentric organism C. elegans , CENP-A is positioned periodically along the entire length of chromosomes, and in mitosis, these genomic regions come together linearly to form the base of kinetochores. We show that CENP-A protein levels decreased through development coinciding with chromosome-size scaling. Partial loss of CENP-A protein resulted in shorter mitotic chromosomes, consistent with a role in setting chromosome length. Conversely, topo-II levels were unchanged through early development, and partial topo-II depletion led to longer chromosomes. Topo-II localized to the perimeter of mitotic chromosomes, excluded from the centromere regions, and depletion of topo-II did not change CENP-A levels. We propose that self-assembly of centromeric chromatin into an extended linear array promotes elongation of the chromosome, whereas topo-II promotes chromosome-length shortening. © 2017 Ladouceur et al.
NASA Astrophysics Data System (ADS)
Afshari, Saied; Hejazi, S. Hossein; Kantzas, Apostolos
2018-05-01
Miscible displacement of fluids in porous media is often characterized by the scaling of the mixing zone length with displacement time. Depending on the viscosity contrast of fluids, the scaling law varies between the square root relationship, a sign for dispersive transport regime during stable displacement, and the linear relationship, which represents the viscous fingering regime during an unstable displacement. The presence of heterogeneities in a porous medium significantly affects the scaling behavior of the mixing length as it interacts with the viscosity contrast to control the mixing of fluids in the pore space. In this study, the dynamics of the flow and transport during both unit and adverse viscosity ratio miscible displacements are investigated in heterogeneous packings of circular grains using pore-scale numerical simulations. The pore-scale heterogeneity level is characterized by the variations of the grain diameter and velocity field. The growth of mixing length is employed to identify the nature of the miscible transport regime at different viscosity ratios and heterogeneity levels. It is shown that as the viscosity ratio increases to higher adverse values, the scaling law of mixing length gradually shifts from dispersive to fingering nature up to a certain viscosity ratio and remains almost the same afterwards. In heterogeneous media, the mixing length scaling law is observed to be generally governed by the variations of the velocity field rather than the grain size. Furthermore, the normalization of mixing length temporal plots with respect to the governing parameters of viscosity ratio, heterogeneity, medium length, and medium aspect ratio is performed. The results indicate that mixing length scales exponentially with log-viscosity ratio and grain size standard deviation while the impact of aspect ratio is insignificant. For stable flows, mixing length scales with the square root of medium length, whereas it changes linearly with length during unstable flows. This scaling procedure allows us to describe the temporal variation of mixing length using a generalized curve for various combinations of the flow conditions and porous medium properties.
Interior Temperature Measurement Using Curved Mercury Capillary Sensor Based on X-ray Radiography
NASA Astrophysics Data System (ADS)
Chen, Shuyue; Jiang, Xing; Lu, Guirong
2017-07-01
A method was presented for measuring the interior temperature of objects using a curved mercury capillary sensor based on X-ray radiography. The sensor is composed of a mercury bubble, a capillary and a fixed support. X-ray digital radiography was employed to capture image of the mercury column in the capillary, and a temperature control system was designed for the sensor calibration. We adopted livewire algorithms and mathematical morphology to calculate the mercury length. A measurement model relating mercury length to temperature was established, and the measurement uncertainty associated with the mercury column length and the linear model fitted by least-square method were analyzed. To verify the system, the interior temperature measurement of an autoclave, which is totally closed, was taken from 29.53°C to 67.34°C. The experiment results show that the response of the system is approximately linear with an uncertainty of maximum 0.79°C. This technique provides a new approach to measure interior temperature of objects.