Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
McCready, Robert R.
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
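To make the abstract's acceleration scheme concrete, below is a minimal sketch of a power-style eigenvalue iteration whose estimates are Aitken-extrapolated, with an explicit cap on the extrapolation step. The matrix, tolerances, and the particular bound are illustrative assumptions, not the report's actual algorithm.

```python
import numpy as np

def accelerated_power_iteration(A, tol=1e-10, max_iter=500, max_step=10.0):
    """Dominant-eigenvalue iteration with Aitken delta-squared acceleration.

    `max_step` caps the extrapolation magnitude, in the spirit of the
    report's "theoretically sound limits"; the cap used here is an
    illustrative assumption, not the report's bound.
    """
    x = np.ones(A.shape[0])
    estimates = []
    for _ in range(max_iter):
        y = A @ x
        lam = (y @ x) / (x @ x)            # Rayleigh-quotient estimate
        x = y / np.linalg.norm(y)
        estimates.append(lam)
        if len(estimates) >= 3:
            l0, l1, l2 = estimates[-3:]
            denom = l2 - 2.0 * l1 + l0
            if denom != 0.0:
                step = (l2 - l1) ** 2 / denom
                if abs(step) < max_step:   # tolerate only bounded extrapolation
                    if abs(step) < tol:    # converged
                        return l2 - step
    return estimates[-1]

A = np.array([[4.0, 1.0], [2.0, 3.0]])     # toy system; eigenvalues 5 and 2
print(accelerated_power_iteration(A))      # -> ~5.0
```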
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Fink, Hans-Werner; Chushkin, Yuriy
Coherent diffraction imaging is a high-resolution imaging technique whose potential can be greatly enhanced by applying the extrapolation method presented here. We demonstrate the enhancement in resolution of a non-periodic object reconstructed from an experimental X-ray diffraction record which contains about 10% missing information, including the pixels in the center of the diffraction pattern. The diffraction pattern is extrapolated beyond the detector area; as a result, the object is reconstructed at an enhanced resolution and better agreement with experimental amplitudes is achieved. The optimal parameters for the iterative routine and the limits of the extrapolation procedure are discussed.
Interpolation/extrapolation technique with application to hypervelocity impact of space debris
NASA Technical Reports Server (NTRS)
Rule, William K.
1992-01-01
A new technique for the interpolation/extrapolation of engineering data is described. The technique easily allows for the incorporation of additional independent variables, and the most suitable data in the data base is automatically used for each prediction. The technique provides diagnostics for assessing the reliability of the prediction. Two sets of predictions made for known 5-degree-of-freedom, 15-parameter functions using the new technique produced an average coefficient of determination of 0.949. Here, the technique is applied to the prediction of damage to the Space Station from hypervelocity impact of space debris. A new set of impact data is presented for this purpose. Reasonable predictions for bumper damage were obtained, but predictions of pressure wall and multilayer insulation damage were poor.
A nowcasting technique based on application of the particle filter blending algorithm
NASA Astrophysics Data System (ADS)
Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai
2017-10-01
To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm was used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is shown to be superior to the traditional forecasting methods, and it can be used to enhance the ability of nowcasting in operational weather forecasts.
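As a sketch of the final extrapolation step, the fragment below advects a reflectivity field along a per-pixel motion vector field with a one-step backward-trajectory semi-Lagrangian scheme; the field, motion vectors, and grid are toy stand-ins for the blended radar-mosaic inputs.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def semi_lagrangian_step(field, u, v, dt=1.0):
    """One backward-trajectory (semi-Lagrangian) advection step.

    field : 2-D reflectivity array; u, v : per-pixel motion in pixels/step,
    standing in for the blended motion vectors. Each grid point samples the
    field at its upstream departure point by bilinear interpolation.
    """
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    coords = np.array([yy - v * dt, xx - u * dt])   # departure points
    return map_coordinates(field, coords, order=1, mode="nearest")

# toy echo drifting east at 2 pixels/step
echo = np.tile(np.exp(-(np.arange(64) - 20.0) ** 2 / 50.0), (64, 1))
u, v = np.full((64, 64), 2.0), np.zeros((64, 64))
forecast = semi_lagrangian_step(echo, u, v)
print(np.argmax(forecast[32]))   # peak moved from column 20 to ~22
```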
Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y., E-mail: yu.feng@unsw.edu.au; Lin, S.; Huang, S.
Although Tauc plot extrapolation has been widely adopted for extracting bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals have been formulated based on a purely theoretical approach. This result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate acquisition of the intrinsic bandgap, while the unjustified purpose of extrapolating any elevated bandgap caused by quantum confinement is shown to be incorrect.
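The mechanics of the extrapolation itself are simple to illustrate: plot (alpha*h*nu)^(1/r) against photon energy, fit the linear edge, and take the x-intercept as the gap. The sketch below does this in Python; the default fit window and the synthetic absorption edge are assumptions for demonstration.

```python
import numpy as np

def tauc_bandgap(energy_eV, alpha, r=0.5):
    """Tauc-plot band-gap extraction: fit the linear edge of
    (alpha*h*nu)**(1/r) vs photon energy and return its x-intercept.

    r = 1/2 is the classic 3-D direct-allowed exponent; the abstract's
    result is that r drifts towards 1 as nanocrystals approach 0-D.
    """
    y = (alpha * energy_eV) ** (1.0 / r)
    # default fit window: the mid-rise of the edge, away from tail and saturation
    mask = (y > 0.2 * y.max()) & (y < 0.9 * y.max())
    slope, intercept = np.polyfit(energy_eV[mask], y[mask], 1)
    return -intercept / slope

# synthetic direct-gap edge with Eg = 2.0 eV: alpha ~ sqrt(E - Eg)/E
E = np.linspace(1.5, 3.0, 300)
alpha = np.where(E > 2.0, np.sqrt(np.clip(E - 2.0, 0, None)) / E, 0.0)
print(tauc_bandgap(E, alpha, r=0.5))   # -> ~2.00
```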
NASA Astrophysics Data System (ADS)
Niedzielski, Tomasz; Kosek, Wiesław
2008-02-01
This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
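A compact sketch of the LS + AR variant (the univariate baseline the paper compares against) is shown below: a least-squares fit of a trend plus harmonic terms is extrapolated, and an autoregressive model fitted to the LS residuals supplies the stochastic part. Periods, AR order, and the toy series are illustrative; the MAR extension would add AAM χ3 as a second channel.

```python
import numpy as np

def ls_ar_forecast(t, x, periods=(365.25, 182.625), ar_order=5, horizon=10):
    """LS + AR forecast: least-squares trend + harmonics, extrapolated,
    plus an autoregressive prediction of the LS residuals.

    Periods are annual/semiannual here; the 18.6- and 9.3-yr terms of the
    paper would be added the same way. Assumes uniformly sampled t.
    """
    def design(tt):
        cols = [np.ones_like(tt), tt]
        for P in periods:
            cols += [np.sin(2 * np.pi * tt / P), np.cos(2 * np.pi * tt / P)]
        return np.column_stack(cols)

    beta, *_ = np.linalg.lstsq(design(t), x, rcond=None)
    resid = x - design(t) @ beta

    # fit AR coefficients to the residuals by ordinary least squares
    p = ar_order
    lags = np.column_stack([resid[p - k - 1 : len(resid) - k - 1] for k in range(p)])
    phi, *_ = np.linalg.lstsq(lags, resid[p:], rcond=None)

    r = list(resid)
    for _ in range(horizon):                 # recursive one-step AR prediction
        r.append(np.dot(phi, r[-1 : -p - 1 : -1]))

    t_new = t[-1] + np.arange(1, horizon + 1) * (t[1] - t[0])
    return design(t_new) @ beta + np.array(r[-horizon:])

# toy LOD-like series: trend + annual cycle + noise
t = np.arange(2000.0)
x = 1.0 + 2e-5 * t + 0.3 * np.sin(2 * np.pi * t / 365.25) \
    + 0.01 * np.random.default_rng(0).normal(size=t.size)
print(ls_ar_forecast(t, x)[:3])              # first 3 days of the forecast
```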
NASA Astrophysics Data System (ADS)
Hernández-Pajares, Manuel; Garcia-Fernández, Miquel; Rius, Antonio; Notarpietro, Riccardo; von Engeln, Axel; Olivares-Pulido, Germán.; Aragón-Àngel, Àngela; García-Rigo, Alberto
2017-08-01
The new radio-occultation (RO) instrument on board the future EUMETSAT Polar System-Second Generation (EPS-SG) satellites, flying at a height of 820 km, focuses primarily on neutral atmospheric profiling. It will also provide an opportunity for RO ionospheric sounding, but only below impact heights of 500 km, in order to guarantee full data gathering for the neutral part. This will leave a gap of 320 km, which impedes the application of direct inversion techniques to retrieve the electron density profile. To overcome this challenge, we have looked for new ways (accurate and simple) of extrapolating the electron density, also applicable to other low-Earth-orbiting (LEO) missions like CHAMP: a new Vary-Chap Extrapolation Technique (VCET). VCET is based on the scale height behavior, linearly dependent on the altitude above hmF2. This allows extrapolating the electron density profile for impact heights above its peak height (this is the case for EPS-SG), up to the satellite orbital height. VCET has been assessed with more than 3700 complete electron density profiles obtained in four representative scenarios of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) in the United States and the Formosa Satellite Mission 3 (FORMOSAT-3) in Taiwan, in solar maximum and minimum conditions, and geomagnetically disturbed conditions, by applying an updated Improved Abel Transform Inversion technique to dual-frequency GPS measurements. It is shown that VCET performs much better than classical Chapman models, with 60% of occultations showing relative extrapolation errors below 20%, in contrast with conventional Chapman model extrapolation approaches for which 10% or less of the profiles have relative errors below 20%.
Uncertainties Associated with Flux Measurements Due to Heterogeneous Contaminant Distributions
Mass flux and mass discharge measurements at contaminated sites have been applied to assist with remedial management, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Extrapolation across un-sampled space is necessary when usi...
NASA Astrophysics Data System (ADS)
Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel
2017-11-01
The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ^-2 in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ^-3) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H2, HeH+, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ+ excitation energies in HeH+ and H2, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
Notice of Availability of the External Review Draft of the Guidance for Applying Quantitative Data To Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation.
Strong, James Asa; Elliott, Michael
2017-03-15
The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly introduce significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa
2017-04-01
International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and logistical management of control activities, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to have a starting point for defining operating procedures. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data
NASA Astrophysics Data System (ADS)
Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.
2017-12-01
We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the Signal to Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, which are limited in resolution by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, the EMIs (ElectroMagnetic Interferences) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture (SAR) processing. We apply the proposed algorithm to simulated as well as real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 317, 1715-1718. [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data," Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
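The core of Cuomo-style bandwidth extrapolation is linear prediction on the complex spectral samples. Below is a hedged sketch: an autoregressive predictor is fitted by least squares and run recursively to extend the band on both sides; the model order, extension length, and two-scatterer test signal are illustrative, and the fit here is a plain least-squares estimator rather than the report's exact one.

```python
import numpy as np

def bandwidth_extrapolate(spectrum, order=8, n_extra=64):
    """AR/linear-prediction bandwidth extrapolation of complex spectral data.

    A forward predictor is fitted to the in-band samples by least squares
    and run recursively to extend the band; the backward extension applies
    the same predictor to the conjugated, reversed sequence, which is valid
    for undamped (unit-circle) modes.
    """
    s = np.asarray(spectrum, dtype=complex)
    p = order
    rows = np.column_stack([s[p - k - 1 : len(s) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(rows, s[p:], rcond=None)

    def extend(seq):
        seq = list(seq)
        for _ in range(n_extra):
            seq.append(np.dot(a, seq[-1 : -p - 1 : -1]))
        return np.array(seq[-n_extra:])

    fwd = extend(s)
    bwd = np.conj(extend(np.conj(s[::-1])))[::-1]
    return np.concatenate([bwd, s, fwd])

# two closely spaced scatterers; the widened band sharpens the range response
f = np.arange(128)
spec = np.exp(2j * np.pi * 0.110 * f) + np.exp(2j * np.pi * 0.125 * f)
wide = bandwidth_extrapolate(spec)
print(len(spec), "->", len(wide))            # 128 -> 256
```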
A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques
NASA Astrophysics Data System (ADS)
Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.
Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural networks were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of the least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of the discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of the LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs, and for the combined forecast methods, the mean prediction errors for 1 to about 70 days in the future are of the same order as those of the method used by the IERS RS/PC.
Extrapolating bound state data of anions into the metastable domain
NASA Astrophysics Data System (ADS)
Feuerbacher, Sven; Sommerfeld, Thomas; Cederbaum, Lorenz S.
2004-10-01
Computing energies of electronically metastable resonance states is still a great challenge. Both scattering techniques and quantum chemistry based L2 methods are very time consuming. Here we investigate two more economical extrapolation methods. Extrapolating bound-state energies into the metastable region using increased nuclear charges was suggested almost 20 years ago. We critically evaluate this attractive technique employing our complex absorbing potential/Green's function method, which allows us to follow a bound state into the continuum. Using the 2Πg resonance of N2- and the 2Πu resonance of CO2- as examples, we found that the extrapolation works surprisingly well. The second extrapolation method involves increasing bond lengths until the sought resonance becomes stable. The keystone is to extrapolate the attachment energy and not the total energy of the system. This method has the great advantage that the whole potential energy curve is obtained with quite good accuracy by the extrapolation. Limitations of the two techniques are discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-22
ENVIRONMENTAL PROTECTION AGENCY [EPA-HQ-ORD-2009-0694; FRL-9442-8] Notice of Availability of the External Review Draft of the Guidance for Applying Quantitative Data to Develop Data-Derived Extrapolation Factors for Interspecies and Intraspecies Extrapolation.
Yamamoto, Tetsuya
2007-06-01
A novel test fixture operating at a millimeter-wave band using an extrapolation range measurement technique was developed at the National Metrology Institute of Japan (NMIJ). Here I describe the measurement system using a Q-band test fixture. I measured the relative insertion loss as a function of antenna separation distance and observed the effects of multiple reflections between the antennas. I also evaluated the antenna gain at 33 GHz using the extrapolation technique.
Heat flux measurements on ceramics with thin film thermocouples
NASA Technical Reports Server (NTRS)
Holanda, Raymond; Anderson, Robert C.; Liebert, Curt H.
1993-01-01
Two methods were devised to measure heat flux through a thick ceramic using thin film thermocouples. The thermocouples were deposited on the front and back faces of a flat ceramic substrate. The heat flux was applied to the front surface of the ceramic using an arc lamp Heat Flux Calibration Facility. Silicon nitride and mullite ceramics were used; two thicknesses of each material were tested, with ceramic temperatures to 1500 C. Heat flux ranged from 0.05 to 2.5 MW/m^2. One method for heat flux determination used an approximation technique to calculate instantaneous values of heat flux vs time; the other method used an extrapolation technique to determine the steady state heat flux from a record of transient data. Neither method measures heat flux in real time, but the techniques may easily be adapted for quasi-real-time measurement. In cases where a significant portion of the transient heat flux data is available, the calculated transient heat flux is seen to approach the extrapolated steady state heat flux value as expected.
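One way to realize the steady-state extrapolation is to fit the truncated transient record with a first-order rise and read off its asymptote. The sketch below assumes that functional form (the paper's actual extrapolation model is not specified here); units and the synthetic record are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def steady_state_flux(t, q):
    """Extrapolate a truncated transient heat-flux record to steady state
    by fitting a first-order rise and returning its asymptote (the rise
    model is an assumed form, not necessarily the paper's)."""
    rise = lambda tt, q_ss, tau: q_ss * (1.0 - np.exp(-tt / tau))
    (q_ss, tau), _ = curve_fit(rise, t, q, p0=(q[-1], t[-1] / 3.0))
    return q_ss

t = np.linspace(0.0, 2.0, 50)            # only the early transient is recorded
rng = np.random.default_rng(0)
q = 1.8 * (1 - np.exp(-t / 1.5)) + 0.01 * rng.normal(size=t.size)   # MW/m^2
print(steady_state_flux(t, q))           # -> ~1.8 MW/m^2 despite truncation
```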
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Y.; Maier, A.; Berger, M.
2015-04-15
Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad-hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in the position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on nontruncated data, even in the presence of severe truncation, compared to a rRMSE of 8.0% when applying a state-of-the-art heuristic extrapolation technique. Conclusions: The method we proposed in this paper leads to a major improvement in image quality for 3D C-arm based VOI imaging. It involves no additional radiation when using fluoroscopic images that are acquired during the patient isocentering process. The model estimation can be readily integrated into the existing interventional workflow without additional hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sparks, R.B.; Aydogan, B.
In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, and (4) using a combination of the mass and time methods. The residence time ratios were found to be log normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiologic time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone. 63 refs., 4 figs., 3 tabs.
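The two scalings compared in the abstract are easy to state in code. The sketch below uses the textbook forms (quarter-power physiological-time scaling and organ-mass-fraction scaling); the exponent and the example numbers are conventional assumptions, not values from the study.

```python
def human_residence_time(tau_animal_h, m_animal_kg, m_human_kg=70.0,
                         organ_frac_animal=None, organ_frac_human=None):
    """Animal-to-human residence-time extrapolation (illustrative forms).

    Physiological-time scaling stretches the animal kinetics by the usual
    quarter-power body-mass ratio; relative-organ-mass scaling rescales by
    the organ's fraction of body mass. Exponent and forms are textbook
    conventions, not necessarily those used in the cited study.
    """
    tau = tau_animal_h * (m_human_kg / m_animal_kg) ** 0.25  # time scaling
    if organ_frac_animal and organ_frac_human:               # mass scaling
        tau *= organ_frac_human / organ_frac_animal
    return tau

# a 2 h residence time measured in a 0.25 kg rat
print(human_residence_time(2.0, 0.25))   # -> ~8.2 h after time scaling
```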
Microscale and nanoscale strain mapping techniques applied to creep of rocks
NASA Astrophysics Data System (ADS)
Quintanilla-Terminel, Alejandra; Zimmerman, Mark E.; Evans, Brian; Kohlstedt, David L.
2017-07-01
Usually several deformation mechanisms interact to accommodate plastic deformation. Quantifying the contribution of each to the total strain is necessary to bridge the gaps from observations of microstructures, to geomechanical descriptions, to extrapolating from laboratory data to field observations. Here, we describe the experimental and computational techniques involved in microscale strain mapping (MSSM), which allows strain produced during high-pressure, high-temperature deformation experiments to be tracked with high resolution. MSSM relies on the analysis of the relative displacement of initially regularly spaced markers after deformation. We present two lithography techniques used to pattern rock substrates at different scales: photolithography and electron-beam lithography. Further, we discuss the challenges of applying the MSSM technique to samples used in high-temperature and high-pressure experiments. We applied the MSSM technique to a study of strain partitioning during creep of Carrara marble and grain boundary sliding in San Carlos olivine, synthetic forsterite, and Solnhofen limestone at a confining pressure, Pc, of 300 MPa and homologous temperatures, T/Tm, of 0.3 to 0.6. The MSSM technique works very well up to temperatures of 700 °C. The experimental developments described here show promising results for higher-temperature applications.
Percolation analysis of nonlinear structures in scale-free two-dimensional simulations
NASA Technical Reports Server (NTRS)
Dominik, Kurt G.; Shandarin, Sergei F.
1992-01-01
Results are presented of applying percolation analysis to several two-dimensional N-body models which simulate the formation of large-scale structure. Three parameters are estimated: total area (a(c)), total mass (M(c)), and percolation density (rho(c)) of the percolating structure at the percolation threshold, for both unsmoothed and smoothed (with different scales L(s)) nonlinear density fields. The results confirm early speculations that this type of model has several features of filamentary-type distributions. Also, it is shown that, by properly applying smoothing techniques, many problems previously considered detrimental can be dealt with and overcome. Possible difficulties and prospects with the use of this method are discussed, specifically relating to techniques and methods already applied to CfA deep sky surveys. The success of this test in two dimensions and the potential for extrapolation to three dimensions is also discussed.
NASA Technical Reports Server (NTRS)
Armstrong, Richard; Hardman, Molly
1991-01-01
A snow model that supports the daily, operational analysis of global snow depth and age has been developed. It provides improved spatial interpolation of surface reports by incorporating digital elevation data, and by the application of regionalized variables (kriging) through the use of a global snow depth climatology. Where surface observations are inadequate, the model applies satellite remote sensing. Techniques for extrapolation into data-void mountain areas and a procedure to compute snow melt are also contained in the model.
NASA Astrophysics Data System (ADS)
Coopersmith, Evan J.; Cosh, Michael H.; Bell, Jesse E.; Boyles, Ryan
2016-12-01
Surface soil moisture is a critical parameter for understanding the energy flux at the land-atmosphere boundary. Weather modeling, climate prediction, and remote sensing validation are some of the applications for surface soil moisture information. The most common in situ measurements for these purposes come from sensors installed at depths of approximately 5 cm. There are, however, sensor technologies and network designs that do not provide an estimate at this depth. If soil moisture estimates at deeper depths could be extrapolated to the near surface, in situ networks providing estimates at other depths would see their value enhanced. Soil moisture sensors from the U.S. Climate Reference Network (USCRN) were used to generate models of 5 cm soil moisture, with 10 cm soil moisture measurements and antecedent precipitation as inputs, via machine learning techniques. Validation was conducted with the available, in situ, 5 cm resources. It was shown that a 5 cm estimate extrapolated from a 10 cm sensor and antecedent local precipitation produced a root-mean-squared error (RMSE) of 0.0215 m3/m3. Next, these machine-learning-generated 5 cm estimates were also compared to AMSR-E estimates at these locations. These results were then compared with the performance of the actual in situ readings against the AMSR-E data. The machine learning estimates at 5 cm produced an RMSE of approximately 0.03 m3/m3 when an optimized gain and offset were applied. This is necessary considering the performance of AMSR-E in locations characterized by high vegetation water content, which is present across North Carolina. Lastly, this extrapolation technique is applied to the ECONet in North Carolina, which provides a 10 cm depth measurement as its shallowest soil moisture estimate. A raw RMSE of 0.028 m3/m3 was achieved, and with a linear gain and offset applied at each ECONet site, an RMSE of 0.013 m3/m3 was possible.
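A minimal stand-in for the paper's approach: train a regression model mapping the 10 cm reading plus antecedent-precipitation features to the 5 cm value. The sketch below uses a random forest on synthetic data; the feature choices, model type, and numbers are assumptions, as the paper's exact learner is not specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins for USCRN-style inputs: the 10 cm reading plus
# 24 h / 72 h antecedent precipitation totals predicting the 5 cm value.
rng = np.random.default_rng(0)
n = 2000
sm10 = np.clip(rng.normal(0.25, 0.05, n), 0.05, 0.45)        # m3/m3 at 10 cm
p24, p72 = rng.gamma(1.0, 2.0, n), rng.gamma(1.5, 3.0, n)    # mm of rain
sm5 = np.clip(0.9 * sm10 + 0.004 * p24 + 0.001 * p72
              + rng.normal(0.0, 0.01, n), 0.02, 0.5)         # synthetic "truth"

X = np.column_stack([sm10, p24, p72])
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:1500], sm5[:1500])
pred = model.predict(X[1500:])
print(f"hold-out RMSE: {np.sqrt(np.mean((pred - sm5[1500:]) ** 2)):.4f} m3/m3")
```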
NASA Astrophysics Data System (ADS)
Mecklenburg, S.; Joss, J.; Schmid, W.
2000-12-01
Nowcasting for hydrological applications is discussed. The tracking algorithm extrapolates radar images in space and time. It originates from the pattern recognition techniques TREC (Tracking Radar Echoes by Correlation; Rinehart and Garvey, Nature, 273 (1978) 287) and COTREC (Continuity of TREC vectors; Li et al., J. Appl. Meteor., 34 (1995) 1286). To evaluate the quality of the extrapolation, a parameter scheme is introduced, able to distinguish between errors in the position and the intensity of the predicted precipitation. The parameters for the position are the absolute error, the relative error and the error of the forecasted direction. The parameters for the intensity are the ratio of the medians and the variation of the rain rate (ratio of two quantiles) between the actual and the forecasted image. To judge the overall quality of the forecast, the correlation coefficient between the forecasted and the actual radar image has been used. To improve the forecast, three aspects have been investigated: (a) Common meteorological attributes of convective cells, derived from hail statistics, have been determined to optimize the parameters of the tracking algorithm. Using (a), the forecast procedure modifications (b) and (c) have been applied. (b) Small-scale features have been removed by using larger tracking areas and by applying spatial and temporal smoothing, since problems with the tracking algorithm are mainly caused by small-scale/short-term variations of the echo pattern or because of limitations of the radar technique itself (erroneous vectors caused by clutter or shielding). (c) The searching area and the number of searched boxes have been restricted. This limits false detections, which is especially useful in stratiform precipitation and for stationary echoes. Whereas a larger scale and the removal of small-scale features improve the forecasted position for convective precipitation, the forecast of the stratiform event is not influenced, but limiting the search area leads to a slightly better forecast. The forecast of the intensity is successful for both precipitation events. Forecasting the variation of the rain rate calls for further investigation. Applying COTREC improves the forecast of the convective precipitation, especially for extrapolation times exceeding 30 min.
NASA Technical Reports Server (NTRS)
1971-01-01
A study of techniques for the prediction of crime in the City of Los Angeles was conducted. Alternative approaches to crime prediction (causal, quasi-causal, associative, extrapolative, and pattern-recognition models) are discussed, as is the environment within which predictions were desired for the immediate application. The decision was made to use time series (extrapolative) models to produce the desired predictions. The characteristics of the data and the procedure used to choose equations for the extrapolations are discussed. The usefulness of different functional forms (constant, quadratic, and exponential forms) and of different parameter estimation techniques (multiple regression and multiple exponential smoothing) are compared, and the quality of the resultant predictions is assessed.
USDA-ARS?s Scientific Manuscript database
Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DW...
Lightning induced currents in aircraft wiring using low level injection techniques
NASA Technical Reports Server (NTRS)
Stevens, E. G.; Jordan, D. T.
1991-01-01
Various techniques were studied to predict the transient current induced in aircraft wiring bundles as a result of an aircraft lightning strike. A series of aircraft measurements were carried out together with a theoretical analysis using computer modeling. These tests were applied to various aircraft and also to specially constructed cylinders installed within coaxial return conductor systems. Low level swept-frequency CW (continuous wave), low level transient, and high level transient injection tests were applied to the aircraft and cylinders. Measurements were made to determine the transfer function between the aircraft drive current and the resulting skin currents and currents induced on the internal wiring. The full threat lightning induced transient currents were extrapolated from the low level data using Fourier transform techniques. The aircraft and cylinders used were constructed from both metallic and CFC (carbon fiber composite) materials. The results show the pulse stretching phenomenon which occurs for CFC materials due to the diffusion of the lightning current through carbon fiber materials. Transmission Line Matrix modeling techniques were used to compare theoretical and measured currents.
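The Fourier-transform extrapolation step admits a compact sketch: form the transfer function from the low-level drive and response, then apply it to a full-threat waveform in the frequency domain. The waveforms, sampling, and the division floor below are all illustrative assumptions.

```python
import numpy as np

def full_threat_current(drive_low, response_low, drive_full):
    """Scale a low-level injection measurement to the full lightning threat.

    H(f) = FFT(wire response) / FFT(low-level drive); the full-threat wire
    current is IFFT(H * FFT(threat waveform)). The small-divisor floor is
    an ad hoc guard, and all waveforms below are illustrative.
    """
    D = np.fft.rfft(drive_low)
    H = np.fft.rfft(response_low) / np.where(np.abs(D) > 1e-12, D, 1e-12)
    return np.fft.irfft(H * np.fft.rfft(drive_full), n=len(drive_full))

t = np.arange(4096) * 1e-8                                   # 10 ns sampling
drive_low = np.exp(-t / 5e-6) - np.exp(-t / 5e-7)            # ~1 A class pulse
resp_low = 1e-3 * np.convolve(drive_low, np.exp(-t / 1e-6), "full")[: t.size]
threat = 2e5 * (np.exp(-t / 69e-6) - np.exp(-t / 0.4e-6))    # ~200 kA stroke
print(full_threat_current(drive_low, resp_low, threat).max())
```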
Chiral extrapolation of nucleon axial charge gA in effective field theory
NASA Astrophysics Data System (ADS)
Li, Hong-na; Wang, P.
2016-12-01
The extrapolation of nucleon axial charge gA is investigated within the framework of heavy baryon chiral effective field theory. The intermediate octet and decuplet baryons are included in the one loop calculation. Finite range regularization is applied to improve the convergence in the quark-mass expansion. The lattice data from three different groups are used for the extrapolation. At physical pion mass, the extrapolated gA are all smaller than the experimental value. Supported by National Natural Science Foundation of China (11475186) and Sino-German CRC 110 (NSFC 11621131001)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morales, Johnny E., E-mail: johnny.morales@lh.org.
Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements with a high scanning resolution of 1200 dpi. From the high resolution scans, the size of the circular regions of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero to determine the relative output factor corresponding to zero volume. Results: Results have shown that for a 4 mm field size, the extrapolated relative output factor was measured as a value of 0.651 ± 0.018, as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameter of analysis values, respectively. This showed a change in the relative output factors of 1.8% and 2.8% at these comparative regions of interest sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between zero extrapolation, 0.5 and 1.0 mm diameter ROIs, respectively. Conclusions: This work shows that for very small fields such as 4.0 mm cone sizes, a measurable difference can be seen in the relative output factor based on the circular ROI and the size of the area of analysis using radiochromic film dosimetry. The authors recommend scanning the Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and utilizing an extrapolation technique for the output factor measurements of very small field dosimetry.
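The zero-area extrapolation itself reduces to a few lines: average the scanned dose in shrinking circular ROIs, fit reading versus ROI area, and take the intercept. In the sketch below the linear fit form, the ROI radii, and the synthetic 4 mm profile are assumptions for illustration.

```python
import numpy as np

def extrapolated_output_factor(dose_map, ref_reading, pixel_mm=25.4 / 1200.0,
                               radii_mm=(0.25, 0.5, 0.75, 1.0, 1.25)):
    """Zero-area extrapolation of a small-field relative output factor.

    Averages the film dose inside shrinking circular ROIs centred on the
    dose maximum, fits reading versus ROI area, and returns the zero-area
    intercept over the reference reading. The linear fit form and the ROI
    radii are assumptions for illustration.
    """
    cy, cx = np.unravel_index(np.argmax(dose_map), dose_map.shape)
    yy, xx = np.indices(dose_map.shape)
    r_pix = np.hypot(yy - cy, xx - cx) * pixel_mm
    areas = [np.pi * r * r for r in radii_mm]
    readings = [dose_map[r_pix <= r].mean() for r in radii_mm]
    slope, intercept = np.polyfit(areas, readings, 1)
    return intercept / ref_reading

# synthetic 4 mm cone profile scanned at 1200 dpi
yy, xx = np.indices((400, 400))
r_mm = np.hypot(yy - 200, xx - 200) * (25.4 / 1200.0)
profile = np.exp(-r_mm ** 2 / (2 * 1.7 ** 2))
print(extrapolated_output_factor(profile, ref_reading=1.0))
```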
Modeling the hyperpolarizability dispersion with the Thomas-Kuhn sum rules
NASA Astrophysics Data System (ADS)
De Mey, Kurt; Perez-Moreno, Javier; Clays, Koen
2011-10-01
The continued interest in molecules that possess large quadratic nonlinear optical (NLO) properties has motivated considerable interplay between molecular synthesis and theory. The screening of viable candidates for NLO applications has been a tedious task, much helped by the advent of the hyper-Rayleigh scattering (HRS) technique. The downside of this technique is its low efficiency, which usually means that measurements have to be performed at wavelengths that are close to the molecular resonances, in the visible region. This generally means that one has to extrapolate the results from HRS characterization to the longer wavelengths that are useful for applications. Such extrapolation is far from trivial, and the classic two-level model can only be used for the most straightforward single charge-transfer chromophores. An alternative is the TKS-SOS technique, which uses a few input hyperpolarizabilities and UV-Vis absorption data to calculate the entire hyperpolarizability spectrum. We have applied this TKS-SOS technique to a set of porphyrins to calculate the hyperpolarizability dispersion. We have also built a tunable HRS setup, capable of determining hyperpolarizabilities in the near infrared (up to 1600 nm). This has allowed us to directly confirm the results predicted in the application region. Due to the very sharp transitions in the hyperpolarizability dispersion, the calculation is subject to a very precise calibration with respect to the input hyperpolarizabilities, resulting in very accurate predictions for long-wavelength hyperpolarizabilities. Our results not only support the aforementioned technique, but also confirm the use of porphyrins as powerful moieties in NLO applications.
NASA Technical Reports Server (NTRS)
Reddy, C.J.; Deshpande, M.D.
1997-01-01
A hybrid Finite Element Method (FEM)/Method of Moments (MoM) technique, in conjunction with the Asymptotic Waveform Evaluation (AWE) technique, is applied to obtain the radar cross section (RCS) of a cavity-backed aperture in an infinite ground plane over a frequency range. The hybrid FEM/MoM technique, when applied to the cavity-backed aperture, results in an integro-differential equation with the electric field as the unknown variable. The electric field obtained from the solution of the integro-differential equation is expanded in a Taylor series. The coefficients of the Taylor series are obtained using the frequency derivatives of the integro-differential equation formed by the hybrid FEM/MoM technique. The series is then matched via the Padé approximation to a rational polynomial, which can be used to extrapolate the electric field over a frequency range. The RCS of the cavity-backed aperture is calculated using the electric field at different frequencies. Numerical results for a rectangular cavity, a circular cavity, and a material-filled cavity are presented over a frequency range. Good agreement between AWE and the exact solution over the frequency range is obtained.
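The Taylor-to-Padé step at the heart of AWE is easy to demonstrate in isolation. In the sketch below, the field response is replaced by a toy function whose Taylor coefficients are known; in the actual method, the FEM/MoM frequency derivatives would supply these coefficients. The use of scipy.interpolate.pade and the [2/2] order are implementation choices, not the paper's.

```python
import numpy as np
from scipy.interpolate import pade

# Toy AWE step: match Taylor coefficients of a "response" to a Pade model.
# Here the response is 1/(1 + x^2), whose series about x = 0 is known; in
# the actual method, FEM/MoM frequency derivatives supply the coefficients.
taylor = np.array([1.0, 0.0, -1.0, 0.0, 1.0])   # 1 - x^2 + x^4 - ...
p, q = pade(taylor, 2)                          # [2/2] Pade approximant

for xv in (0.5, 2.0, 3.0):
    exact = 1.0 / (1.0 + xv ** 2)
    series = np.polyval(taylor[::-1], xv)       # truncated Taylor sum
    print(xv, exact, p(xv) / q(xv), series)     # Pade tracks; Taylor diverges
```

The point of the rational form is exactly what the abstract exploits: the truncated series is only valid near the expansion frequency, while the Padé model remains accurate well beyond it.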
Techniques for Accelerating Iterative Methods for the Solution of Mathematical Problems
1989-07-01
[On studying the social economic aftermath of neurotrauma].
Potapov, A A; Potapov, N A; Likhterman, L B
2011-01-01
To carry out pilot medical statistical studies on neurotrauma, the cluster analysis technique was applied to classify the regions of the Russian Federation. The characteristics of social climate, demographic and economic indicators, and level of medical service are considered. Eleven clusters are selected and combined into four groups. Thereby, owing to appropriate extrapolation, epidemiologic studies concerning the prevalence of craniocerebral and spine and spinal cord injuries and their aftermath can be simplified and made cheaper, facilitating the assessment of their impact on the economy, demography and social climate of the country.
Applying effective teaching and learning techniques to nephrology education.
Rondon-Berrios, Helbert; Johnston, James R
2016-10-01
The interest in nephrology as a career has declined over the last several years. Some of the reasons cited for this decline include the complexity of the specialty, poor mentoring and inadequate teaching of nephrology from medical school through residency. The purpose of this article is to introduce the reader to advances in the science of adult learning, illustrate best teaching practices in medical education that can be extrapolated to nephrology and introduce the basic teaching methods that can be used on the wards, in clinics and in the classroom.
Estimating the size of an open population using sparse capture-recapture data.
Huggins, Richard; Stoklosa, Jakub; Roach, Cameron; Yip, Paul
2018-03-01
Sparse capture-recapture data from open populations are difficult to analyze using currently available frequentist statistical methods. However, in closed capture-recapture experiments, the Chao sparse estimator (Chao, 1989, Biometrics 45, 427-438) may be used to estimate population sizes when there are few recaptures. Here, we extend the Chao (1989) closed population size estimator to the open population setting by using linear regression and extrapolation techniques. We conduct a small simulation study and apply the models to several sparse capture-recapture data sets. © 2017, The International Biometric Society.
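For reference, the closed-population Chao (1989) estimator that the paper extends needs only the counts of animals caught exactly once (f1) and exactly twice (f2); the paper's open-population extension then regresses and extrapolates such per-period quantities. A sketch of the base estimator in its standard form, including the usual f2 = 0 correction:

```python
import numpy as np

def chao_estimate(capture_counts):
    """Chao (1989) lower-bound estimator for sparse capture-recapture data.

    capture_counts[i] = number of times animal i was caught; only the
    singleton (f1) and doubleton (f2) frequencies are needed:
        N_hat = D + f1**2 / (2 * f2),
    with the usual f1*(f1 - 1)/2 replacement when f2 = 0.
    """
    counts = np.asarray(capture_counts)
    D = int(np.sum(counts > 0))
    f1 = int(np.sum(counts == 1))
    f2 = int(np.sum(counts == 2))
    return D + (f1 * f1 / (2.0 * f2) if f2 > 0 else f1 * (f1 - 1) / 2.0)

# sparse experiment: 40 singletons, 10 doubletons, 3 triples
counts = np.array([1] * 40 + [2] * 10 + [3] * 3)
print(chao_estimate(counts))   # 53 animals observed -> ~133 estimated
```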
A regularization method for extrapolation of solar potential magnetic fields
NASA Technical Reports Server (NTRS)
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
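In the potential-field setting the extrapolation has a closed Fourier form: each transverse mode of the photospheric Bz decays as exp(-|k|z) with height, and the regularization amounts to damping the high-wavenumber content of the initial data. The sketch below assumes a Gaussian-in-k filter as the smoothing; the paper instead ties the filter to magnetograph measurement sensitivities.

```python
import numpy as np

def potential_field(bz0, z, dx=1.0, smoothing=0.0):
    """Upward continuation of a photospheric Bz map as a potential field.

    Each Fourier mode decays as exp(-|k| z) with height; `smoothing` applies
    a Gaussian-in-k damping of the initial data as a Tikhonov-style filter
    (the Gaussian form is illustrative; the paper ties the filter to
    magnetograph measurement sensitivities).
    """
    ny, nx = bz0.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.hypot(*np.meshgrid(kx, ky))
    filt = np.exp(-k * z) * np.exp(-smoothing * k ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(bz0) * filt))

# bipolar toy magnetogram continued upward
yy, xx = np.indices((128, 128))
bz = np.exp(-((xx - 54) ** 2 + (yy - 64) ** 2) / 50.0) \
   - np.exp(-((xx - 74) ** 2 + (yy - 64) ** 2) / 50.0)
print(np.abs(potential_field(bz, z=10.0, smoothing=0.5)).max())
```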
NASA Astrophysics Data System (ADS)
Baek, Sang-In; Kim, Sung-Jo; Kim, Jong-Hyun
2015-09-01
Although the homeotropic alignment of liquid crystals is widely used in LCD TVs, no easy method exists to measure its anchoring coefficient. In this study, we propose an easy and convenient measurement technique in which a polarizing optical microscope is used in the reflective mode with an objective lens having a low depth of focus. All measurements focus on the reflection of light near the interface between the liquid crystal and alignment layer. The change in the reflected light is measured by applying an electric field. We model the response of the director of the liquid crystal to the electric field and, thus, the change in reflectance. By adjusting the extrapolation length in the calculation, we match the experimental and calculated results and obtain the anchoring coefficient. In our experiment, the extrapolation lengths were 0.31 ± 0.04 μm, 0.32 ± 0.08 μm, and 0.23 ± 0.05 μm for lecithin, AL-64168, and SE-5662, respectively.
Molecular Sieve Bench Testing and Computer Modeling
NASA Technical Reports Server (NTRS)
Mohamadinejad, Habib; DaLee, Robert C.; Blackmon, James B.
1995-01-01
The design of an efficient four-bed molecular sieve (4BMS) CO2 removal system for the International Space Station depends on many mission parameters, such as duration, crew size, cost of power, volume, fluid interface properties, etc. A need for space vehicle CO2 removal system models capable of accurately performing extrapolated hardware predictions is inevitable due to the change of the parameters which influences the CO2 removal system capacity. The purpose is to investigate the mathematical techniques required for a model capable of accurate extrapolated performance predictions and to obtain test data required to estimate mass transfer coefficients and verify the computer model. Models have been developed to demonstrate that the finite difference technique can be successfully applied to sorbents and conditions used in spacecraft CO2 removal systems. The nonisothermal, axially dispersed, plug flow model with linear driving force for 5X sorbent and pore diffusion for silica gel are then applied to test data. A more complex model, a non-darcian model (two dimensional), has also been developed for simulation of the test data. This model takes into account the channeling effect on column breakthrough. Four FORTRAN computer programs are presented: a two-dimensional model of flow adsorption/desorption in a packed bed; a one-dimensional model of flow adsorption/desorption in a packed bed; a model of thermal vacuum desorption; and a model of a tri-sectional packed bed with two different sorbent materials. The programs are capable of simulating up to four gas constituents for each process, which can be increased with a few minor changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
La Cognata, M.; Spitaleri, C.; Guardo, G. L.
2014-05-02
The {sup 13}C(α,n){sup 16}O reaction is the neutron source of the main component of the s-process. The astrophysical S(E)-factor is dominated by the −3 keV sub-threshold resonance due to the 6.356 MeV level in {sup 17}O. Its contribution is still controversial, as extrapolations, e.g., through R-matrix calculations, and indirect techniques, such as the asymptotic normalization coefficient (ANC), yield inconsistent results. Therefore, we have applied the Trojan Horse Method (THM) to the {sup 13}C({sup 6}Li,n{sup 16}O)d reaction to measure its contribution. For the first time, the ANC for the 6.356 MeV level has been deduced through the THM, allowing an unprecedented accuracy to be attained. Though a larger ANC for the 6.356 MeV level is measured, our experimental S(E) factor agrees with the most recent extrapolation in the literature in the 140-230 keV energy interval, the accuracy being greatly enhanced thanks to this innovative approach, merging together two well established indirect techniques, namely, the THM and the ANC.
Kwasniok, Frank
2013-11-01
A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
Proton radius from electron scattering data
NASA Astrophysics Data System (ADS)
Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; Meekins, David; Norum, Blaine; Sawatzky, Brad
2016-05-01
Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon, and Stanford. Methods: We make use of stepwise regression techniques using the F test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum-transfer (Q2) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F test as well as the Akaike information criterion justifies using a linear extrapolation, which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q2 data on GE to select functions which extrapolate to high Q2, we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q2) = (1 + Q2/0.66 GeV2)^(-2). Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm, either from linear extrapolation of the extremely-low-Q2 data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering results and the muonic hydrogen results are consistent. It is the atomic hydrogen results that are the outliers.
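A stripped-down version of the order-selection-plus-extrapolation logic: fit polynomials in Q2 of increasing order, score them with AIC, and convert the selected linear coefficient into a radius via r = sqrt(-6*c1), with hbar*c = 0.19733 GeV fm to convert to femtometers. The plain chi-squared AIC comparison below is a simplified stand-in for the paper's stepwise F-test machinery, and the data are synthetic.

```python
import numpy as np

HBARC = 0.19733  # GeV * fm

def radius_by_aic(Q2, GE, sigma, max_order=5):
    """Select the Maclaurin truncation of G_E(Q^2) by AIC and return the
    radius implied by the linear coefficient, r = sqrt(-6 * c1), in fm.

    A plain chi^2 + 2k comparison of polynomial orders; a simplified
    stand-in for the paper's stepwise-regression/F-test machinery.
    """
    best = None
    for order in range(1, max_order + 1):
        coef = np.polyfit(Q2, GE, order, w=1.0 / sigma)   # weighted fit
        chi2 = np.sum(((np.polyval(coef, Q2) - GE) / sigma) ** 2)
        aic = chi2 + 2.0 * (order + 1)
        if best is None or aic < best[0]:
            r_fm = np.sqrt(max(-6.0 * coef[-2], 0.0)) * HBARC
            best = (aic, order, r_fm)
    return best[1], best[2]

# synthetic low-Q^2 data drawn from a 0.84 fm dipole
Q2 = np.linspace(0.004, 0.02, 20)                # GeV^2
GE = (1.0 + Q2 / 0.66) ** -2
rng = np.random.default_rng(1)
sigma = np.full_like(Q2, 2e-4)
order, r = radius_by_aic(Q2, GE + rng.normal(0.0, 2e-4, Q2.size), sigma)
print(order, r)    # a low order is selected; r comes out near 0.84 fm
```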
Gaia DR1 completeness within 250 pc & star formation history of the Solar neighbourhood
NASA Astrophysics Data System (ADS)
Bernard, Edouard J.
2018-04-01
We took advantage of the Gaia DR1 to combine TGAS parallaxes with Tycho-2 and APASS photometry to calculate the star formation history (SFH) of the solar neighbourhood within 250 pc using the colour-magnitude diagram fitting technique. We present the determination of the completeness within this volume, and compare the resulting SFH with that calculated from the Hipparcos catalogue within 80 pc of the Sun. We also show how this technique will be applied out to ~5 kpc thanks to the next Gaia data releases, which will allow us to quantify the SFH of the thin disc, thick disc and halo in situ, rather than extrapolating based on the stars from these components that are today in the solar neighbourhood.
Müller, Eike H.; Scheichl, Rob; Shardlow, Tony
2015-01-01
This paper applies several well-known tricks from the numerical treatment of deterministic differential equations to improve the efficiency of the multilevel Monte Carlo (MLMC) method for stochastic differential equations (SDEs) and especially the Langevin equation. We use modified equations analysis as an alternative to strong-approximation theory for the integrator, and we apply this to introduce MLMC for Langevin-type equations with integrators based on operator splitting. We combine this with extrapolation and investigate the use of discrete random variables in place of the Gaussian increments, which is a well-known technique for the weak approximation of SDEs. We show that, for small-noise problems, discrete random variables can lead to an increase in efficiency of almost two orders of magnitude for practical levels of accuracy. PMID:27547075
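One of those tricks, replacing Gaussian increments by discrete two-point variables, preserves weak first-order accuracy while being cheaper to sample. A minimal sketch on an Ornstein-Uhlenbeck test problem (step counts and path counts are arbitrary choices, not the paper's):

```python
import numpy as np

def euler_weak(x0, drift, sigma, T, n_steps, n_paths, discrete=True, seed=0):
    """Weak Euler-Maruyama for dX = drift(X) dt + sigma dW.

    With discrete=True the Gaussian increments are replaced by two-point
    +/- sqrt(h) variables, preserving first-order weak accuracy; this is
    the kind of cheap increment the paper exploits inside MLMC.
    """
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        if discrete:
            dW = np.sqrt(h) * rng.choice([-1.0, 1.0], size=n_paths)
        else:
            dW = rng.normal(0.0, np.sqrt(h), size=n_paths)
        x += drift(x) * h + sigma * dW
    return x

# Ornstein-Uhlenbeck check: E[X_T] = x0 * exp(-T) for either increment type
for disc in (True, False):
    xT = euler_weak(1.0, lambda x: -x, 0.2, T=1.0, n_steps=64,
                    n_paths=200_000, discrete=disc)
    print(disc, xT.mean())   # both ~ exp(-1) = 0.368, up to weak error
```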
Mixed-venous oxygen tension by nitrogen rebreathing - A critical, theoretical analysis.
NASA Technical Reports Server (NTRS)
Kelman, G. R.
1972-01-01
There is dispute about the validity of the nitrogen rebreathing technique for the determination of mixed-venous oxygen tension. This theoretical analysis examines the circumstances under which the technique is likely to be applicable. When the plateau method is used, the probable error in mixed-venous oxygen tension is plus or minus 2.5 mm Hg at rest, and of the order of plus or minus 1 mm Hg during exercise. Provided that the rebreathing bag size is reasonably chosen, Denison's (1967) extrapolation technique gives results at least as accurate as those obtained by the plateau method. At rest, however, extrapolation should be to 30 rather than to 20 sec.
NASA Astrophysics Data System (ADS)
Bose, A.; Betti, R.; Mangino, D.; Woo, K. M.; Patel, D.; Christopherson, A. R.; Gopalaswamy, V.; Mannion, O. M.; Regan, S. P.; Goncharov, V. N.; Edgell, D. H.; Forrest, C. J.; Frenje, J. A.; Gatu Johnson, M.; Yu Glebov, V.; Igumenshchev, I. V.; Knauer, J. P.; Marshall, F. J.; Radha, P. B.; Shah, R.; Stoeckl, C.; Theobald, W.; Sangster, T. C.; Shvarts, D.; Campbell, E. M.
2018-06-01
This paper describes a technique for identifying trends in performance degradation for inertial confinement fusion implosion experiments. It is based on reconstruction of the implosion core with a combination of low- and mid-mode asymmetries. This technique was applied to an ensemble of hydro-equivalent deuterium-tritium implosions on OMEGA which achieved inferred hot-spot pressures ≈56 ± 7 Gbar [Regan et al., Phys. Rev. Lett. 117, 025001 (2016)]. All the experimental observables pertaining to the core could be reconstructed simultaneously with the same combination of low and mid-modes. This suggests that in addition to low modes, which can cause a degradation of the stagnation pressure, mid-modes are present which reduce the size of the neutron and x-ray producing volume. The systematic analysis shows that asymmetries can cause an overestimation of the total areal density in these implosions. It is also found that an improvement in implosion symmetry resulting from correction of either the systematic mid or low modes would result in an increase in the hot-spot pressure from 56 Gbar to ≈ 80 Gbar and could produce a burning plasma when the implosion core is extrapolated to an equivalent 1.9 MJ symmetric direct illumination [Bose et al., Phys. Rev. E 94, 011201(R) (2016)].
Uncertainty factors in screening ecological risk assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duke, L.D.; Taggart, M.
2000-06-01
The hazard quotient (HQ) method is commonly used in screening ecological risk assessments (ERAs) to estimate risk to wildlife at contaminated sites. Many ERAs use uncertainty factors (UFs) in the HQ calculation to incorporate uncertainty associated with predicting wildlife responses to contaminant exposure using laboratory toxicity data. The overall objective was to evaluate the current UF methodology as applied to screening ERAs in California, USA. Specific objectives included characterizing current UF methodology, evaluating the degree of conservatism in UFs as applied, and identifying limitations to the current approach. Twenty-four of 29 evaluated ERAs used the HQ approach: 23 of these used UFs in the HQ calculation. All 24 made interspecies extrapolations, and 21 compensated for its uncertainty, most using allometric adjustments and some using RFs. Most also incorporated uncertainty for same-species extrapolations. Twenty-one ERAs used UFs extrapolating from lowest observed adverse effect level (LOAEL) to no observed adverse effect level (NOAEL), and 18 used UFs extrapolating from subchronic to chronic exposure. Values and application of all UF types were inconsistent. Maximum cumulative UFs ranged from 10 to 3,000. Results suggest UF methodology is widely used but inconsistently applied and is not uniformly conservative relative to UFs recommended in regulatory guidelines and academic literature. The method is limited by lack of consensus among scientists, regulators, and practitioners about magnitudes, types, and conceptual underpinnings of the UF methodology.
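The HQ arithmetic itself is simple; a minimal sketch with hypothetical values (not drawn from the reviewed ERAs) is:

```python
# Hazard quotient with uncertainty factors: the toxicity reference value
# is divided by the product of UFs before forming the quotient.
def hazard_quotient(exposure_dose, toxicity_reference_value, ufs):
    """HQ = exposure / (TRV / product of UFs); HQ >= 1 flags potential risk."""
    uf_total = 1.0
    for uf in ufs:
        uf_total *= uf
    return exposure_dose / (toxicity_reference_value / uf_total)

# Example: LOAEL-to-NOAEL (10x) and subchronic-to-chronic (10x) UFs
hq = hazard_quotient(exposure_dose=0.5,            # mg/kg-day, hypothetical
                     toxicity_reference_value=50,  # mg/kg-day LOAEL, hypothetical
                     ufs=[10, 10])
print(f"HQ = {hq:.2f}")  # 1.0 here: exposure equals the adjusted benchmark
```

The paper's point is visible in the example: with cumulative UFs spanning 10 to 3,000 across assessments, the same exposure can land on either side of HQ = 1 depending on the chosen factors.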
Jaffrin, M Y; Maasrani, M; Le Gourrier, A; Boudailliez, B
1997-05-01
A method is presented for monitoring the relative variation of extracellular and intracellular fluid volumes using a multifrequency impedance meter and the Cole-Cole extrapolation technique. It is found that this extrapolation is necessary to obtain reliable data for the resistance of the intracellular fluid. The extracellular and intracellular resistances can be approached using frequencies of, respectively, 5 kHz and 1000 kHz, but the use of 100 kHz leads to unacceptable errors. In the conventional treatment the overall relative variation of intracellular resistance is found to be relatively small.
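As a sketch of the Cole-Cole step, the code below fits the Cole impedance model Z(f) = Rinf + (R0 - Rinf)/(1 + (jf/fc)^alpha) to synthetic multifrequency data and derives the intracellular resistance from the fitted R0 (extracellular) and Rinf; all parameter values are hypothetical, not patient data.

```python
# Cole-Cole fit: extrapolate impedance to f -> 0 (R0) and f -> inf (Rinf),
# then recover the intracellular resistance from the parallel combination.
import numpy as np
from scipy.optimize import least_squares

def cole(params, f):
    r0, rinf, fc, alpha = params
    return rinf + (r0 - rinf) / (1 + (1j * f / fc) ** alpha)

f = np.array([5, 10, 20, 50, 100, 200, 500, 1000.0])  # kHz, typical device range
true = (700.0, 250.0, 60.0, 0.85)                     # R0, Rinf (ohm), fc (kHz), alpha
z = cole(true, f)                                     # synthetic "measurements"

def resid(p):
    d = cole(p, f) - z
    return np.concatenate([d.real, d.imag])           # stack real/imag parts

fit = least_squares(resid, x0=[600, 300, 40, 0.9])
r0, rinf = fit.x[0], fit.x[1]
ri = r0 * rinf / (r0 - rinf)   # 1/Rinf = 1/Re + 1/Ri with Re = R0
print(f"Re (extracellular) = {r0:.0f} ohm, Ri (intracellular) = {ri:.0f} ohm")
```

The abstract's warning maps directly onto this picture: reading Rinf off a single 100 kHz or even 1000 kHz measurement, instead of extrapolating the fitted model, biases Ri.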
Probabilistic rainfall warning system with an interactive user interface
NASA Astrophysics Data System (ADS)
Koistinen, Jarmo; Hohti, Harri; Kauhanen, Janne; Kilpinen, Juha; Kurki, Vesa; Lauri, Tuomo; Nurmi, Pertti; Rossi, Pekka; Jokelainen, Miikka; Heinonen, Mari; Fred, Tommi; Moisseev, Dmitri; Mäkelä, Antti
2013-04-01
A real time 24/7 automatic alert system is in operational use at the Finnish Meteorological Institute (FMI). It consists of gridded forecasts of the exceedance probabilities of rainfall class thresholds in the continuous lead time range of 1 hour to 5 days. Nowcasting up to six hours applies ensemble member extrapolations of weather radar measurements. With 2.8 GHz processors using 8 threads it takes about 20 seconds to generate 51 radar based ensemble members in a grid of 760 x 1226 points. Nowcasting exploits also lightning density and satellite based pseudo rainfall estimates. The latter ones utilize convective rain rate (CRR) estimate from Meteosat Second Generation. The extrapolation technique applies atmospheric motion vectors (AMV) originally developed for upper wind estimation with satellite images. Exceedance probabilities of four rainfall accumulation categories are computed for the future 1 h and 6 h periods and they are updated every 15 minutes. For longer forecasts exceedance probabilities are calculated for future 6 and 24 h periods during the next 4 days. From approximately 1 hour to 2 days Poor man's Ensemble Prediction System (PEPS) is used applying e.g. the high resolution short range Numerical Weather Prediction models HIRLAM and AROME. The longest forecasts apply EPS data from the European Centre for Medium Range Weather Forecasts (ECMWF). The blending of the ensemble sets from the various forecast sources is performed applying mixing of accumulations with equal exceedance probabilities. The blending system contains a real time adaptive estimator of the predictability of radar based extrapolations. The uncompressed output data are written to file for each member, having total size of 10 GB. Ensemble data from other sources (satellite, lightning, NWP) are converted to the same geometry as the radar data and blended as was explained above. A verification system utilizing telemetering rain gauges has been established. Alert dissemination e.g. for citizens and professional end users applies SMS messages and, in near future, smartphone maps. The present interactive user interface facilitates free selection of alert sites and two warning thresholds (any rain, heavy rain) at any location in Finland. The pilot service was tested by 1000-3000 users during summers 2010 and 2012. As an example of dedicated end-user services gridded exceedance scenarios (of probabilities 5 %, 50 % and 90 %) of hourly rainfall accumulations for the next 3 hours have been utilized as an online input data for the influent model at the Greater Helsinki Wastewater Treatment Plant.
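The core probabilistic product is easy to state in code: per grid cell, the exceedance probability is the fraction of ensemble members above a threshold. The sketch below uses a reduced grid and hypothetical thresholds standing in for the four accumulation categories.

```python
# Exceedance probabilities from an ensemble of extrapolation members.
import numpy as np

rng = np.random.default_rng(2)
# 51 members on a reduced grid (the operational grid is 760 x 1226)
members = rng.gamma(shape=0.6, scale=2.0, size=(51, 76, 123))

for threshold in (0.1, 1.0, 5.0, 10.0):                 # mm/h, illustrative
    p_exceed = (members >= threshold).mean(axis=0)      # per-cell member fraction
    frac = (p_exceed > 0.5).mean()
    print(f">= {threshold:4.1f} mm/h: p > 0.5 in {frac:.1%} of grid cells")
```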
From intuition to statistics in building subsurface structural models
Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.
2011-01-01
Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs and results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt in addition to extrapolating existing data into seismic data gaps. This technique can be used for rapid reservoir appraisal and potentially have other applications for seismic processing, well planning, and borehole stability analysis.
Dose measurement in heterogeneous phantoms with an extrapolation chamber
NASA Astrophysics Data System (ADS)
Deblois, Francois
A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water(TM) and bone-equivalent material was used for determining absolute dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x-rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The air gaps used were between 2 and 3 mm and the sensitive air volume of the extrapolation chamber was remotely controlled through the motion of the motorized piston with a precision of +/-0.0025 mm. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain dose data for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC from 0.7 to ~2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water(TM) PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). The collecting electrode material in comparison with the polarizing electrode material has a larger effect on the electrode correction factor; the thickness of thin electrodes, on the other hand, has a negligible effect on dose determination. The uncalibrated hybrid PEEC is an accurate and absolute device for measuring the dose directly in bone material in conjunction with appropriate correction factors determined with Monte Carlo techniques.
Image-based optimization of coronal magnetic field models for improved space weather forecasting
NASA Astrophysics Data System (ADS)
Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.
2017-12-01
The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used for improving coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to the previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ 2016, 2017). Our tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona, and lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry, and potentially more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.
A high precision extrapolation method in multiphase-field model for simulating dendrite growth
NASA Astrophysics Data System (ADS)
Yang, Cong; Xu, Qingyan; Liu, Baicheng
2018-05-01
The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for the multiphase-field model.
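Schematically, the first- and second-order extrapolations are plain Taylor steps about a reference composition. The sketch below uses an analytic stand-in for the free-energy-like function, not a CALPHAD driving force, to show how the extra second-order term pays off.

```python
# First- vs second-order Taylor extrapolation about a reference point x0.
import numpy as np

def g(x):          # stand-in free-energy-like function (regular solution)
    return x * np.log(x) + (1 - x) * np.log(1 - x) + 2.5 * x * (1 - x)

def dg(x):         # analytic first derivative
    return np.log(x / (1 - x)) + 2.5 * (1 - 2 * x)

def d2g(x):        # analytic second derivative
    return 1 / x + 1 / (1 - x) - 5.0

x0, dx = 0.30, 0.05
first  = g(x0) + dg(x0) * dx
second = first + 0.5 * d2g(x0) * dx**2
exact  = g(x0 + dx)
print(f"first-order error  {abs(first - exact):.2e}")
print(f"second-order error {abs(second - exact):.2e}")
```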
Technical Progress: Three Ways to Keep Up.
ERIC Educational Resources Information Center
Patterson, J. Wayne; And Others
1988-01-01
The authors analyzed three techniques employed in technological forecasting: (1) brainstorming, (2) extrapolation, and (3) scenario writing. They argue that these techniques have value to practitioners, particularly managers, who are often affected by technological change. (CH)
High-order Newton-penalty algorithms
NASA Astrophysics Data System (ADS)
Dussault, Jean-Pierre
2005-10-01
Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.
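The trajectory-tracking idea can be illustrated on a toy equality-constrained problem whose penalty trajectory x(mu) is known in closed form; higher-order polynomial extrapolation to mu = 0 then mimics the higher-order prediction steps (this is a sketch of the idea, not the paper's algorithm).

```python
# Extrapolating the quadratic-penalty trajectory x(mu) to mu -> 0 for
# the toy problem: minimize x^2 subject to x = 1.
import numpy as np

def x_of_mu(mu):
    # minimizer of x^2 + (x - 1)^2 / (2*mu); solves 2x + (x - 1)/mu = 0
    return 1.0 / (2.0 * mu + 1.0)

mus = np.array([0.4, 0.2, 0.1, 0.05])      # decreasing penalty parameters
xs = x_of_mu(mus)                          # tracked subproblem solutions

for order in (1, 2, 3):
    coef = np.polyfit(mus, xs, order)
    x0 = np.polyval(coef, 0.0)             # extrapolated solution at mu = 0
    print(f"order {order}: x(0) ~ {x0:.6f} (exact 1.0)")
```

Since x(mu) = 1 - 2mu + 4mu^2 - ... here, each added extrapolation order removes another power of mu from the error, which is the mechanism behind the improved asymptotic convergence the abstract reports.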
A Comparison of Methods for Computing the Residual Resistivity Ratio of High-Purity Niobium
Splett, J. D.; Vecchia, D. F.; Goodrich, L. F.
2011-01-01
We compare methods for estimating the residual resistivity ratio (RRR) of high-purity niobium and investigate the effects of using different functional models. RRR is typically defined as the ratio of the electrical resistances measured at 273 K (the ice point) and 4.2 K (the boiling point of helium at standard atmospheric pressure). However, pure niobium is superconducting below about 9.3 K, so the low-temperature resistance is defined as the normal-state (i.e., non-superconducting state) resistance extrapolated to 4.2 K and zero magnetic field. Thus, the estimated value of RRR depends significantly on the model used for extrapolation. We examine three models for extrapolation based on temperature versus resistance, two models for extrapolation based on magnetic field versus resistance, and a new model based on the Kohler relationship that can be applied to combined temperature and field data. We also investigate the possibility of re-defining RRR so that the quantity is not dependent on extrapolation. PMID:26989580
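One plausible temperature-based model of the kind compared in the paper is sketched below: fit the normal-state resistance above Tc with R(T) = R0 + a*T^n and evaluate the fit at 4.2 K. The resistances (in nano-ohm) and the exponent are synthetic assumptions, not the paper's data.

```python
# Extrapolate normal-state resistance to 4.2 K, then form RRR.
import numpy as np
from scipy.optimize import curve_fit

def model(T, R0, a, n):
    return R0 + a * T**n

T = np.linspace(10, 20, 12)                 # K, above Nb's Tc of ~9.3 K
R = model(T, 2.0, 1e-3, 3.0)                # synthetic normal-state data (nOhm)
R += np.random.default_rng(3).normal(0, 0.02, T.size)

popt, _ = curve_fit(model, T, R, p0=[1.0, 1e-3, 3.0])
R_42 = model(4.2, *popt)                    # extrapolated to 4.2 K, zero field
R_273 = 400.0                               # hypothetical ice-point value (nOhm)
print(f"RRR = {R_273 / R_42:.0f}")
```

The paper's observation follows directly: swapping this functional form for a different temperature or field model changes R_42, and hence the quoted RRR, which motivates its proposal to re-define RRR without extrapolation.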
Challenges of accelerated aging techniques for elastomer lifetime predictions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, Kenneth T.; Bernstein, R.; Celina, M.
2015-03-01
Elastomers are often degraded when exposed to air or high humidity for extended times (years to decades). Lifetime estimates normally involve extrapolating accelerated aging results made at higher than ambient environments. Several potential problems associated with such studies are reviewed, and experimental and theoretical methods to address them are provided. The importance of verifying time–temperature superposition of degradation data is emphasized as evidence that the overall nature of the degradation process remains unchanged versus acceleration temperature. The confounding effects that occur when diffusion-limited oxidation (DLO) contributes under accelerated conditions are described, and it is shown that the DLO magnitude can be modeled by measurements or estimates of the oxygen permeability coefficient (POx) and oxygen consumption rate (Φ). POx and Φ measurements can be influenced by DLO, and it is demonstrated how confident values can be derived. In addition, several experimental profiling techniques that screen for DLO effects are discussed. Values of Φ taken from high temperature to temperatures approaching ambient can be used to more confidently extrapolate accelerated aging results for air-aged materials, and many studies now show that Arrhenius extrapolations bend to lower activation energies as aging temperatures are lowered. Furthermore, best approaches for accelerated aging extrapolations of humidity-exposed materials are also offered.
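A rough illustration of the Arrhenius shift underlying such extrapolations follows; the activation energy and failure times are hypothetical, not the paper's measurements.

```python
# Scale accelerated-aging failure times to a service temperature with
# the Arrhenius shift factor a_T = exp[(Ea/R)(1/T_ref - 1/T)].
import numpy as np

R = 8.314           # gas constant, J/(mol K)
Ea = 90e3           # assumed activation energy, J/mol
T_ref = 298.15      # service temperature, K

aging = {398.15: 30.0, 373.15: 170.0, 348.15: 1100.0}   # T (K) -> days to failure
for T, t_fail in aging.items():
    a_T = np.exp((Ea / R) * (1.0 / T_ref - 1.0 / T))    # time shift factor
    print(f"{T - 273.15:5.1f} C: {t_fail:7.1f} d aged -> "
          f"{t_fail * a_T / 365.25:6.0f} yr predicted at 25 C")

# If the three predictions drift systematically, the constant-Ea
# assumption is breaking down, echoing the "bending" of Arrhenius
# extrapolations toward lower Ea that the paper reports.
```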
Toward a Quantitative Comparison of Magnetic Field Extrapolations and Observed Coronal Loops
NASA Astrophysics Data System (ADS)
Warren, Harry P.; Crump, Nicholas A.; Ugarte-Urra, Ignacio; Sun, Xudong; Aschwanden, Markus J.; Wiegelmann, Thomas
2018-06-01
It is widely believed that loops observed in the solar atmosphere trace out magnetic field lines. However, the degree to which magnetic field extrapolations yield field lines that actually do follow loops has yet to be studied systematically. In this paper, we apply three different extrapolation techniques—a simple potential model, a nonlinear force-free (NLFF) model based on photospheric vector data, and an NLFF model based on forward fitting magnetic sources with vertical currents—to 15 active regions that span a wide range of magnetic conditions. We use a distance metric to assess how well each of these models is able to match field lines to the 12202 loops traced in coronal images. These distances are typically 1″–2″. We also compute the misalignment angle between each traced loop and the local magnetic field vector, and find values of 5°–12°. We find that the NLFF models generally outperform the potential extrapolation on these metrics, although the differences between the different extrapolations are relatively small. The methodology that we employ for this study suggests a number of ways that both the extrapolations and loop identification can be improved.
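The two metrics reduce to compact computations: a mean point-to-curve distance between a traced loop and a model field line, and the angle between the loop tangent and the local field vector. The sketch below uses synthetic coordinates, not the paper's 12202 traced loops.

```python
# Loop-to-field-line distance metric and misalignment angles.
import numpy as np

def mean_min_distance(loop, line):
    """Average over loop points of the distance to the nearest line point."""
    d = np.linalg.norm(loop[:, None, :] - line[None, :, :], axis=2)
    return d.min(axis=1).mean()

def misalignment_angles(loop, bvec):
    """Angle between each loop tangent and the local field vector (deg)."""
    tang = np.gradient(loop, axis=0)
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    b = bvec / np.linalg.norm(bvec, axis=1, keepdims=True)
    cosang = np.clip(np.abs((tang * b).sum(axis=1)), 0, 1)  # sign-agnostic
    return np.degrees(np.arccos(cosang))

t = np.linspace(0, np.pi, 50)
loop = np.c_[np.cos(t), np.sin(t), 0.02 * t]          # traced loop (arb. units)
line = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]  # extrapolated field line
bvec = np.gradient(line, axis=0)                      # field along the line

print(f"mean distance  : {mean_min_distance(loop, line):.3f}")
print(f"median misalign: {np.median(misalignment_angles(loop, bvec)):.2f} deg")
```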
Earth resources mission performance studies. Volume 2: Simulation results
NASA Technical Reports Server (NTRS)
1974-01-01
Simulations were made at three month intervals to investigate the EOS mission performance over the four seasons of the year. The basic objectives of the study were: (1) to evaluate the ability of an EOS type system to meet a representative set of specific collection requirements, and (2) to understand the capabilities and limitations of the EOS that influence the system's ability to satisfy certain collection objectives. Although the results were obtained from a consideration of a two sensor EOS system, the analysis can be applied to any remote sensing system having similar optical and operational characteristics. While the category related results are applicable only to the specified requirement configuration, the results relating to general capability and limitations of the sensors can be applied in extrapolating to other U.S. based EOS collection requirements. The TRW general purpose mission simulator and analytic techniques discussed in this report can be applied to a wide range of collection and planning problems of earth orbiting imaging systems.
Bounding species distribution models
Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.
2011-01-01
Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
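The "clamping" alteration amounts to bounding each predictor of the projection grid to the range seen in training before the model predicts. A minimal sketch, with illustrative variable names:

```python
# Clamp projection-grid predictors to the training-data envelope so the
# SDM never extrapolates beyond observed environmental conditions.
import numpy as np

def clamp_predictors(grid, train):
    """grid, train: arrays of shape (n_cells, n_predictors)."""
    lo = train.min(axis=0)
    hi = train.max(axis=0)
    return np.clip(grid, lo, hi)

train = np.array([[12.0, 200.0], [25.0, 800.0], [18.0, 450.0]])  # temp, precip
grid = np.array([[30.0, 150.0], [10.0, 900.0], [20.0, 500.0]])
print(clamp_predictors(grid, train))
# cells outside the training bounds are pulled back to the envelope
```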
Robust scaling laws for energy confinement time, including radiated fraction, in Tokamaks
NASA Astrophysics Data System (ADS)
Murari, A.; Peluso, E.; Gaudio, P.; Gelfusa, M.
2017-12-01
In recent years, the limitations of scalings in power-law form that are obtained from traditional log regression have become increasingly evident in many fields of research. Given the wide gap in operational space between present-day and next-generation devices, robustness of the obtained models in guaranteeing reasonable extrapolability is a major issue. In this paper, a new technique, called symbolic regression, is reviewed, refined, and applied to the ITPA database for extracting scaling laws of the energy-confinement time at different radiated fraction levels. The main advantage of this new methodology is its ability to determine the most appropriate mathematical form of the scaling laws to model the available databases without the restriction of their having to be power laws. In a completely new development, this technique is combined with the concept of geodesic distance on Gaussian manifolds so as to take into account the error bars in the measurements and provide more reliable models. Robust scaling laws, including radiated fractions as regressor, have been found; they are not in power-law form, and are significantly better than the traditional scalings. These scaling laws, including radiated fractions, extrapolate quite differently to ITER, and therefore they require serious consideration. On the other hand, given the limitations of the existing databases, dedicated experimental investigations will have to be carried out to fully understand the impact of radiated fractions on the confinement in metallic machines and in the next generation of devices.
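For context, the traditional baseline the paper moves beyond is ordinary least squares in log space, which forces a power-law form. The sketch below fits such a scaling on synthetic engineering variables; it is the method being criticized, not the paper's symbolic regression.

```python
# Traditional log-regression scaling: tau_E = C * Ip^a1 * ne^a2 * P^a3,
# fit by OLS on the logarithms of synthetic data.
import numpy as np

rng = np.random.default_rng(4)
n = 200
Ip = rng.uniform(0.5, 15, n)      # plasma current (MA)
ne = rng.uniform(1, 10, n)        # density (10^19 m^-3)
P  = rng.uniform(1, 100, n)       # heating power (MW)
tau = 0.05 * Ip**0.9 * ne**0.4 * P**-0.6 * rng.lognormal(0, 0.1, n)

X = np.column_stack([np.ones(n), np.log(Ip), np.log(ne), np.log(P)])
coef, *_ = np.linalg.lstsq(X, np.log(tau), rcond=None)
print("ln C, a_Ip, a_ne, a_P =", np.round(coef, 3))
```

Symbolic regression drops the fixed monomial structure above and searches over mathematical forms, which is why the resulting scalings need not be power laws and can extrapolate differently to ITER.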
Pedotransfer Functions in Earth System Science: Challenges and Perspectives
NASA Astrophysics Data System (ADS)
Van Looy, Kris; Bouma, Johan; Herbst, Michael; Koestel, John; Minasny, Budiman; Mishra, Umakant; Montzka, Carsten; Nemes, Attila; Pachepsky, Yakov A.; Padarian, José; Schaap, Marcel G.; Tóth, Brigitta; Verhoef, Anne; Vanderborght, Jan; van der Ploeg, Martine J.; Weihermüller, Lutz; Zacharias, Steffen; Zhang, Yonggen; Vereecken, Harry
2017-12-01
Soil, through its various functions, plays a vital role in the Earth's ecosystems and provides multiple ecosystem services to humanity. Pedotransfer functions (PTFs) are simple to complex knowledge rules that relate available soil information to soil properties and variables that are needed to parameterize soil processes. In this paper, we review the existing PTFs and document the new generation of PTFs developed in the different disciplines of Earth system science. To meet the methodological challenges for a successful application in Earth system modeling, we emphasize that PTF development has to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly represent the spatial heterogeneity of soils. PTFs should encompass the variability of the estimated soil property or process in such a way that parameter estimates allow for validation and can confidently support extrapolation and upscaling that capture the spatial variation in soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration, organic carbon content, root density, and vegetation water uptake. Further challenges are to be addressed in the parameterization of soil erosivity and land use change impacts at multiple scales. We argue that a comprehensive set of PTFs can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques provide a true breakthrough for this, yet further improvements are necessary for methods to deal with uncertainty and to validate applications at global scale.
NASA Astrophysics Data System (ADS)
Farooq, Umar; Myler, Peter
This work is concerned with physical testing of carbon fibre laminated composite panels under low-velocity drop-weight impacts from flat- and round-nosed impactors. Eight-, sixteen-, and twenty-four-ply panels were considered. Non-destructive damage inspections of the tested specimens were conducted to approximate the impact-induced damage. Recorded data were correlated to load-time, load-deflection, and energy-time history plots to interpret the impact-induced damage. Data filtering techniques were also applied to the noisy data that unavoidably arise from limitations of the testing and logging systems. Built-in, statistical, and numerical filters effectively predicted load thresholds for the eight- and sixteen-ply laminates. However, flat-nose impacts on twenty-four-ply laminates produced clipped data that can only be de-noised using oscillatory algorithms. Filtering and extrapolation of such data have received little attention in the literature and need investigation. The present work demonstrates filtering and extrapolation of the clipped data using a Fast Fourier Convolution algorithm to predict load thresholds. Selected results were compared to the damage zones identified with C-scan, and acceptable agreement was observed. Based on the results, it is proposed that applying advanced data filtering and analysis methods to data collected with the available resources effectively enhances data interpretation without resorting to additional resources. The methodology could be useful for efficient and reliable data analysis and impact-induced damage prediction for similar test data.
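As a flavor of frequency-domain filtering of impact data, the sketch below applies a simple FFT low-pass to a noisy load-time trace; it is a generic illustration with a hypothetical cutoff, not the paper's Fast Fourier Convolution reconstruction of clipped channels.

```python
# FFT low-pass filtering of a synthetic noisy load-time trace.
import numpy as np

fs = 10_000.0                       # sampling rate (Hz), hypothetical
t = np.arange(0, 0.02, 1 / fs)      # 20 ms impact event
load = 5e3 * np.sin(np.pi * t / 0.02) ** 2            # smooth load pulse (N)
noisy = load + np.random.default_rng(5).normal(0, 300, t.size)

spec = np.fft.rfft(noisy)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
spec[freqs > 1_000] = 0             # keep content below 1 kHz (assumed band)
filtered = np.fft.irfft(spec, n=t.size)

print(f"rms error noisy    : {np.std(noisy - load):7.1f} N")
print(f"rms error filtered : {np.std(filtered - load):7.1f} N")
```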
High speed civil transport: Sonic boom softening and aerodynamic optimization
NASA Technical Reports Server (NTRS)
Cheung, Samson
1994-01-01
An improvement in sonic boom extrapolation techniques has been the desire of aerospace designers for years. This is because the linear acoustic theory developed in the 60's is incapable of predicting the nonlinear phenomenon of shock wave propagation. On the other hand, CFD techniques are too computationally expensive to employ on sonic boom problems. Therefore, this research focused on the development of a fast and accurate sonic boom extrapolation method that solves the Euler equations for axisymmetric flow. This new technique has brought the sonic boom extrapolation techniques up to the standards of the 90's. Parallel computing is a fast growing subject in the field of computer science because of its promising speed. A new optimizer (IIOWA) for the parallel computing environment has been developed and tested for aerodynamic drag minimization. This is a promising method for CFD optimization making use of the computational resources of workstations, which unlike supercomputers can spend most of their time idle. Finally, the OAW concept is attractive because of its overall theoretical performance. In order to fully understand the concept, a wind-tunnel model was built and is currently being tested at NASA Ames Research Center. The CFD calculations performed under this cooperative agreement helped to identify the problem of the flow separation, and also aided the design by optimizing the wing deflection for roll trim.
Collision frequency of artificial satellites - The creation of a debris belt
NASA Technical Reports Server (NTRS)
Kessler, D. J.; Cour-Palais, B. G.
1978-01-01
The probability of satellite collisions increases with the number of satellites. In the present paper, possible time scales for the growth of a debris belt from collision fragments are determined, and possible consequences of continued unrestrained launch activities are examined. Use is made of techniques formerly developed for studying the evolution (growth) of the asteroid belt. A model describing the flux from the known earth-orbiting satellites is developed, and the results from this model are extrapolated in time to predict the collision frequency between satellites. Hypervelocity impact phenomena are then examined to predict the debris flux resulting from collisions. The results are applied to design requirements for three types of future space missions.
Buckling characteristics of hypersonic aircraft wing tubular panels
NASA Technical Reports Server (NTRS)
Ko, William L.; Shideler, John L.; Fields, Roger A.
1986-01-01
The buckling characteristics of Rene 41 tubular panels installed as wing panels on a hypersonic wing test structure (HWTS) were determined nondestructively through use of a force/stiffness technique. The nondestructive buckling tests were carried out under different combined load conditions and different temperature environments. Two panels were subsequently tested to buckling failure in a universal tension compression testing machine. In spite of some scatter in the data, caused by large extrapolations of data points resulting from termination of the tests at somewhat low applied loads, the overall test data correlated fairly well with theoretically predicted buckling interaction curves. The structural efficiency of the tubular panels was slightly higher than that of the beaded panels which they replaced.
Measurement of absorbed dose with a bone-equivalent extrapolation chamber.
DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B
2002-03-01
A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.
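The standard two-voltage technique mentioned here has a closed form; the sketch below uses the continuous-beam (e.g. cobalt-60) variant with hypothetical charge and voltage readings.

```python
# Two-voltage ionic-recombination correction, continuous-beam formula:
# P_ion = (1 - (V1/V2)^2) / (Q1/Q2 - (V1/V2)^2), with V1 > V2.
def p_ion_continuous(v1, v2, q1, q2):
    """Correction factor applied to the charge collected at V1."""
    ratio = (v1 / v2) ** 2
    return (1.0 - ratio) / (q1 / q2 - ratio)

q1, q2 = 20.060e-9, 20.000e-9      # collected charge (C) at V1 and V2
p = p_ion_continuous(v1=300.0, v2=150.0, q1=q1, q2=q2)
print(f"P_ion = {p:.5f}, corrected charge = {q1 * p:.4e} C")
```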
Fytas, Nikolaos G; Martín-Mayor, Víctor
2016-06-01
It was recently shown [Phys. Rev. Lett. 110, 227201 (2013)] that the critical behavior of the random-field Ising model in three dimensions is ruled by a single universality class. This conclusion was reached only after a proper taming of the large scaling corrections of the model by applying a combined approach of various techniques, coming from the zero- and positive-temperature toolboxes of statistical physics. In the present contribution we provide a detailed description of this combined scheme, explaining in detail the zero-temperature numerical scheme and developing the generalized fluctuation-dissipation formula that allowed us to compute connected and disconnected correlation functions of the model. We discuss the error evolution of our method and we illustrate the infinite-size extrapolation of several observables within phenomenological renormalization. We present an extension of the quotients method that allows us to obtain estimates of the critical exponent α of the specific heat of the model via the scaling of the bond energy, and we discuss the self-averaging properties of the system and the algorithmic aspects of the maximum-flow algorithm used.
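The generic shape of such an infinite-size extrapolation is a correction-to-scaling fit, O(L) = O_inf + a*L^(-w), evaluated in the L -> infinity limit. The sketch below uses synthetic finite-lattice estimates, not the RFIM data.

```python
# Finite-size-scaling extrapolation to the thermodynamic limit.
import numpy as np
from scipy.optimize import curve_fit

def fss(L, O_inf, a, w):
    return O_inf + a * L**(-w)

L = np.array([8, 12, 16, 24, 32, 48, 64.0])          # lattice sizes
O = fss(L, 1.50, 0.8, 1.2) + np.random.default_rng(6).normal(0, 1e-3, L.size)

popt, pcov = curve_fit(fss, L, O, p0=[1.4, 1.0, 1.0])
err = np.sqrt(np.diag(pcov))
print(f"O_inf = {popt[0]:.4f} +/- {err[0]:.4f}  (true 1.5000)")
```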
NASA Astrophysics Data System (ADS)
Moustafa, Sabry Gad Al-Hak Mohammad
Molecular simulation (MS) methods (e.g., Monte Carlo (MC) and molecular dynamics (MD)) provide a reliable tool, especially at extreme conditions, for measuring solid properties. However, measuring them accurately and efficiently (smallest uncertainty for a given time) using MS can be a big challenge, especially with ab initio-type models. In addition, comparing with experimental results by extrapolating properties from finite size to the thermodynamic limit can be a critical obstacle. We first estimate the free energy (FE) of a crystalline system of a simple discontinuous potential, hard spheres (HS), at its melting condition. Several approaches are explored to determine the most efficient route. The comparison study shows a considerable improvement in efficiency over the standard MS methods that are known for solid phases. In addition, we were able to accurately extrapolate to the thermodynamic limit using relatively small system sizes. Although the method is applied to the HS model, it is readily extended to more complex hard-body potentials, such as hard tetrahedra. The harmonic approximation of the potential energy surface is usually an accurate model (especially at low temperature and high density) for describing many realistic solid phases. In addition, since the analysis is done numerically, the method is relatively cheap. Here, we apply lattice dynamics (LD) techniques to get the FE of clathrate hydrate structures. A rigid-bond model is assumed to describe the water molecules; this, however, requires additional orientational degrees of freedom to specify each molecule. We were able to efficiently avoid using those degrees of freedom through a mathematical transformation that uses only the atomic coordinates of the water molecules. In addition, the proton-disorder nature of hydrate water networks adds extra complexity to the problem, especially when extrapolation to the thermodynamic limit is needed. The finite-size effects of the proton-disorder contribution are shown to vary slowly with system size. This allows us to get the FE in the thermodynamic limit by extrapolating the single-isomer results to infinity and correcting them with the proton-disorder effect measured on a small system. These techniques are applied to empty hydrates (of types SI, SII, and SH) to estimate their thermodynamic stability. For conditions where the harmonic model fails, MS is needed to estimate rigorously the full (harmonic plus anharmonic) quantity. Although several MS methods are available for that purpose, they do not benefit from the harmonic nature of crystals, which represents the main contribution and is cheap to compute. In other words, those "conventional" methods always "start from scratch", even at states where the anharmonic part is negligible. In this work, we develop very efficient MS methods that leverage information, on the fly, from the harmonic behavior of configurations such that the anharmonic contributions are measured directly. The approach is named harmonically-mapped averaging (HMA) for the rest of this thesis. Since the major contribution to thermodynamic properties comes from the harmonic nature of the crystal, the fluctuations in the anharmonic quantities are small; hence, the uncertainty associated with the HMA method is small. The HMA method is given in a general formulation such that it can handle properties related to both first and second derivatives of the free energy. The HMA approach is first applied to the Lennard-Jones (LJ) model.
First and second derivatives of the FE with respect to temperature and volume yield the following properties: energy, pressure, isochoric heat capacity, bulk modulus, and thermal pressure coefficient. A considerable improvement in the efficiency of measuring those properties is observed, even at melting conditions where anharmonicity is non-negligible. First-derivative properties are computed with 100 to 10,000 times less computational effort, while the speedup for the second-derivative properties exceeds a millionfold for the highest density examined. In addition, the finite-size and long-range cutoff effects of the anharmonic contribution are much smaller than those due to the harmonic part. Therefore, we were able to get the thermodynamic limit of thermodynamic properties by extrapolating the harmonic contribution to infinity and correcting it with the anharmonic contribution from MS of small systems. Moreover, the anharmonic trajectory shows better features than the conventional one; it equilibrates almost instantaneously and the data are less correlated (i.e., good statistics can be obtained from a shorter trajectory). As a byproduct of the HMA, the free energy along an isochore is computed using thermodynamic integration (TI) of the energy. Again, the HMA shows substantial improvement (50-1000x speedup) over the well-known Frenkel-Ladd integration (with Einstein crystal reference) method. Finally, to test the method against a more sophisticated model, we applied it to an embedded-atom model (EAM) of iron. The results show qualitatively similar behavior to that of the LJ model. The method is then applied to tackle one of the long-standing problems of Earth science, namely the crystal structure of the Earth's inner core (IC). (Abstract shortened by UMI.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C., E-mail: varandas@uc.pt; Departamento de Física, Universidade Federal do Espírito Santo, 29075-910 Vitória; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
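For orientation, the simplest member of this family is the standard two-point inverse-cube extrapolation, E(X) = E_CBS + A/X^3, applied here to a (d, t) pair, i.e. X = 2, 3. The sketch below omits the paper's reassignment of hierarchical numbers and unified pair scheme, and the energies are placeholders.

```python
# Two-point complete-basis-set (CBS) extrapolation of correlation energy.
def cbs_two_point(e_x, e_y, x=2, y=3):
    """Solve E(X) = E_CBS + A/X^3 at X = x, y for E_CBS (hartree)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

e_dz, e_tz = -0.21432, -0.24821      # hypothetical MP2 correlation energies
print(f"E_CBS = {cbs_two_point(e_dz, e_tz):.5f} Eh")
```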
The use of extrapolation concepts to augment the Frequency Separation Technique
NASA Astrophysics Data System (ADS)
Alexiou, Spiros
2015-03-01
The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, which, however, have significant difficulties when the impact regime is approached. These difficulties are addressed by the Frequency Separation Technique, in which the impact limit is correctly recovered. The present work examines the possibility of combining the Frequency Separation Technique with the addition of extrapolation to improve results and minimize errors resulting from the neglect of fast-slow coupling, and thus obtain the exact result with a minimum of extra effort. To this end, the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.
Development of a 3D muon disappearance algorithm for muon scattering tomography
NASA Astrophysics Data System (ADS)
Blackwell, T. B.; Kudryavtsev, V. A.
2015-05-01
Upon passing through a material, muons lose energy, scatter off nuclei and atomic electrons, and can stop in the material. Muons will more readily lose energy in higher density materials. Therefore multiple muon disappearances within a localized volume may signal the presence of high-density materials. We have developed a new technique that improves the sensitivity of standard muon scattering tomography. This technique exploits these muon disappearances to perform non-destructive assay of an inspected volume. Muons that disappear have their track evaluated using a 3D line extrapolation algorithm, which is in turn used to construct a 3D tomographic image of the inspected volume. Results of Monte Carlo simulations that measure muon disappearance in different types of target materials are presented. The ability to differentiate between different density materials using the 3D line extrapolation algorithm is established. Finally the capability of this new muon disappearance technique to enhance muon scattering tomography techniques in detecting shielded HEU in cargo containers has been demonstrated.
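The disappearance idea reduces to straight-line extrapolation plus voxel accumulation; a compact sketch follows, with an illustrative geometry and sampling step rather than the paper's detector configuration.

```python
# Extend each incoming track with no outgoing partner as a straight line
# and accumulate the voxels it crosses (counts scale with path length).
import numpy as np

vol = np.zeros((20, 20, 20))             # inspected volume, 10 cm voxels
voxel, origin = 10.0, np.zeros(3)        # cm

def accumulate(entry, direction, steps=400, ds=1.0):
    """March along the extrapolated line, incrementing crossed voxels."""
    d = direction / np.linalg.norm(direction)
    for s in range(steps):
        p = entry + s * ds * d
        idx = np.floor((p - origin) / voxel).astype(int)
        if np.all(idx >= 0) and np.all(idx < vol.shape):
            vol[tuple(idx)] += 1

rng = np.random.default_rng(7)
for _ in range(500):                     # disappeared-muon tracks
    entry = np.array([rng.uniform(0, 200), rng.uniform(0, 200), 200.0])
    accumulate(entry, np.array([rng.normal(0, 0.2), rng.normal(0, 0.2), -1.0]))

print("hottest voxel count:", vol.max())
```

In an actual assay, a localized excess of such crossings flags a candidate high-density (e.g. shielded HEU) region, complementing the scattering-angle image.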
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-07
... the Draft Guidance of Applying Quantitative Data To Develop Data-Derived Extrapolation Factors for.... SUMMARY: EPA is announcing that Eastern Research Group, Inc. (ERG), a contractor to the EPA, will convene an independent panel of experts to review the draft document, ``Guidance for Applying Quantitative...
NASA Technical Reports Server (NTRS)
Hunter, H. E.; Amato, R. A.
1972-01-01
The results are presented of the application of Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction in the errors of estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for prediction of the entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle based on any available portion of the cycle.
Atomization Energies of SO and SO2; Basis Set Extrapolation Revisited
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Arnold, James (Technical Monitor)
1998-01-01
The addition of tight functions to sulphur and extrapolation to the complete basis set limit are required to obtain accurate atomization energies. Six different extrapolation procedures are tried. The best atomization energies come from the series of basis sets that yield the most consistent results for all extrapolation techniques. In the variable alpha approach, alpha values larger than 4.5 or smaller than 3 appear to suggest that the extrapolation may not be reliable. It does not appear possible to determine a reliable basis set series using only the triple and quadruple zeta based sets. The scalar relativistic effects reduce the atomization energies of SO and SO2 by 0.34 and 0.81 kcal/mol, respectively, and clearly must be accounted for if a highly accurate atomization energy is to be computed. The magnitude of the core-valence (CV) contribution to the atomization energy is affected by missing diffuse valence functions. The CV contribution is much more stable if basis set superposition errors are accounted for. A similar study of SF, SF(+), and SF6 shows that the best family of basis sets varies with the nature of the S bonding.
Xia, Hong; Luo, Zhendong
2017-01-01
In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model with very few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique. We analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and reliability of the SMFEROE model by means of numerical simulations.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least squares curve fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular wing. It is then applied to test data from a cantilevered swept wing model.
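A minimal sketch of the first step, under simple beam-bending assumptions (strain eps measured at distance c from the neutral axis, clamped root with zero slope and deflection, and a smoothing spline standing in for the paper's piecewise least-squares/cubic-spline fit):

import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.integrate import cumulative_trapezoid

x = np.linspace(0.0, 2.0, 20)            # sensor stations along the fiber (m)
eps = 1e-3 * (1.0 - x / 2.0)             # hypothetical measured bending strain
c = 0.05                                 # distance from neutral axis to sensors (m)

curvature = UnivariateSpline(x, eps, s=1e-10)(x) / c   # kappa = eps / c
slope = cumulative_trapezoid(curvature, x, initial=0.0)
deflection = cumulative_trapezoid(slope, x, initial=0.0)
print(f"tip deflection ~ {deflection[-1]:.4f} m")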
NASA Astrophysics Data System (ADS)
Voronchev, V. T.; Kukulin, V. I.
2000-12-01
An original extrapolation technique developed previously is modified and applied to study nuclear reactions in the 6Li + T system at energies E = 0-2 MeV. Cross sections of the gamma-ray-producing reactions 6Li(t,d1)7Li*[0.478] and 6Li(t,p1)8Li*[0.981], which have important diagnostic implications, are calculated. The (t,d1) nuclear data found exceed those accepted elsewhere by a factor of 2.5-3.5 at sub-barrier energies. The cross sections of the (t,p1) reaction are calculated for the first time.
NASA Astrophysics Data System (ADS)
Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo
2018-04-01
In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one class of methods developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods in that they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.
Application of the backward extrapolation method to pulsed neutron sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Talamo, Alberto; Gohar, Yousry
2017-09-23
Particle detectors operated in pulse mode are subject to the dead-time effect. When the average of the detector counts is constant over time, correcting for the dead-time effect is simple and can be accomplished by analytical formulas. However, when the average of the detector counts changes over time, it is more difficult to take the dead-time effect into account. When a subcritical nuclear assembly is driven by a pulsed neutron source, simple analytical formulas cannot be applied to the measured detector counts to correct for the dead-time effect because of the sharp change of the detector counts over time. This work addresses this issue by using the backward extrapolation method, which can be applied not only to a continuous (e.g., californium) external neutron source but also to a pulsed external neutron source (e.g., a particle accelerator) driving a subcritical nuclear assembly. Finally, the backward extrapolation method yields both the dead-time value and the true detector counts from the measured detector counts.
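For reference, the simple analytical correction alluded to above is the non-paralyzable dead-time relation m = n/(1 + n*tau), inverted as n = m/(1 - m*tau); it holds only for a steady count rate, which is exactly why the backward extrapolation method is needed for pulsed sources. A minimal sketch:

def true_rate(measured_rate_cps: float, dead_time_s: float) -> float:
    """Non-paralyzable dead-time correction for a constant count rate."""
    loss = 1.0 - measured_rate_cps * dead_time_s
    if loss <= 0.0:
        raise ValueError("measured rate inconsistent with dead time")
    return measured_rate_cps / loss

print(true_rate(9.0e4, 1.0e-6))  # ~98,901 cps true rate for 90,000 cps measured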
Semiempirical Theories of the Affinities of Negative Atomic Ions
NASA Technical Reports Server (NTRS)
Edie, John W.
1961-01-01
The determination of the electron affinities of negative atomic ions by means of direct experimental investigation is limited. To supplement the meager experimental results, several semiempirical theories have been advanced. One commonly used technique involves extrapolating the electron affinities along the isoelectronic sequences. The most recent of these extrapolations is studied by extending the method to include one more member of the isoelectronic sequence. When the results show that this extension does not increase the accuracy of the calculations, several possible explanations for this situation are explored. A different approach to the problem is suggested by the regularities appearing in the electron affinities. Noting that the regular linear pattern that exists for the ionization potentials of the p electrons as a function of Z repeats itself for different degrees of ionization q, the slopes and intercepts of these curves are extrapolated to the case of the negative ion. The method is placed on a theoretical basis by calculating the Slater parameters as functions of q and n, the number of equivalent p-electrons. These functions are no more than quadratic in q and n. The electron affinities are calculated by extending the linear relations that exist for the neutral atoms and positive ions to the negative ions. The extrapolated slopes are apparently correct, but the intercepts must be slightly altered to agree with experiment. For this purpose one or two experimental affinities (depending on the extrapolation method) are used in each of the two short periods. The two extrapolation methods used are: (A) an isoelectronic sequence extrapolation of the linear pattern as such; (B) the same extrapolation of a linearization of this pattern (configuration centers) combined with an extrapolation of the other terms of the ground configurations. The latter method is preferable, since it requires only one experimental point for each period. The results agree within experimental error with all data, except with the most recent value of C, which lies 10% lower.
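A minimal sketch of the extrapolation scheme sketched above, with invented placeholder numbers: fit the (assumed linear) ionization potential of the p electrons against nuclear charge Z separately for each degree of ionization q, then extrapolate the slopes and intercepts linearly to q = -1, the negative ion.

import numpy as np

Z = np.arange(5.0, 10.0)                  # nuclear charges along each sequence
q = np.array([0.0, 1.0, 2.0])             # degrees of ionization with data
# Hypothetical p-electron ionization potentials (eV), one row per q, linear in Z:
ip = np.array([2.0 + 1.8 * Z,             # q = 0 (neutral atoms)
               6.0 + 2.6 * Z,             # q = 1
               11.0 + 3.4 * Z])           # q = 2

slopes, intercepts = np.polyfit(Z, ip.T, 1)          # fit IP vs Z for each q at once
m = np.polyval(np.polyfit(q, slopes, 1), -1.0)       # slope extrapolated to q = -1
b = np.polyval(np.polyfit(q, intercepts, 1), -1.0)   # intercept extrapolated to q = -1
print(f"estimated electron affinity at Z = 8: {m * 8.0 + b:.2f} eV")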
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Looy, Kris; Bouma, Johan; Herbst, Michael
2017-12-28
Soil, through its various functions, plays a vital role in the Earth's ecosystems and provides multiple ecosystem services to humanity. Pedotransfer functions (PTFs) are simple to complex knowledge rules that relate available soil information to soil properties and variables that are needed to parameterize soil processes. In this article, we review the existing PTFs and document the new generation of PTFs developed in the different disciplines of Earth system science. To meet the methodological challenges for a successful application in Earth system modeling, we emphasize that PTF development has to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly represent the spatial heterogeneity of soils. PTFs should encompass the variability of the estimated soil property or process, in such a way that the estimation of parameters allows for validation and can also confidently provide for extrapolation and upscaling purposes, capturing the spatial variation in soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration, organic carbon content, root density, and vegetation water uptake. Further challenges are to be addressed in the parameterization of soil erosivity and land use change impacts at multiple scales. We argue that a comprehensive set of PTFs can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques provide a true breakthrough for this, yet further improvements are necessary for methods to deal with uncertainty and to validate applications at global scale.
C. Andrew Dolloff; Holly E. Jennings
1997-01-01
We compared estimates of stream habitat at the watershed scale using the basinwide visual estimation technique (BVET) and the representative reach extrapolation technique (RRET) in three small watersheds in the Appalachian Mountains. Within each watershed, all habitat units were sampled with the BVET; in contrast, three or four 100-m reaches were sampled with the RRET....
The design of L1-norm visco-acoustic wavefield extrapolators
NASA Astrophysics Data System (ADS)
Salam, Syed Abdul; Mousa, Wail A.
2018-04-01
Explicit depth frequency-space (f-x) prestack imaging is an attractive mechanism for seismic imaging. To date, the main focus of this method has been data migration assuming an acoustic medium, and very little work has assumed visco-acoustic media. Real seismic data usually suffer from attenuation and dispersion effects. To compensate for attenuation in a visco-acoustic medium, new operators are required. We propose using the L1-norm minimization technique to design visco-acoustic f-x extrapolators. To show the accuracy and compensation of the operators, prestack depth migration is performed on the challenging Marmousi model for both acoustic and visco-acoustic datasets. The final migrated images show that the proposed L1-norm extrapolation is practically stable and improves the resolution of the images.
An automated leaching method for the determination of opal in sediments and particulate matter
NASA Astrophysics Data System (ADS)
Müller, Peter J.; Schneider, Ralph
1993-03-01
An automated leaching method for the analysis of biogenic silica (opal) in sediments and particulate matter is described. The opaline material is extracted with 1 M NaOH at 85°C in a stainless steel vessel under constant stirring, and the increase in dissolved silica is continuously monitored. For this purpose, a minor portion of the leaching solution is cycled to an autoanalyzer and analyzed for dissolved silicon by molybdate-blue spectrophotometry. The resulting absorbance versus time plot is then evaluated according to the extrapolation procedure of DeMaster (1981). The method has been tested on sponge spicules, radiolarian tests, Recent and Pliocene diatomaceous ooze samples, clay minerals and quartz, artificial sediment mixtures, and on various plankton, sediment trap, and sediment samples. The results show that the relevant forms of biogenic opal in Quaternary sediments are quantitatively recovered. The time required for an analysis depends on the sample type, ranging from 10 to 20 min for plankton and sediment trap material and up to 40-60 min for Quaternary sediments. The silica co-extracted from silicate minerals is largely compensated for by the applied extrapolation technique. The remaining degree of uncertainty is on the order of 0.4 wt% SiO2 or less, depending on the clay mineral composition and content.
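A minimal sketch of the DeMaster (1981) extrapolation referred to above: after the biogenic opal has dissolved, the slow linear rise in dissolved silica reflects mineral leaching, so a line fitted to the late-time data has an intercept at t = 0 that estimates the biogenic contribution. The data below are invented for illustration.

import numpy as np

t = np.array([10., 15., 20., 25., 30., 40., 50., 60.])   # leaching time (min)
si = np.array([5.2, 5.3, 5.4, 5.5, 5.6, 5.8, 6.0, 6.2])  # dissolved SiO2 (wt%)

late = t >= 20.0                      # region dominated by clay-mineral leaching
slope, intercept = np.polyfit(t[late], si[late], 1)
print(f"biogenic opal ~ {intercept:.2f} wt% SiO2 (mineral slope {slope:.3f} wt%/min)")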
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, A.M.; Klein, S.; Oloff, L.
This paper discusses radionuclide imaging as it applies to bone and implant foot surgery. Where necessary, studies and information from published literature have been extrapolated in an attempt to apply them in differentiating between normal and abnormal healing osteotomies and implant prosthetics.
NASA Astrophysics Data System (ADS)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity, and isothermal compressibility were extrapolated along isochors, isotherms, and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models is proposed for methane, nitrogen and carbon monoxide.
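A minimal sketch of the canonical-ensemble reweighting idea at the core of such a technique, with made-up samples standing in for stored Lennard-Jones MCMC states: an average at a neighboring inverse temperature beta1 is estimated as <A>_1 = sum(A*w)/sum(w) with Boltzmann weights w = exp(-(beta1 - beta0)*U).

import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(-500.0, 10.0, 10_000)   # potential energies of stored MCMC states
A = U                                  # observable: here, the potential energy itself

beta0, beta1 = 1.0, 1.02               # original and neighboring inverse temperatures
logw = -(beta1 - beta0) * U
w = np.exp(logw - logw.max())          # stabilize before exponentiating
print("reweighted <U> =", np.sum(A * w) / np.sum(w))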
A retrospective evaluation of traffic forecasting techniques.
DOT National Transportation Integrated Search
2016-08-01
Traffic forecasting techniques, such as extrapolation of previous years' traffic volumes, regional travel demand models, or local trip generation rates, help planners determine needed transportation improvements. Thus, knowing the accuracy of t...
Numerical methods in acoustics
NASA Astrophysics Data System (ADS)
Candel, S. M.
This paper presents a survey of some computational techniques applicable to acoustic wave problems. Recent advances in wave extrapolation methods, spectral methods and boundary integral methods are discussed and illustrated by specific calculations.
NASA Technical Reports Server (NTRS)
Jaeck, C. L.
1976-01-01
A test was conducted in the Boeing Large Anechoic Chamber to determine static jet noise source locations of six baseline and suppressor nozzle models, and to establish a technique for extrapolating near-field data into the far field. The test covered nozzle pressure ratios from 1.44 to 2.25 and jet velocities from 412 to 594 m/s at a total temperature of 844 K.
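The abstract does not spell out the extrapolation procedure; as a hedged illustration, the simplest ingredient of any such near-to-far-field technique is the spherical-spreading (inverse-square) correction SPL2 = SPL1 - 20*log10(r2/r1), to which a full method would add per-band atmospheric-absorption corrections. The radii below are illustrative.

import math

def extrapolate_spl(spl_db: float, r_meas_m: float, r_far_m: float) -> float:
    """Spherical-spreading correction of sound pressure level, absorption ignored."""
    return spl_db - 20.0 * math.log10(r_far_m / r_meas_m)

# 17 ft arc (5.18 m) to 200 ft sideline (61 m):
print(f"{extrapolate_spl(110.0, 5.18, 61.0):.1f} dB")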
Constructive methods for the ground-state energy of fully interacting fermion gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilera Navarro, V.C.; Baker, G.A. Jr.; Benofy, L.P.
1987-11-01
A perturbation scheme based not on the ideal gas but on a system of purely repulsive cores is applied to a typical fully interacting fermion gas. This is "neutron matter" interacting via (a) the repulsive "Bethe homework-problem" potential, (b) a hard-core-plus-square-well potential, and (c) the Baker-Hind-Kahane modification of the latter, suitable for describing a more accurate two-nucleon potential. Padé extrapolation techniques and generalizations thereof are employed to represent both the density dependence and the attractive-coupling dependence of the perturbation expansion. Equations of state are constructed and compared with Jastrow-Monte Carlo calculations as well as expectations based on semiempirical mass formulas. Excellent agreement is found with the latter.
Methods to determine the growth domain in a multidimensional environmental space.
Le Marc, Yvan; Pin, Carmen; Baranyi, József
2005-04-15
Data from a database on microbial responses to the food environment (ComBase, see www.combase.cc) were used to study the boundary of growth of several pathogens (Aeromonas hydrophila, Escherichia coli, Listeria monocytogenes, Yersinia enterocolitica). Two methods were used to evaluate the growth/no growth interface. The first is an application of the Minimum Convex Polyhedron (MCP) introduced by Baranyi et al. [Baranyi, J., Ross, T., McMeekin, T., Roberts, T.A., 1996. The effect of parameterisation on the performance of empirical models used in Predictive Microbiology. Food Microbiol. 13, 83-91.]. The second method applies logistic regression to define the boundary of growth. The combination of these two different techniques can be a useful tool to handle the problem of extrapolation of predictive models at the growth limits.
Buckling behavior of Rene 41 tubular panels for a hypersonic aircraft wing
NASA Technical Reports Server (NTRS)
Ko, W. L.; Fields, R. A.; Shideler, J. L.
1986-01-01
The buckling characteristics of Rene 41 tubular panels for a hypersonic aircraft wing were investigated. The panels were repeatedly tested for buckling characteristics using a hypersonic wing test structure and a universal tension/compression testing machine. The nondestructive buckling tests were carried out under different combined load conditions and in different temperature environments. The force/stiffness technique was used to determine the buckling loads of the panels. In spite of some data scattering resulting from large extrapolations of the data-fitting curve (because of the termination of applied loads at relatively low percentages of the buckling loads), the overall test data correlate fairly well with theoretically predicted buckling interaction curves. Also, the structural efficiency of the tubular panels was found to be slightly higher than that of beaded panels.
Extrapolation to Nonequilibrium from Coarse-Grained Response Theory
NASA Astrophysics Data System (ADS)
Basu, Urna; Helden, Laurent; Krüger, Matthias
2018-05-01
Nonlinear response theory, in contrast to linear cases, involves (dynamical) details, and this makes application to many-body systems challenging. From the microscopic starting point we obtain an exact response theory for a small number of coarse-grained degrees of freedom. With it, an extrapolation scheme uses near-equilibrium measurements to predict far-from-equilibrium properties (here, second order responses). Because it does not involve system details, this approach can be applied to many-body systems. It is illustrated in a four-state model and in the near critical Ising model.
Resolution enhancement in digital holography by self-extrapolation of holograms.
Latychevskaia, Tatiana; Fink, Hans-Werner
2013-03-25
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
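A schematic sketch of the self-extrapolation loop described above, in the spirit of Gerchberg-Papoulis iterations, with plain FFTs standing in for the actual free-space propagator and a crude support constraint; illustrative only, not the authors' code.

import numpy as np

def self_extrapolate(hologram, support, n_iter=200):
    n = hologram.shape[0] * 2
    pad = (n - hologram.shape[0]) // 2
    known = np.zeros((n, n), dtype=bool)
    known[pad:-pad, pad:-pad] = True          # detector area inside padded frame
    field = np.zeros((n, n), dtype=complex)
    field[known] = hologram.ravel()
    for _ in range(n_iter):
        obj = np.fft.ifft2(field)             # back-propagate to object plane
        obj *= support                        # object-plane (support) constraint
        field = np.fft.fft2(obj)              # forward-propagate to detector plane
        field[known] = hologram.ravel()       # keep measured pixels unchanged
    return field                              # extrapolated hologram, 2x larger

holo = np.random.rand(64, 64)                 # placeholder "measured" hologram
supp = np.zeros((128, 128)); supp[48:80, 48:80] = 1.0
extended = self_extrapolate(holo, supp)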
NASA Astrophysics Data System (ADS)
Sokol, Zbyněk; Mejsnar, Jan; Pop, Lukáš; Bližňák, Vojtěch
2017-09-01
A new method for the probabilistic nowcasting of instantaneous rain rates (ENS) based on the ensemble technique and extrapolation along Lagrangian trajectories of current radar reflectivity is presented. Assuming inaccurate forecasts of the trajectories, an ensemble of precipitation forecasts is calculated and used to estimate the probability that rain rates will exceed a given threshold in a given grid point. Although the extrapolation neglects the growth and decay of precipitation, their impact on the probability forecast is taken into account by the calibration of forecasts using the reliability component of the Brier score (BS). ENS forecasts the probability that the rain rates will exceed thresholds of 0.1, 1.0 and 3.0 mm/h in squares of 3 km by 3 km. The lead times were up to 60 min, and the forecast accuracy was measured by the BS. The ENS forecasts were compared with two other methods: combined method (COM) and neighbourhood method (NEI). NEI considered the extrapolated values in the square neighbourhood of 5 by 5 grid points of the point of interest as ensemble members, and the COM ensemble was comprised of united ensemble members of ENS and NEI. The results showed that the calibration technique significantly improves bias of the probability forecasts by including additional uncertainties that correspond to neglected processes during the extrapolation. In addition, the calibration can also be used for finding the limits of maximum lead times for which the forecasting method is useful. We found that ENS is useful for lead times up to 60 min for thresholds of 0.1 and 1 mm/h and approximately 30 to 40 min for a threshold of 3 mm/h. We also found that a reasonable size of the ensemble is 100 members, which provided better scores than ensembles with 10, 25 and 50 members. In terms of the BS, the best results were obtained by ENS and COM, which are comparable. However, ENS is better calibrated and thus preferable.
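A minimal sketch of the probabilistic step: turn an ensemble of extrapolated rain-rate forecasts into per-grid-point exceedance probabilities and score them with the Brier score. The ensemble here is synthetic, not radar data.

import numpy as np

rng = np.random.default_rng(2)
ensemble = rng.gamma(shape=0.8, scale=1.5, size=(100, 64, 64))  # members x grid
observed = rng.gamma(shape=0.8, scale=1.5, size=(64, 64))

threshold = 1.0  # mm/h
prob = (ensemble > threshold).mean(axis=0)       # per-point exceedance probability
outcome = (observed > threshold).astype(float)
brier = np.mean((prob - outcome) ** 2)           # Brier score (lower is better)
print(f"Brier score = {brier:.3f}")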
Loucas, Bradford D.; Shuryak, Igor; Cornforth, Michael N.
2016-01-01
Whole-chromosome painting (WCP) typically involves the fluorescent staining of a small number of chromosomes. Consequently, it is capable of detecting only a fraction of exchanges that occur among the full complement of chromosomes in a genome. Mathematical corrections are commonly applied to WCP data in order to extrapolate the frequency of exchanges occurring in the entire genome [whole-genome equivalency (WGE)]. However, the reliability of WCP to WGE extrapolations depends on underlying assumptions whose conditions are seldom met in actual experimental situations, in particular the presumed absence of complex exchanges. Using multi-fluor fluorescence in situ hybridization (mFISH), we analyzed the induction of simple exchanges produced by graded doses of 137Cs gamma rays (0–4 Gy), and also 1.1 GeV 56Fe ions (0–1.5 Gy). In order to represent cytogenetic damage as it would have appeared to the observer following standard three-color WCP, all mFISH information pertaining to exchanges that did not specifically involve chromosomes 1, 2, or 4 was ignored. This allowed us to reconstruct dose–responses for three-color apparently simple (AS) exchanges. Using extrapolation methods similar to those derived elsewhere, these were expressed in terms of WGE for comparison to mFISH data. Based on AS events, the extrapolated frequencies systematically overestimated those actually observed by mFISH. For gamma rays, these errors were practically independent of dose. When constrained to a relatively narrow range of doses, the WGE corrections applied to both 56Fe and gamma rays predicted genome-equivalent damage with a level of accuracy likely sufficient for most applications. However, the apparent accuracy associated with WCP to WGE corrections is both fortuitous and misleading. This is because (in normal practice) such corrections can only be applied to AS exchanges, which are known to include complex aberrations in the form of pseudosimple exchanges. When WCP to WGE corrections are applied to true simple exchanges, the results are less than satisfactory, leading to extrapolated values that underestimate the true WGE response by unacceptably large margins. Likely explanations for these results are discussed, as well as their implications for radiation protection. Thus, in seeming contradiction to the notion that complex aberrations should be avoided altogether in WGE corrections – and in violation of assumptions upon which these corrections are based – their inadvertent inclusion in three-color WCP data is actually required in order for them to yield even marginally acceptable results. PMID:27014627
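For context, a common form of the WCP-to-WGE correction (in the style of Lucas et al.) estimates the genome-equivalent exchange frequency from the painted-fraction frequency as F_G = F_P / (2.05*fp*(1 - fp)), where fp is the fraction of the genome painted; as the abstract stresses, this presumes simple exchanges only. A minimal sketch with an illustrative painted fraction:

def wcp_to_wge(painted_freq: float, fp: float) -> float:
    """Lucas-style genome-equivalency correction for painting data."""
    return painted_freq / (2.05 * fp * (1.0 - fp))

# e.g. chromosomes 1, 2 and 4 jointly covering ~22% of the genome (illustrative):
print(f"{wcp_to_wge(0.05, 0.22):.3f} exchanges per cell, genome-equivalent")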
Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E
2017-12-01
Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
Smooth extrapolation of unknown anatomy via statistical shape models
NASA Astrophysics Data System (ADS)
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
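A minimal sketch of the Thin Plate Spline idea evaluated above, using SciPy's RBF interpolator on displacements between the shape-model estimate and the known patient vertices; the arrays are random placeholders for real mesh vertices.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
known_est = rng.uniform(0, 100, (200, 3))             # SSM estimate at known vertices
known_true = known_est + rng.normal(0, 1, (200, 3))   # corresponding patient vertices
unknown_est = rng.uniform(0, 100, (50, 3))            # SSM estimate where anatomy is missing

# Train a thin-plate spline on the displacement field, then warp the estimate:
tps = RBFInterpolator(known_est, known_true - known_est, kernel='thin_plate_spline')
unknown_warped = unknown_est + tps(unknown_est)       # smoothly merged extrapolation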
Solution of the finite Milne problem in stochastic media with RVT Technique
NASA Astrophysics Data System (ADS)
Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.
2017-12-01
This paper presents the solution to the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To get an explicit form for the radiant energy density, the linear extrapolation distance, reflectivity, and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity, and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
The effect of PeakForce tapping mode AFM imaging on the apparent shape of surface nanobubbles.
Walczyk, Wiktoria; Schön, Peter M; Schönherr, Holger
2013-05-08
Until now, TM AFM (tapping mode or intermittent contact mode atomic force microscopy) has been the most often applied direct imaging technique to analyze surface nanobubbles at the solid-aqueous interface. While the presence and number density of nanobubbles can be unequivocally detected and estimated, it remains unclear how much the a priori invasive nature of AFM affects the apparent shapes and dimensions of the nanobubbles. To be able to successfully address the unsolved questions in this field, the accurate knowledge of the nanobubbles' dimensions, radii of curvature, etc. is necessary. In this contribution we present a comparative study of surface nanobubbles on HOPG (highly oriented pyrolytic graphite) in water acquired with (i) TM AFM and (ii) the recently introduced PFT (PeakForce tapping) mode, in which the force exerted on the nanobubbles rather than the amplitude of the resonating cantilever is used as the AFM feedback parameter during imaging. In particular, we analyzed how the apparent size and shape of nanobubbles depend on the maximum applied force in PFT AFM. Even for forces as small as 73 pN, the nanobubbles appeared smaller than their true size, which was estimated from an extrapolation of the bubble height to zero applied force. In addition, the size underestimation was found to be more pronounced for larger bubbles. The extrapolated true nanoscopic contact angles for nanobubbles on HOPG, measured in PFT AFM, ranged from 145° to 175° and were only slightly underestimated by scanning with non-zero forces. This result was comparable to the nanoscopic contact angles of 160°-175° measured using TM AFM in the same set of experiments. Both values disagree, in accordance with the literature, with the macroscopic contact angle of water on HOPG, measured here to be 63° ± 2°.
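A minimal sketch of the zero-force extrapolation described above: fit the apparent bubble height against the PeakForce setpoint and take the intercept at zero applied force as the true height. The values are invented for illustration.

import numpy as np

force_pN = np.array([73., 150., 300., 500.])     # PeakForce setpoints
height_nm = np.array([18.9, 17.8, 15.6, 12.7])   # apparent bubble heights

slope, h0 = np.polyfit(force_pN, height_nm, 1)   # linear fit: height vs force
print(f"extrapolated true height ~ {h0:.1f} nm at zero applied force")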
Hao, Zisu; Malyala, Divya; Dean, Lisa; Ducoste, Joel
2017-04-01
Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DWET) was developed to simultaneously measure LCFFA and FOG concentrations in oily wastewater suspensions. This method is based on the analysis of the Attenuated Total Reflectance-Fourier transform infrared spectroscopy (ATR-FTIR) spectrum, in which the absorbance of the carboxyl bond (1710 cm-1) and the triglyceride bond (1745 cm-1) were selected as the characteristic wavenumbers for total LCFFAs and FOG, respectively. A series of experiments using pure organic samples (Oleic acid/Palmitic acid in Canola oil) were performed that showed a linear relationship between the absorption at these two wavenumbers and the total LCFFA. In addition, the DWET method was validated using GC analyses, which displayed a high degree of agreement between the two methods for simulated oily wastewater suspensions (1-35% Oleic acid in Canola oil/Peanut oil). The average determination error of the DWET approach was ~5% when the LCFFA fraction was above 10 wt%, indicating that the DWET could be applied as an experimental method for the determination of both LCFFA and FOG concentrations in oily wastewater suspensions. Potential applications of this DWET approach include: (1) monitoring the LCFFA and FOG concentrations in grease interceptor (GI) effluents for regulatory compliance; (2) evaluating alternative LCFFA/FOG removal technologies; and (3) quantifying potential FOG deposit high accumulation zones in the sewer collection system.
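A minimal sketch of the two-wavenumber idea behind DWET: with calibrated absorptivities at the two characteristic wavenumbers, the measured absorbances give a 2x2 linear system for the LCFFA and FOG concentrations. The coefficients below are hypothetical calibration values, not the paper's.

import numpy as np

#            LCFFA   FOG
E = np.array([[0.42, 0.03],   # absorptivity at 1710 cm-1 (carboxyl)
              [0.05, 0.55]])  # absorptivity at 1745 cm-1 (triglyceride ester)
a = np.array([0.31, 0.88])    # measured absorbances at the two wavenumbers

c_lcffa, c_fog = np.linalg.solve(E, a)
print(f"LCFFA ~ {c_lcffa:.2f}, FOG ~ {c_fog:.2f} (units set by the calibration)")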
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perko, Z; Bortfeld, T; Hong, T
Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot necessarily be applied to other modalities with drastically different dose distributions.
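For context, the standard (uniform-dose) BED extrapolation that the abstract builds on can be sketched as follows; the paper's dose shape factor correction is not reproduced here. Given a tolerance D1 delivered in n1 fractions, the iso-BED total dose D2 in n2 fractions solves D2*(1 + D2/(n2*ab)) = D1*(1 + D1/(n1*ab)), with ab the alpha/beta ratio.

import math

def isoeffective_dose(d1_total: float, n1: int, n2: int, ab: float) -> float:
    """Total dose in n2 fractions matching the BED of d1_total in n1 fractions."""
    bed = d1_total * (1.0 + d1_total / (n1 * ab))                    # target BED
    per_fraction = 0.5 * ab * (math.sqrt(1.0 + 4.0 * bed / (n2 * ab)) - 1.0)
    return n2 * per_fraction

# e.g. a 32 Gy / 15-fraction mean-dose limit re-expressed in 5 fractions (ab assumed 3 Gy):
print(f"{isoeffective_dose(32.0, 15, 5, ab=3.0):.1f} Gy in 5 fractions")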
Using an external gating signal to estimate noise in PET with an emphasis on tracer avid tumors
NASA Astrophysics Data System (ADS)
Schmidtlein, C. R.; Beattie, B. J.; Bailey, D. L.; Akhurst, T. J.; Wang, W.; Gönen, M.; Kirov, A. S.; Humm, J. L.
2010-10-01
The purpose of this study is to establish and validate a methodology for estimating the standard deviation of voxels with large activity concentrations within a PET image using replicate imaging that is immediately available for use in the clinic. To do this, ensembles of voxels in the averaged replicate images were compared to the corresponding ensembles in images derived from summed sinograms. In addition, the replicate imaging noise estimate was compared to a noise estimate based on an ensemble of voxels within a region. To make this comparison two phantoms were used. The first phantom was a seven-chamber phantom constructed of 1 liter plastic bottles. Each chamber of this phantom was filled with a different activity concentration relative to the lowest activity concentration, with ratios of 1:1, 1:1, 2:1, 2:1, 4:1, 8:1 and 16:1. The second phantom was a GE Well-Counter phantom. These phantoms were imaged and reconstructed on a GE DSTE PET/CT scanner with 2D and 3D reprojection filtered backprojection (FBP), and with 2D and 3D ordered subset expectation maximization (OSEM). A series of tests were applied to the resulting images that showed that the region and replicate imaging methods for estimating standard deviation were equivalent for backprojection reconstructions. Furthermore, the noise properties of the FBP algorithms allowed scaling the replicate estimates of the standard deviation by a factor of 1/√N, where N is the number of replicate images, to obtain the standard deviation of the full data image. This was not the case for OSEM image reconstruction. Due to nonlinearity of the OSEM algorithm, the noise is shown to be both position and activity concentration dependent in such a way that no simple scaling factor can be used to extrapolate noise as a function of counts. The use of the Well-Counter phantom contributed to the development of a heuristic extrapolation of the noise as a function of radius in FBP. In addition, the signal-to-noise ratio for high uptake objects was confirmed to be higher with backprojection image reconstruction methods. These techniques were applied to several patient data sets acquired in either 2D or 3D mode, with 18F (FLT and FDG). Images of the standard deviation and signal-to-noise ratios were constructed and the standard deviations of the tumors' uptake were determined. Finally, a radial noise extrapolation relationship deduced in this paper was applied to patient data.
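A minimal sketch of the replicate-noise estimate and the 1/√N scaling discussed above (valid for linear FBP reconstructions, not OSEM), with a synthetic stack of replicate images:

import numpy as np

rng = np.random.default_rng(4)
replicates = rng.poisson(50.0, size=(8, 32, 32)).astype(float)  # N=8 replicate images

N = replicates.shape[0]
sigma_single = replicates.std(axis=0, ddof=1)   # per-voxel replicate noise
sigma_full = sigma_single / np.sqrt(N)          # noise of the full-data image (FBP only)
print(f"mean voxel sigma: single={sigma_single.mean():.2f}, full={sigma_full.mean():.2f}")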
NASA Astrophysics Data System (ADS)
Mitchell, G. A.; Gharib, J. J.; Doolittle, D. F.
2015-12-01
Methane gas flux from the seafloor to the atmosphere is an important variable for global carbon cycle and climate models, yet is poorly constrained. Methodologies used to estimate seafloor gas flux commonly employ a combination of acoustic and optical techniques. These techniques often use hull-mounted multibeam echosounders (MBES) to quickly ensonify large volumes of the water column for acoustic backscatter anomalies indicative of gas bubble plumes. Detection of these water column anomalies with a MBES provides information on the lateral distribution of the plumes, the midwater dimensions of the plumes, and their positions on the seafloor. Seafloor plume locations are targeted for visual investigation using a remotely operated vehicle (ROV) to determine bubble emission rates, venting behaviors, bubble sizes, and ascent velocities. Once these variables are measured in situ, an extrapolation of gas flux is made over the survey area using the number of remotely mapped flares. This methodology was applied to a geophysical survey conducted in 2013 over a large seafloor crater that developed in response to an oil well blowout in 1983 offshore Papua New Guinea. The site was investigated by multibeam and sidescan mapping, sub-bottom profiling, 2-D high-resolution multi-channel seismic reflection, and ROV video and coring operations. Numerous water column plumes were detected in the data, suggesting vigorously active vents within and near the seafloor crater (Figure 1). This study uses dual-frequency MBES datasets (Reson 7125, 200/400 kHz) and ROV video imagery of the active hydrocarbon seeps to estimate total gas flux from the crater. Plumes of bubbles were extracted from the water column data using threshold filtering techniques. Analysis of video images of the seep emission sites within the crater provided estimates of bubble size, expulsion frequency, and ascent velocity. The average gas flux derived from the ROV video observations is extrapolated over the number of individual flares detected acoustically to estimate the total gas flux from the survey area. The gas flux estimate from the water column filtering and ROV observations yields a range of 2.2-6.6 mol CH4/min.
Li, Zenghui; Xu, Bin; Yang, Jian; Song, Jianshe
2015-01-01
This paper focuses on suppressing spectral overlap for sub-band spectral estimation, with which we can greatly decrease the computational complexity of existing spectral estimation algorithms, such as nonlinear least squares spectral analysis and non-quadratic regularized sparse representation. Firstly, our study shows that the nominal ability of the high-order analysis filter to suppress spectral overlap is greatly weakened when filtering a finite-length sequence, because many meaningless zeros are used as samples in convolution operations. Next, an extrapolation-based filtering strategy is proposed to produce a series of estimates as the substitutions of the zeros and to recover the suppression ability. Meanwhile, a steady-state Kalman predictor is applied to perform a linearly-optimal extrapolation. Finally, several typical methods for spectral analysis are applied to demonstrate the effectiveness of the proposed strategy. PMID:25609038
Estimating Spectra from Photometry
NASA Astrophysics Data System (ADS)
Kalmbach, J. Bryce; Connolly, Andrew J.
2017-12-01
Measuring the physical properties of galaxies such as redshift frequently requires the use of spectral energy distributions (SEDs). SED template sets are, however, often small in number and cover limited portions of photometric color space. Here we present a new method to estimate SEDs as a function of color from a small training set of template SEDs. We first cover the mathematical background behind the technique before demonstrating our ability to reconstruct spectra based upon colors and then compare our results to other common interpolation and extrapolation methods. When the photometric filters and spectra overlap, we show that the error in the estimated spectra is reduced by more than 65% compared to the more commonly used techniques. We also show an expansion of the method to wavelengths beyond the range of the photometric filters. Finally, we demonstrate the usefulness of our technique by generating 50 additional SED templates from an original set of 10 and by applying the new set to photometric redshift estimation. We are able to reduce the photometric redshift standard deviation by at least 22.0% and the outlier rejected bias by over 86.2% compared to the original set for z ≤ 3.
Infrared length scale and extrapolations for the no-core shell model
Wendt, K. A.; Forssén, C.; Papenbrock, T.; ...
2015-06-03
In this paper, we precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the A-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of A nucleons in the NCSM space to that of A nucleons in a 3(A-1)-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for 6Li. We apply our result and perform accurate IR extrapolations for bound states of 4He, 6He, 6Li, and 7Li. Finally, we also attempt to extrapolate NCSM results for 10B and 16O with bare interactions from chiral effective field theory over tens of MeV.
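A minimal sketch of the kind of IR extrapolation applied above: fit E(L) = E_inf + a*exp(-2*k*L) to energies computed at several IR lengths and read off the L -> infinity limit. The sample energies are synthetic, not NCSM output.

import numpy as np
from scipy.optimize import curve_fit

def ir_model(L, e_inf, a, k):
    return e_inf + a * np.exp(-2.0 * k * L)

L = np.array([8.0, 10.0, 12.0, 14.0, 16.0])            # IR lengths (fm)
E = -28.30 + 4.0 * np.exp(-2.0 * 0.25 * L) + 1e-3      # hypothetical energies (MeV)

(e_inf, a, k), _ = curve_fit(ir_model, L, E, p0=(E[-1], 1.0, 0.2))
print(f"E(L -> inf) ~ {e_inf:.3f} MeV, k ~ {k:.3f} fm^-1")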
Power maps and wavefront for progressive addition lenses in eyeglass frames.
Mejía, Yobani; Mora, David A; Díaz, Daniel E
2014-10-01
To evaluate a method for measuring the cylinder, sphere, and wavefront of progressive addition lenses (PALs) in eyeglass frames. We examine the contour maps of cylinder, sphere, and wavefront of a PAL assembled in an eyeglass frame using an optical system based on a Hartmann test. To reduce the data noise, particularly in the border of the eyeglass frame, we implement a method based on the Fourier analysis to extrapolate spots outside the eyeglass frame. The spots are extrapolated up to a circular pupil that circumscribes the eyeglass frame and compared with data obtained from a circular uncut PAL. By using the Fourier analysis to extrapolate spots outside the eyeglass frame, we can remove the edge artifacts of the PAL within its frame and implement the modal method to fit wavefront data with Zernike polynomials within a circular aperture that circumscribes the frame. The extrapolated modal maps from framed PALs accurately reflect maps obtained from uncut PALs and provide smoothed maps for the cylinder and sphere inside the eyeglass frame. The proposed method for extrapolating spots outside the eyeglass frame removes edge artifacts of the contour maps (wavefront, cylinder, and sphere), which may be useful to facilitate measurements such as the length and width of the progressive corridor for a PAL in its frame. The method can be applied to any shape of eyeglass frame.
26 CFR 1.263A-7 - Changing a method of accounting under section 263A.
Code of Federal Regulations, 2010 CFR
2010-04-01
... extrapolation, rather than based on the facts and circumstances of a particular year's data. All three methods... analyze the production and resale data for that particular year and apply the rules and principles of... books and records, actual financial and accounting data which is required to apply the capitalization...
26 CFR 1.263A-7 - Changing a method of accounting under section 263A.
Code of Federal Regulations, 2011 CFR
2011-04-01
... extrapolation, rather than based on the facts and circumstances of a particular year's data. All three methods... analyze the production and resale data for that particular year and apply the rules and principles of... books and records, actual financial and accounting data which is required to apply the capitalization...
Regularization with numerical extrapolation for finite and UV-divergent multi-loop integrals
NASA Astrophysics Data System (ADS)
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Kapenga, J.; Olagbemi, O.
2018-03-01
We give numerical integration results for Feynman loop diagrams such as those covered by Laporta (2000) and by Baikov and Chetyrkin (2010), and which may give rise to loop integrals with UV singularities. We explore automatic adaptive integration using multivariate techniques from the PARINT package for multivariate integration, as well as iterated integration with programs from the QUADPACK package, and a trapezoidal method based on a double exponential transformation. PARINT is layered over MPI (Message Passing Interface), and incorporates advanced parallel/distributed techniques including load balancing among processes that may be distributed over a cluster or a network/grid of nodes. Results are included for 2-loop vertex and box diagrams and for sets of 2-, 3- and 4-loop self-energy diagrams with or without UV terms. Numerical regularization of integrals with singular terms is achieved by linear and non-linear extrapolation methods.
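A minimal sketch of linear (Richardson-type) extrapolation of a regulated quantity to vanishing regulator, with a toy polynomial standing in for an expensive regulated Feynman integral evaluated at a geometric sequence of regulator values:

import numpy as np

def f(eps):
    return 2.0 + 3.0 * eps - 1.5 * eps**2   # toy regulated integral, I(0) = 2

eps = 0.5 ** np.arange(1, 7)                # geometric sequence 1/2, 1/4, ..., 1/64
vals = f(eps)

# Exact polynomial extrapolation through all sample points; the constant
# term is the extrapolated value at eps = 0.
coeffs = np.polyfit(eps, vals, deg=len(eps) - 1)
print(f"extrapolated I(0) ~ {coeffs[-1]:.6f}")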
NASA Technical Reports Server (NTRS)
Stimpert, D. L.
1975-01-01
Tests of a twenty-inch-diameter, low-tip-speed, low-pressure-ratio fan, which investigated aft fan noise reduction techniques, are reported. The 1/3-octave-band sound data are presented for all the configurations tested. The model data are presented on a 17-foot arc and extrapolated to a 200-foot sideline.
Fourth order scheme for wavelet based solution of Black-Scholes equation
NASA Astrophysics Data System (ADS)
Finěk, Václav
2017-12-01
The present paper is devoted to the numerical solution of the Black-Scholes equation for pricing European options. We apply the Crank-Nicolson scheme with Richardson extrapolation for time discretization and Hermite cubic spline wavelets with four vanishing moments for space discretization. This scheme is fourth-order accurate in both time and space. Computational results indicate that the Crank-Nicolson scheme with Richardson extrapolation significantly decreases the amount of computational work. We also show numerically that the optimal convergence rate for the scheme is obtained without a startup procedure, despite the data irregularities in the model.
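A minimal sketch of Richardson extrapolation in time for a second-order scheme: combine solutions obtained with steps k and k/2 as (4*u_half - u_full)/3 to cancel the leading O(k^2) error. The toy problem below is a scalar ODE, standing in for the Black-Scholes space-time solver.

import numpy as np

def crank_nicolson(u0, lam, k, n_steps):
    """Crank-Nicolson for u' = lam*u, a simple second-order-in-time example."""
    u = u0
    for _ in range(n_steps):
        u = u * (1.0 + 0.5 * k * lam) / (1.0 - 0.5 * k * lam)
    return u

u0, lam, T = 1.0, -1.0, 1.0
u_full = crank_nicolson(u0, lam, k=T / 16, n_steps=16)
u_half = crank_nicolson(u0, lam, k=T / 32, n_steps=32)
u_rich = (4.0 * u_half - u_full) / 3.0       # Richardson-extrapolated solution

exact = u0 * np.exp(lam * T)
print(f"errors: CN={abs(u_full - exact):.2e}, Richardson={abs(u_rich - exact):.2e}")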
Arthropod Surveillance Programs: Basic Components, Strategies, and Analysis.
Cohnstaedt, Lee W; Rochon, Kateryn; Duehl, Adrian J; Anderson, John F; Barrera, Roberto; Su, Nan-Yao; Gerry, Alec C; Obenauer, Peter J; Campbell, James F; Lysyk, Tim J; Allan, Sandra A
2012-03-01
Effective entomological surveillance planning stresses a careful consideration of methodology, trapping technologies, and analysis techniques. Herein, the basic principles and technological components of arthropod surveillance plans are described, as promoted in the symposium "Advancements in arthropod monitoring technology, techniques, and analysis" presented at the 58th annual meeting of the Entomological Society of America in San Diego, CA. Interdisciplinary examples of arthropod monitoring for urban, medical, and veterinary applications are reviewed. Arthropod surveillance consists of the three components: 1) sampling method, 2) trap technology, and 3) analysis technique. A sampling method consists of selecting the best device or collection technique for a specific location and sampling at the proper spatial distribution, optimal duration, and frequency to achieve the surveillance objective. Optimized sampling methods are discussed for several mosquito species (Diptera: Culicidae) and ticks (Acari: Ixodidae). The advantages and limitations of novel terrestrial and aerial insect traps, artificial pheromones and kairomones are presented for the capture of red flour beetle (Coleoptera: Tenebrionidae), small hive beetle (Coleoptera: Nitidulidae), bed bugs (Hemiptera: Cimicidae), and Culicoides (Diptera: Ceratopogonidae) respectively. After sampling, extrapolating real world population numbers from trap capture data are possible with the appropriate analysis techniques. Examples of this extrapolation and action thresholds are given for termites (Isoptera: Rhinotermitidae) and red flour beetles.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
NASA Astrophysics Data System (ADS)
Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong
2018-04-01
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.
Jurgenson, E. D.; Maris, P.; Furnstahl, R. J.; ...
2013-05-13
The similarity renormalization group (SRG) is used to soften interactions for ab initio nuclear structure calculations by decoupling low- and high-energy Hamiltonian matrix elements. The substantial contribution of both initial and SRG-induced three-nucleon forces requires their consistent evolution in a three-particle basis space before applying them to larger nuclei. While, in principle, the evolved Hamiltonians are unitarily equivalent, in practice the need for basis truncation introduces deviations, which must be monitored. Here we present benchmark no-core full configuration calculations with SRG-evolved interactions in p-shell nuclei over a wide range of softening. These calculations are used to assess convergence properties, extrapolation techniques, and the dependence of energies, including four-body contributions, on the SRG resolution scale.
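Among the extrapolation techniques assessed in such no-core calculations, a common one fits the ground-state energy against the basis size Nmax with an exponential form. The sketch below illustrates that fit; the energies, the functional form E(Nmax) = E_inf + a*exp(-b*Nmax), and the starting values are illustrative assumptions, not results from the paper.

```python
# Exponential basis-size extrapolation sketch for configuration-interaction
# energies; all numbers are synthetic illustration values.
import numpy as np
from scipy.optimize import curve_fit

def model(nmax, e_inf, a, b):
    return e_inf + a * np.exp(-b * nmax)

nmax = np.array([4, 6, 8, 10, 12])
energies = np.array([-26.1, -27.3, -27.9, -28.2, -28.35])  # MeV, synthetic

popt, pcov = curve_fit(model, nmax, energies, p0=(-28.5, 10.0, 0.3))
print(f"extrapolated ground-state energy: {popt[0]:.2f} MeV "
      f"(+/- {np.sqrt(pcov[0, 0]):.2f})")
```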
On the numerical computation of nonlinear force-free magnetic fields. [from solar photosphere
NASA Technical Reports Server (NTRS)
Wu, S. T.; Sun, M. T.; Chang, H. M.; Hagyard, M. J.; Gary, G. A.
1990-01-01
An algorithm has been developed to extrapolate nonlinear force-free magnetic fields from the photosphere, given the proper boundary conditions. This paper presents the results of this work, describing the mathematical formalism that was developed and the numerical techniques employed, and commenting on the stability criteria and accuracy established for these numerical schemes. An analytical solution is used for a benchmark test; the results show that the computational accuracy for the case of a nonlinear force-free magnetic field was on the order of a few percent (less than 5 percent). This newly developed scheme was applied to analyze a solar vector magnetogram, and the results were compared with the results deduced from the classical potential field method. The comparison shows that additional physical features of the vector magnetogram were revealed in the nonlinear force-free case.
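For orientation, here is a minimal sketch of the classical potential-field extrapolation that serves as the paper's point of comparison: assuming a current-free field, each Fourier mode of the boundary Bz decays with height as exp(-|k|z). The grid, bipole magnetogram, and heights are illustrative assumptions.

```python
# FFT-based upward continuation of a line-of-sight magnetogram under the
# potential-field (current-free) assumption: Bz(k, z) = Bz(k, 0) * exp(-|k| z).
import numpy as np

def potential_field_bz(bz0, dx, heights):
    ny, nx = bz0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    bz_hat = np.fft.fft2(bz0)
    return [np.fft.ifft2(bz_hat * np.exp(-k * z)).real for z in heights]

# synthetic magnetogram: a simple bipole
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
bz0 = (np.exp(-((x - 0.2) ** 2 + y ** 2) * 50)
       - np.exp(-((x + 0.2) ** 2 + y ** 2) * 50))
layers = potential_field_bz(bz0, dx=2.0 / 128, heights=[0.05, 0.1, 0.2])
print([f"{layer.std():.3e}" for layer in layers])  # field smooths with height
```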
New computational tools for H/D determination in macromolecular structures from neutron data.
Siliqi, Dritan; Caliandro, Rocco; Carrozzini, Benedetta; Cascarano, Giovanni Luca; Mazzone, Annamaria
2010-11-01
Two new computational methods dedicated to neutron crystallography, called n-FreeLunch and DNDM-NDM, have been developed and successfully tested. The aim in developing these methods is to determine hydrogen and deuterium positions in macromolecular structures by using information from neutron density maps. Of particular interest is resolving cases in which the geometrically predicted hydrogen or deuterium positions are ambiguous. The methods are an evolution of approaches that are already applied in X-ray crystallography: extrapolation beyond the observed resolution (known as the FreeLunch procedure) and a difference electron-density modification (DEDM) technique combined with the electron-density modification (EDM) tool (known as DEDM-EDM). It is shown that the two methods are complementary to each other and are effective in finding the positions of H and D atoms in neutron density maps.
Goldstein, Darlene R
2006-10-01
Studies of gene expression using high-density short oligonucleotide arrays have become a standard in a variety of biological contexts. Of the expression measures that have been proposed to quantify expression in these arrays, multi-chip-based measures have been shown to perform well. As gene expression studies increase in size, however, utilizing multi-chip expression measures is more challenging in terms of computing memory requirements and time. A strategic alternative to exact multi-chip quantification on a full large chip set is to approximate expression values based on subsets of chips. This paper introduces an extrapolation method, Extrapolation Averaging (EA), and a resampling method, Partition Resampling (PR), to approximate expression in large studies. An examination of properties indicates that subset-based methods can perform well compared with exact expression quantification. The focus is on short oligonucleotide chips, but the same ideas apply equally well to any array type for which expression is quantified using an entire set of arrays, rather than for only a single array at a time. Software implementing Partition Resampling and Extrapolation Averaging is under development as an R package for the BioConductor project.
Brennan, Scott F; Cresswell, Andrew G; Farris, Dominic J; Lichtwark, Glen A
2017-11-07
Ultrasonography is a useful technique for studying muscle contractions in vivo; however, larger muscles like vastus lateralis (VL) may be difficult to visualise with smaller, commonly used transducers. Fascicle length is often estimated using linear trigonometry to extrapolate fascicle length to regions where the fascicle is not visible. However, this approach has not been compared to measurements made with a larger field of view for dynamic muscle contractions. Here we compared two different single-transducer extrapolation methods for measuring VL muscle fascicle length to a direct measurement made using two synchronised, in-series transducers. The first method used pennation angle and muscle thickness to extrapolate fascicle length outside the image (extrapolate method). The second method determined fascicle length based on the extrapolated intercept between a fascicle and the aponeurosis (intercept method). Nine participants performed maximal effort, isometric, knee extension contractions on a dynamometer at 10° increments from 50 to 100° of knee flexion. Fascicle length and torque were simultaneously recorded for offline analysis. The dual transducer method showed similar patterns of fascicle length change (overall mean coefficient of multiple correlation was 0.76 and 0.71 compared to the extrapolate and intercept methods, respectively), but reached different absolute lengths during the contractions. This had the effect of producing force-length curves of the same shape, but each curve was shifted in terms of absolute length. We concluded that dual transducers are beneficial for studies that examine absolute fascicle lengths, whereas either of the single transducer methods may produce similar results for normalised length changes and repeated-measures experimental designs.
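A minimal sketch of the linear-trigonometry extrapolation referred to above (the extrapolate method): the hidden portion of the fascicle is reconstructed from muscle thickness and pennation angle. The function name, arguments, and numbers are illustrative assumptions, not the authors' analysis code.

```python
# Extend a visible fascicle segment to the far aponeurosis using pennation
# angle and the muscle depth not covered by the image.
import numpy as np

def fascicle_length_mm(visible_len_mm, visible_depth_mm, thickness_mm, pennation_deg):
    """Visible fascicle segment plus the linearly extrapolated hidden segment."""
    theta = np.radians(pennation_deg)
    missing_depth = thickness_mm - visible_depth_mm  # depth outside the image
    return visible_len_mm + missing_depth / np.sin(theta)

print(f"{fascicle_length_mm(65.0, 12.0, 22.0, 15.0):.1f} mm")
```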
NASA Astrophysics Data System (ADS)
Kittel, Christoph; Lang, Charlotte; Agosta, Cécile; Prignon, Maxime; Fettweis, Xavier; Erpicum, Michel
2016-04-01
This study presents surface mass balance (SMB) results at 5 km resolution from the MAR regional climate model over the Greenland ice sheet. Here, we use the latest MAR version (v3.6), in which the land-ice module (SISVAT), using a high-resolution (5 km) grid for surface variables, is fully coupled with the MAR atmospheric module running at a lower resolution of 10 km. This online downscaling technique makes it possible to correct MAR near-surface temperature and humidity with an elevation-based gradient before forcing SISVAT. The 10 km precipitation is not corrected. Corrections are strongest over the ablation zone, where the topography varies most. The model was forced by ERA-Interim between 1979 and 2014. We will show the advantages of using an online SMB downscaling technique with respect to an offline downscaling extrapolation based on local SMB vertical gradients. Results at 5 km show better agreement with the PROMICE surface mass balance database than the extrapolated 10 km MAR SMB results.
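A minimal sketch of the elevation-gradient correction underlying such downscaling, under the assumption of a constant lapse rate; MAR computes its gradients internally, and the arrays below are invented for illustration.

```python
# Correct coarse-grid near-surface temperature to fine-grid elevations with a
# prescribed vertical gradient before forcing a land-surface module.
import numpy as np

LAPSE_RATE = -6.5e-3  # K per m, assumed constant here for illustration

def downscale_temperature(t_coarse, z_coarse, z_fine):
    """Shift coarse-grid temperature to fine-grid elevations."""
    return t_coarse + LAPSE_RATE * (z_fine - z_coarse)

t10 = np.array([[268.0, 265.5], [270.2, 267.1]])       # K, coarse cells
z10 = np.array([[1200.0, 1500.0], [900.0, 1300.0]])    # m, coarse elevations
z5 = z10 + np.array([[80.0, -120.0], [40.0, 150.0]])   # m, fine elevations
print(downscale_temperature(t10, z10, z5))
```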
Planar Laser-Induced Iodine Fluorescence Measurements in Rarefied Hypersonic Flow
NASA Technical Reports Server (NTRS)
Cecil, Eric; McDaniel, James C.
2005-01-01
A planar laser-induced fluorescence (PLIF) technique is discussed and applied to measurement of time-averaged values of velocity and temperature in an I(sub 2)-seeded N(sub 2) hypersonic free jet facility. Using this technique, a low temperature, non-reacting, hypersonic flow over a simplified model of a reaction control system (RCS) was investigated. Data are presented of rarefied Mach 12 flow over a sharp leading edge flat plate at zero incidence, both with and without an interacting jet issuing from a nozzle built into the plate. The velocity profile in the boundary layer on the plate was resolved. The slip velocity along the plate, extrapolated from the velocity profile data, varied from nearly 100% down to 10% of the freestream value. These measurements are compared with results of a DSMC solution. The velocity variation along the centerline of a jet issuing from the plate was measured and found to match closely with the correlation of Ashkenas and Sherman. The velocity variation in the oblique shock terminating the jet was resolved sufficiently to measure the shock wave thickness.
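A minimal sketch of how a slip velocity can be extracted by extrapolating a measured near-wall velocity profile to the wall, as done above with the boundary-layer data; the sample heights and velocities are synthetic, not the paper's PLIF measurements.

```python
# Linear extrapolation of near-wall velocity samples to y = 0 to estimate slip.
import numpy as np

y_mm = np.array([0.5, 1.0, 1.5, 2.0])          # heights of near-wall samples
u_ms = np.array([310.0, 420.0, 510.0, 585.0])  # measured velocities, synthetic

slope, intercept = np.polyfit(y_mm, u_ms, 1)   # straight-line fit of the profile
print(f"slip velocity ~ {intercept:.0f} m/s")  # extrapolated value at the wall
```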
NASA Astrophysics Data System (ADS)
Aydın, Talat
2015-09-01
ESR (electron spin resonance) techniques were applied to the detection of irradiation and estimation of the original dose in radiation-processed egg powders. The un-irradiated (control) egg powders showed a single resonance line centered at g=2.0086±0.0005, 2.0081±0.0005 and 2.0082±0.0005 (native signal) for yolk, white and whole egg, respectively. Irradiation induced at least one additional intense singlet overlapping the control signal and caused a significant increase in signal intensity without any change in spectral pattern. Responses of egg powders to different gamma radiation doses in the range 0-10 kGy were examined. The stability of the radiation-induced ESR signal of irradiated egg powders was investigated over a storage period of about 5 months. Additive re-irradiation of the egg powders produces a reproducible dose-response function, which can be used to assess the initial dose by back-extrapolation. The additive dose method gives an estimate of the original dose within ±12% at the end of the 720 h storage period.
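A minimal sketch of the additive-dose back-extrapolation described above: signal versus added dose is fitted with a straight line, and the original dose is the magnitude of the negative dose-axis intercept. All numbers are synthetic.

```python
# Additive-dose method: fit signal vs added dose, back-extrapolate to zero signal.
import numpy as np

added_dose_kGy = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
signal = np.array([3.1, 4.0, 5.2, 7.1, 9.2])   # arbitrary ESR amplitude units

slope, intercept = np.polyfit(added_dose_kGy, signal, 1)
original_dose = intercept / slope               # = |negative x-intercept|
print(f"estimated original dose ~ {original_dose:.1f} kGy")
```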
Analysing 21cm signal with artificial neural network
NASA Astrophysics Data System (ADS)
Shimabukuro, Hayato; Semelin, Benoit
2018-05-01
The 21cm signal at the epoch of reionization (EoR) should be observed within the next decade. We expect the cosmic 21cm signal at the EoR to provide both cosmological and astrophysical information. In order to extract fruitful information from observational data, we need to develop an inversion method. For such a method, we introduce the artificial neural network (ANN), one of the machine learning techniques. We apply the ANN to the inversion problem of constraining astrophysical parameters from the 21cm power spectrum. We train the architecture of the neural network with 70 training datasets and apply it to 54 test datasets with different parameter values. We find that the quality of the parameter reconstruction depends on the sensitivity of the power spectrum to the different parameter sets at a given redshift, and that the accuracy of the reconstruction is improved by increasing the number of given redshifts. We conclude that the ANN is a viable inversion method whose main strength is that it requires only a sparse extrapolation of the parameter space and thus should be usable with full simulations.
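A minimal sketch of this kind of ANN inversion, with scikit-learn's MLPRegressor standing in for whatever network implementation the authors used, and synthetic spectra standing in for the simulated 21cm power spectra.

```python
# Train a small neural network to map power-spectrum values to parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_train, n_k = 70, 20
params = rng.uniform(0.0, 1.0, size=(n_train, 3))   # 3 astrophysical parameters
spectra = params @ rng.normal(size=(3, n_k)) + 0.01 * rng.normal(size=(n_train, n_k))

ann = MLPRegressor(hidden_layer_sizes=(40,), max_iter=5000, random_state=0)
ann.fit(spectra, params)                            # spectrum -> parameters

print(ann.predict(spectra[:1]))                     # recovered parameter set
```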
NASA Astrophysics Data System (ADS)
Lee, Y.; Bescond, M.; Logoteta, D.; Cavassilas, N.; Lannoo, M.; Luisier, M.
2018-05-01
We propose an efficient method to quantum mechanically treat anharmonic interactions in the atomistic nonequilibrium Green's function simulation of phonon transport. We demonstrate that the so-called lowest-order approximation, implemented through a rescaling technique and analytically continued by means of the Padé approximants, can be used to accurately model third-order anharmonic effects. Although the paper focuses on a specific self-energy, the method is applicable to a very wide class of physical interactions. We apply this approach to the simulation of anharmonic phonon transport in realistic Si and Ge nanowires with uniform or discontinuous cross sections. The effect of increasing the temperature above 300 K is also investigated. In all the considered cases, we are able to obtain a good agreement with the routinely adopted self-consistent Born approximation, at a remarkably lower computational cost. In the more complicated case of high temperatures (≫300 K), we find that the first-order Richardson extrapolation applied to the sequence of [N-1/N] Padé approximants results in a significant acceleration of the convergence.
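For the acceleration step, here is a minimal sketch of first-order Richardson extrapolation applied to a generic slowly converging sequence, the role it plays above for the sequence of Padé approximants; the test sequence is an illustrative assumption.

```python
# First-order Richardson acceleration: with an error term ~ c/n, the
# combination n*s[n] - (n-1)*s[n-1] cancels the leading error.
import numpy as np

def richardson_first_order(s):
    s = np.asarray(s)
    n = np.arange(1, len(s))
    return (n + 1) * s[1:] - n * s[:-1]

s = [1.0 + 1.0 / n for n in range(1, 8)]   # converges to 1 like 1/n
print(richardson_first_order(s))           # hits 1.0 immediately
```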
Measurements of the Absorption by Auditorium Seating—A Model Study
NASA Astrophysics Data System (ADS)
Barron, M.; Coleman, S.
2001-01-01
One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect of row-to-row spacing on the absorption, as well as absorption by small numbers of seating rows, was also investigated with model seats.
Application of the Weibull extrapolation to 137Cs geochronology in Tokyo Bay and Ise Bay, Japan.
Lu, Xueqiang
2004-01-01
Considerable doubt surrounds the nature of the processes by which 137Cs is deposited in marine sediments, so that 137Cs geochronology cannot always be applied reliably. Based on extrapolation with the Weibull distribution, the maximum concentration of 137Cs, derived from asymptotic values of the cumulative specific inventory, was used to re-establish the 137Cs geochronology instead of the original 137Cs profiles. The corresponding dating results for cores in Tokyo Bay and Ise Bay, Japan, obtained with this new method agree much more closely with those calculated from the 210Pb method than do those from the previous approach.
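A minimal sketch of the Weibull extrapolation idea: a Weibull-type growth curve is fitted to the cumulative specific inventory down-core, and its asymptote estimates the total (maximum) inventory. Depths, inventories, and starting values are synthetic.

```python
# Fit a Weibull cumulative curve and read off its asymptote.
import numpy as np
from scipy.optimize import curve_fit

def weibull_cumulative(z, total, lam, k):
    return total * (1.0 - np.exp(-(z / lam) ** k))

depth_cm = np.array([2, 5, 10, 15, 20, 30, 40])
cum_inventory = np.array([0.8, 2.1, 4.0, 5.2, 5.9, 6.4, 6.55])  # kBq/m2, synthetic

popt, _ = curve_fit(weibull_cumulative, depth_cm, cum_inventory, p0=(7.0, 10.0, 1.5))
print(f"asymptotic total inventory ~ {popt[0]:.2f} kBq/m2")
```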
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.
X-ray surface dose measurements using TLD extrapolation.
Kron, T; Elliot, A; Wong, T; Showell, G; Clubb, B; Metcalfe, P
1993-01-01
Surface dose measurements in therapeutic x-ray beams are of importance in determining the dose to the skin of patients undergoing radiotherapy. Measurements were performed in the 6-MV beam of a medical linear accelerator with LiF thermoluminescence dosimeters (TLD) using a solid water phantom. TLD chips (3.17 x 3.17 mm2 in area) of three different thicknesses (0.230, 0.099, and 0.038 g/cm2) were used to extrapolate dose readings to an infinitesimally thin layer of LiF. This surface dose was measured for field sizes ranging from 1 x 1 cm2 to 40 x 40 cm2. The surface dose relative to maximum dose was found to be 10.0% for a field size of 5 x 5 cm2, 16.3% for 10 x 10 cm2, and 26.9% for 20 x 20 cm2. Using a 6-mm Perspex block tray in the beam increased the surface dose in these fields to 10.7%, 17.7%, and 34.2%, respectively. Due to the small size of the TLD chips, TLD extrapolation is also applicable to intracavity and exit dose determinations. The technique, used for in vivo dosimetry, could provide clinicians with information about the build-up of dose up to 1-mm depth in addition to an extrapolated surface dose measurement.
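A minimal sketch of the TLD extrapolation itself: readings from the three chip thicknesses are fitted linearly and evaluated at zero thickness. The readings below are synthetic, not the measured values quoted above.

```python
# Zero-thickness extrapolation of TLD readings to estimate surface dose.
import numpy as np

thickness = np.array([0.230, 0.099, 0.038])   # g/cm2, the chip thicknesses used
relative_dose = np.array([34.0, 24.0, 19.0])  # % of Dmax, synthetic readings

slope, intercept = np.polyfit(thickness, relative_dose, 1)
print(f"extrapolated surface dose ~ {intercept:.1f}% of Dmax")
```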
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, beside interpolation, were the determination of the extrapolation performance of an ANN model developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of inputs and prediction error in the presence of values out of the range of training. The applied ANN is a polynomial neural network (PNN), which performs embedded selection of the most important inputs during learning and provides a model in the form of linear and non-linear polynomial functions that can then be used for a detailed analysis of the significance of inputs. The available dataset of 1912 monitoring records for 17 water quality parameters was split into a "regular" subset that contains normally distributed, low-variability data and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R2 = 0.82), but it was not robust in extrapolation (R2 = 0.63). The analysis of the extrapolation results showed that the prediction errors are correlated with the significance of inputs. Namely, out-of-training-range values of inputs with low importance do not significantly affect the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that the DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD.
This study focuses on the application of electrochemical approaches to drinking water copper corrosion problems. Applying electrochemical approaches combined with copper solubility measurements, and solid surface analysis approaches were discussed. Tafel extrapolation and Electro...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Latychevskaia, Tatiana; Fink, Hans-Werner
Previously reported crystalline structures obtained by an iterative phase retrieval reconstruction of their diffraction patterns seem to be free from displaying any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal, including its atomic defects, can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure that does not rely on prior information about the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to the anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to reveal formerly invisible Bragg peaks. Such an extrapolated wave-field pattern leads to enhanced spatial resolution in the reconstruction.
Guided wave tomography in anisotropic media using recursive extrapolation operators
NASA Astrophysics Data System (ADS)
Volker, Arno
2018-04-01
Guided wave tomography is an advanced technology for quantitative wall thickness mapping to image wall loss due to corrosion or erosion. An inversion approach is used to match the measured phase (time) at a specific frequency to a model. The accuracy of the model determines the sizing accuracy. Particularly for seam-welded pipes there is a measurable amount of anisotropy. Moreover, for small defects a ray-tracing based modelling approach is no longer accurate. Both issues are solved by applying a recursive wave field extrapolation operator assuming vertical transverse anisotropy. The inversion scheme is extended by estimating not only the wall loss profile but also the anisotropy, local material changes and transducer ring alignment errors. This makes the approach more robust. The approach is demonstrated experimentally on different defect sizes, and a comparison is made between this new approach and an isotropic ray-tracing approach. An example is given in Fig. 1 for a 75 mm wide, 5 mm deep defect. The wave field extrapolation based tomography clearly provides superior results.
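A minimal sketch of one recursive extrapolation step of the phase-shift type, in an isotropic, constant-velocity medium; the paper's operators additionally account for anisotropy and varying wall thickness, so this is only the structural skeleton, and all parameters are illustrative.

```python
# Recursive phase-shift wavefield extrapolation of a monochromatic field.
import numpy as np

def extrapolate(field, dy, dx, freq_hz, velocity):
    """March a 1-D transverse field slice forward by dx."""
    ky = 2 * np.pi * np.fft.fftfreq(field.size, d=dy)
    k = 2 * np.pi * freq_hz / velocity
    kx = np.emath.sqrt(k ** 2 - ky ** 2)         # complex for evanescent modes
    return np.fft.ifft(np.fft.fft(field) * np.exp(1j * kx * dx))

field = np.zeros(256, dtype=complex)
field[128] = 1.0                                  # point source
for _ in range(50):                               # recursive application
    field = extrapolate(field, dy=0.005, dx=0.01, freq_hz=5e4, velocity=3200.0)
print(f"spread of the field: {np.abs(field).std():.3e}")
```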
Buryak, Ilya; Lokshtanov, Sergei; Vigasin, Andrey
2012-09-21
The present work aims at ab initio characterization of the integrated intensity temperature variation of collision-induced absorption (CIA) in N(2)-H(2)(D(2)). Global fits of potential energy surface (PES) and induced dipole moment surface (IDS) were made on the basis of CCSD(T) (coupled cluster with single and double and perturbative triple excitations) calculations with aug-cc-pV(T,Q)Z basis sets. Basis set superposition error correction and extrapolation to complete basis set (CBS) limit techniques were applied to both energy and dipole moment. Classical second cross virial coefficient calculations accounting for the first quantum correction were employed to prove the quality of the obtained PES. The CIA temperature dependence was found in satisfactory agreement with available experimental data.
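A minimal sketch of a two-point complete-basis-set extrapolation of the kind applied above, assuming the standard X^-3 error law for correlation energies with aug-cc-pVTZ/QZ cardinal numbers X = 3, 4; the input energies are synthetic.

```python
# Two-point CBS extrapolation: E_X = E_CBS + A / X**3, eliminate A.
def cbs_two_point(e_tz, e_qz, x=3, y=4):
    return (e_qz * y**3 - e_tz * x**3) / (y**3 - x**3)

e_tz, e_qz = -0.31542, -0.32017   # hartree, illustrative correlation energies
print(f"E_CBS ~ {cbs_two_point(e_tz, e_qz):.5f} hartree")
```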
NASA Technical Reports Server (NTRS)
Zapata, R. N.; Humphris, R. R.; Henderson, K. C.
1975-01-01
The unique design and operational characteristics of a prototype magnetic suspension and balance facility which utilizes superconductor technology are described and discussed from the point of view of scalability to large sizes. The successful experimental demonstration of the feasibility of this new magnetic suspension concept, developed at the University of Virginia, together with the success of the cryogenic wind-tunnel concept developed at Langley Research Center, appears to have finally opened the way to clean-tunnel, high-Re aerodynamic testing. Results of calculations corresponding to a two-step design extrapolation from the observed performance of the prototype magnetic suspension system to a system compatible with the projected cryogenic transonic research tunnel are presented to give an order-of-magnitude estimate of expected performance characteristics. Research areas where progress should lead to improved design and performance of large facilities are discussed.
Image enhancement by non-linear extrapolation in frequency space
NASA Technical Reports Server (NTRS)
Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)
1998-01-01
An input image is enhanced to include spatial frequency components having frequencies higher than those in the input image. To this end, an edge map is generated from the input image using a high band-pass filtering technique. An enhanced map is subsequently generated from the edge map, with the enhanced map having spatial frequencies exceeding an initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
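A cartoon of the idea in code, not a reimplementation of the patented method: a Laplacian high-pass supplies the edge map, and clipping, a non-linear, sign-preserving operation, generates spatial frequencies beyond the original band limit before the map is added back.

```python
# Non-linear extrapolation in frequency space, sketched with a clipped edge map.
import numpy as np

def sharpen(image, strength=0.5):
    # simple 3x3 Laplacian high-pass as the edge map
    padded = np.pad(image, 1, mode="edge")
    edges = (4 * image - padded[:-2, 1:-1] - padded[2:, 1:-1]
             - padded[1:-1, :-2] - padded[1:-1, 2:])
    # clipping is non-linear, so it creates harmonics (higher spatial
    # frequencies) while keeping the sign, i.e. the phase, of each edge
    clipped = np.clip(edges, -strength, strength)
    return image + clipped

img = np.fromfunction(lambda i, j: ((i // 8 + j // 8) % 2).astype(float), (64, 64))
out = sharpen(img)
print(out.min(), out.max())
```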
Supersonic civil airplane study and design: Performance and sonic boom
NASA Technical Reports Server (NTRS)
Cheung, Samson
1995-01-01
Since aircraft configuration plays an important role in aerodynamic performance and sonic boom shape, the configuration of the next-generation supersonic civil transport has to be tailored to meet high aerodynamic performance and low sonic boom requirements. Computational fluid dynamics (CFD) can be used to design airplanes to meet these dual objectives. The work and results in this report support NASA's High Speed Research Program (HSRP). CFD tools and techniques have been developed for general use in sonic boom propagation studies and aerodynamic design. Parallel to the research effort on sonic boom extrapolation, CFD flow solvers have been coupled with a numerical optimization tool to form a design package for aircraft configuration. This CFD optimization package has been applied to configuration design on a low-boom concept and an oblique all-wing concept. A nonlinear unconstrained optimizer for the Parallel Virtual Machine has been developed for aerodynamic design and study.
Error Mitigation for Short-Depth Quantum Circuits
NASA Astrophysics Data System (ADS)
Temme, Kristan; Bravyi, Sergey; Gambetta, Jay M.
2017-11-01
Two schemes are presented that mitigate the effect of errors and decoherence in short-depth quantum circuits. The size of the circuits for which these techniques can be applied is limited by the rate at which errors are introduced into the computation. Near-term applications of early quantum devices, such as quantum simulations, rely on accurate estimates of expectation values to become relevant. Decoherence and gate errors lead to wrong estimates of the expectation values of observables used to evaluate the noisy circuit. The two schemes we discuss are deliberately simple and require no additional qubit resources, so as to be as practically relevant as possible in current experiments. The first method, extrapolation to the zero-noise limit, cancels successive powers of the noise perturbations by an application of Richardson's deferred approach to the limit. The second method cancels errors by resampling randomized circuits according to a quasiprobability distribution.
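A minimal sketch of the first scheme, extrapolation to the zero-noise limit: expectation values measured at stretched noise scales are fitted with a polynomial (equivalent to Richardson extrapolation through three points) and evaluated at zero. The expectation values below are synthetic stand-ins for measured data.

```python
# Zero-noise extrapolation from expectation values at stretched noise levels.
import numpy as np

noise_scale = np.array([1.0, 2.0, 3.0])
expectation = np.array([0.742, 0.551, 0.412])   # noisy <O> values, synthetic

coeffs = np.polyfit(noise_scale, expectation, deg=2)  # exact through 3 points
print(f"mitigated expectation value ~ {np.polyval(coeffs, 0.0):.3f}")
```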
NASA Astrophysics Data System (ADS)
Deng, Chengbin; Wu, Changshan
2013-12-01
Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
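A minimal sketch of the least-squares endmember derivation described above: with sample abundances A and observed spectra X related by X ≈ A·E, the endmember matrix E follows from a least-squares solution and can then be reused to unmix new pixels. All arrays are synthetic.

```python
# LSS endmember estimation from known-abundance samples, then unmixing.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_endmembers, n_bands = 200, 3, 6
A = rng.dirichlet(np.ones(n_endmembers), size=n_samples)   # known abundances
E_true = rng.uniform(0.05, 0.6, size=(n_endmembers, n_bands))
X = A @ E_true + 0.005 * rng.normal(size=(n_samples, n_bands))

E_est, *_ = np.linalg.lstsq(A, X, rcond=None)              # endmember spectra
# unconstrained unmixing of a new pixel with the derived endmembers
pixel = 0.6 * E_true[0] + 0.4 * E_true[2]
abund, *_ = np.linalg.lstsq(E_est.T, pixel, rcond=None)
print(np.round(abund, 2))                                  # ~ [0.6, 0.0, 0.4]
```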
Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre
2016-01-15
Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially at the trace concentrations found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (carbamazepine, diclofenac, caffeine and acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models when extrapolated from high (mg L(-1)) to low concentrations (ng L(-1)) was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) proved weak when used for adsorption isotherms at low concentrations. From these results, however, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard deviation emerged as the best combination, enabling the extrapolation of adsorption capacities by orders of magnitude.
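A minimal sketch of the winning combination reported above: a Langmuir-Freundlich isotherm fitted by minimizing Marquardt's percent standard deviation (here without the conventional factor of 100) and then evaluated far below the calibration range. Data and starting values are synthetic.

```python
# Fit a Langmuir-Freundlich isotherm with an MPSD objective, then extrapolate.
import numpy as np
from scipy.optimize import minimize

def langmuir_freundlich(c, qm, b, n):
    return qm * (b * c) ** n / (1.0 + (b * c) ** n)

c = np.logspace(0, 2, 8)                       # mg/L, "high" calibration range
q = langmuir_freundlich(c, 120.0, 0.05, 0.8) * (
    1 + 0.03 * np.random.default_rng(2).normal(size=8))

def mpsd(p):
    resid = (q - langmuir_freundlich(c, *p)) / q
    return np.sqrt(np.sum(resid ** 2) / (len(q) - len(p)))

fit = minimize(mpsd, x0=[100.0, 0.1, 1.0], method="Nelder-Mead")
print(langmuir_freundlich(1e-6, *fit.x))       # extrapolated uptake at ng/L level
```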
Lorenz, Alyson; Dhingra, Radhika; Chang, Howard H; Bisanzio, Donal; Liu, Yang; Remais, Justin V
2014-01-01
Extrapolating landscape regression models for use in assessing vector-borne disease risk and other applications requires thoughtful evaluation of fundamental model choice issues. To examine implications of such choices, an analysis was conducted to explore the extent to which disparate landscape models agree in their epidemiological and entomological risk predictions when extrapolated to new regions. Agreement between six literature-drawn landscape models was examined by comparing predicted county-level distributions of either Lyme disease or Ixodes scapularis vector using Spearman ranked correlation. AUC analyses and multinomial logistic regression were used to assess the ability of these extrapolated landscape models to predict observed national data. Three models based on measures of vegetation, habitat patch characteristics, and herbaceous landcover emerged as effective predictors of observed disease and vector distribution. An ensemble model containing these three models improved precision and predictive ability over individual models. A priori assessment of qualitative model characteristics effectively identified models that subsequently emerged as better predictors in quantitative analysis. Both a methodology for quantitative model comparison and a checklist for qualitative assessment of candidate models for extrapolation are provided; both tools aim to improve collaboration between those producing models and those interested in applying them to new areas and research questions.
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced. A picture is taken of each slice. Finally, geometric information is determined manually from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, induces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely suppress the prototyping step, obtaining an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g. shapes, curvature, sharp edges, etc.), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enable accurate analysis.
Objective analysis of tidal fields in the Atlantic and Indian Oceans
NASA Technical Reports Server (NTRS)
Sanchez, B. V.; Rao, D. B.; Steenrod, S. D.
1986-01-01
An objective analysis technique has been developed to extrapolate tidal amplitudes and phases over entire ocean basins using existing gauge data and the altimetric measurements which are now beginning to be provided by satellite oceanography. The technique was previously tested in the Lake Superior basin. The method has now been developed and applied in the Atlantic-Indian ocean basins using a 6 deg x 6 deg grid to test its essential features. The functions used in the interpolation are the eigenfunctions of the velocity potential (Proudman functions) which are computed numerically from a knowledge of the basin's bottom topography, the horizontal plan form and the necessary boundary conditions. These functions are characteristic of the particular basin. The gravitational normal modes of the basin are computed as part of the investigation, they are used to obtain the theoretical forced solutions for the tidal constituents, the latter provide the simulated data for the testing of the method and serve as a guide in choosing the most energetic modes for the objective analysis. The results of the objective analysis of the M2 and K1 tidal constituents indicate the possibility of recovering the tidal signal with a degree of accuracy well within the error bounds of present day satellite techniques.
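A one-dimensional toy version of the objective-analysis step: amplitudes sampled at sparse "gauge" locations are fitted with a truncated modal basis (sine modes standing in here for the Proudman functions, which would come from the basin topography), and the fit reconstructs the field everywhere. All arrays are synthetic.

```python
# Least-squares fit of a truncated modal basis to sparse gauge samples.
import numpy as np

rng = np.random.default_rng(8)
x = np.linspace(0.0, 1.0, 200)                        # basin coordinate (1-D toy)
basis = np.array([np.sin((m + 1) * np.pi * x) for m in range(5)]).T
field_true = basis @ np.array([1.0, 0.4, 0.0, 0.2, 0.0])

gauges = rng.choice(200, size=25, replace=False)       # sparse gauge locations
coeff, *_ = np.linalg.lstsq(basis[gauges], field_true[gauges], rcond=None)
recovered = basis @ coeff
print(f"max reconstruction error: {np.abs(recovered - field_true).max():.2e}")
```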
Simulation of Spiral Slot Antennas on Composite Platforms
NASA Technical Reports Server (NTRS)
Volakis, John L.
1996-01-01
The project goals, plan and accomplishments up to this point are summarized in the viewgraphs. Among the various accomplishments, the most important have been: the development of the prismatic finite element code for doubly curved platforms and its validation with many different antenna configurations; the design and fabrication of a new slot spiral antenna suitable for automobile cellular, GPS and PCS communications; the investigation and development of various mesh truncation schemes, including the perfectly matched absorber and various fast integral equation methods; and the introduction of a frequency-domain extrapolation technique (AWE) for predicting broadband responses using only a few samples of the response. This report contains several individual reports, most of which have been submitted for publication to refereed journals. For a report on the frequency extrapolation technique, the reader is referred to the UM Radiation Laboratory report. A total of 14 papers have been published or accepted for publication with the full or partial support of this grant. Several more papers are in preparation.
NASA Astrophysics Data System (ADS)
Poirier, Marc; Gagnon, Martin; Tahan, Antoine; Coutu, André; Chamberland-Lauzon, Joël
2017-01-01
In this paper, we present the application of cyclostationary modelling for the extrapolation of short stationary load strain samples measured in situ on hydraulic turbine blades. Long periods of measurements allow for a wide range of fluctuations representative of long-term reality to be considered. However, sampling over short periods limits the dynamic strain fluctuations available for analysis. The purpose of the technique presented here is therefore to generate a representative signal containing proper long term characteristics and expected spectrum starting with a much shorter signal period. The final objective is to obtain a strain history that can be used to estimate long-term fatigue behaviour of hydroelectric turbine runners.
Applying Evolutionary Genetics to Developmental Toxicology and Risk Assessment
Leung, Maxwell C. K.; Procter, Andrew C.; Goldstone, Jared V.; Foox, Jonathan; DeSalle, Robert; Mattingly, Carolyn J.; Siddall, Mark E.; Timme-Laragy, Alicia R.
2018-01-01
Evolutionary thinking continues to challenge our views on health and disease. Yet, there is a communication gap between evolutionary biologists and toxicologists in recognizing the connections among developmental pathways, high-throughput screening, and birth defects in humans. To increase our capability in identifying potential developmental toxicants in humans, we propose to apply evolutionary genetics to improve the experimental design and data interpretation with various in vitro and whole-organism models. We review five molecular systems of stress response and update 18 consensual cell-cell signaling pathways that are the hallmark for early development, organogenesis, and differentiation; and revisit the principles of teratology in light of recent advances in high-throughput screening, big data techniques, and systems toxicology. Multiscale systems modeling plays an integral role in the evolutionary approach to cross-species extrapolation. Phylogenetic analysis and comparative bioinformatics are both valuable tools in identifying and validating the molecular initiating events that account for adverse developmental outcomes in humans. The discordance of susceptibility between test species and humans (ontogeny) reflects their differences in evolutionary history (phylogeny). This synthesis not only can lead to novel applications in developmental toxicity and risk assessment, but also can pave the way for applying an evo-devo perspective to the study of developmental origins of health and disease. PMID:28267574
Simulation of double beta decay in the ''SeXe'' TPC
NASA Astrophysics Data System (ADS)
Mauger, F.
2007-04-01
In 2004, the NEMO collaboration started preliminary studies for a next-generation double beta decay experiment: SuperNEMO. The possibility of using a large gaseous TPC was investigated using simulations and extrapolations from former experiments. In this talk, I report on the reasons why such techniques were not selected in 2004 and led the NEMO collaboration to reuse the techniques implemented in the NEMO3 detector.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 23 2010-07-01 2010-07-01 false Minor uses. 161.60 Section 161.60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) PESTICIDE PROGRAMS DATA REQUIREMENTS... until it is applied to the major use registrations. (3) EPA will accept extrapolations and regional data...
Lunar terrain mapping and relative-roughness analysis
Rowan, Lawrence C.; McCauley, John F.; Holm, Esther A.
1971-01-01
Terrain maps of the equatorial zone (long 70° E.-70° W. and lat 10° N-10° S.) were prepared at scales of 1:2,000,000 and 1:1,000,000 to classify lunar terrain with respect to roughness and to provide a basis for selecting sites for Surveyor and Apollo landings as well as for Ranger and Lunar Orbiter photographs. The techniques that were developed as a result of this effort can be applied to future planetary exploration. By using the best available earth-based observational data and photographs, 1:1,000,000-scale U.S. Geological Survey lunar geologic maps, and U.S. Air Force Aeronautical Chart and Information Center LAC charts, lunar terrain was described by qualitative and quantitative methods and divided into four fundamental classes: maria, terrae, craters, and linear features. Some 35 subdivisions were defined and mapped throughout the equatorial zone, and, in addition, most of the map units were illustrated by photographs. The terrain types were analyzed quantitatively to characterize and order their relative-roughness characteristics. Approximately 150,000 east-west slope measurements made by a photometric technique (photoclinometry) in 51 sample areas indicate that algebraic slope-frequency distributions are Gaussian, and so arithmetic means and standard deviations accurately describe the distribution functions. The algebraic slope-component frequency distributions are particularly useful for rapidly determining relative roughness of terrain. The statistical parameters that best describe relative roughness are the absolute arithmetic mean, the algebraic standard deviation, and the percentage of slope reversal. Statistically derived relative-relief parameters are desirable supplementary measures of relative roughness in the terrae. Extrapolation of relative roughness for the maria was demonstrated using Ranger VII slope-component data and regional maria slope data, as well as the data reported here. It appears that, for some morphologically homogeneous mare areas, relative roughness can be extrapolated to large scales from measurements at small scales.
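A minimal sketch of the roughness statistics named above, computed from a synthetic Gaussian slope sample of the reported size; the scale parameter is invented.

```python
# Relative-roughness statistics from an east-west slope-component sample.
import numpy as np

rng = np.random.default_rng(7)
slopes_deg = rng.normal(loc=0.0, scale=3.5, size=150_000)  # synthetic slopes

abs_mean = np.abs(slopes_deg).mean()                  # absolute arithmetic mean
alg_std = slopes_deg.std()                            # algebraic standard deviation
reversals = (np.sign(slopes_deg[1:]) != np.sign(slopes_deg[:-1])).mean() * 100
print(f"|mean| = {abs_mean:.2f} deg, std = {alg_std:.2f} deg, "
      f"slope reversal = {reversals:.0f}%")
```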
Natural Hazards characterisation in industrial practice
NASA Astrophysics Data System (ADS)
Bernardara, Pietro
2017-04-01
The definition of rare hydroclimatic extremes (annual probabilities of occurrence as low as 10^-4) is of the utmost importance for the design of high-value industrial infrastructures, such as grids, power plants and offshore platforms. Underestimation as well as overestimation of the risk may lead to huge costs (e.g. expensive mid-life works or overdesign) which may even prevent a project from happening. Nevertheless, the uncertainties associated with extrapolation towards these rare frequencies are huge and manifold. They are mainly due to the scarcity of observations, the lack of quality of extreme value records, and the arbitrary choice of the models used for extrapolation. This often puts design engineers in uncomfortable situations when they must choose the design values to use. Providentially, recent progress in earth observation techniques, information technology, historical data collection and weather and ocean modelling is making huge datasets available. A careful use of big datasets of observations and modelled data is leading towards a better understanding of the physics of the underlying phenomena, the complex interactions between them, and thus of the extrapolation of extreme event frequencies. This will move engineering practice from single-site, small-sample application of statistical analysis to a more spatially coherent, physically driven extrapolation of extreme values. A few examples from EDF industrial practice are given to illustrate this progress and its potential impact on design approaches.
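A minimal sketch of the statistical extrapolation at issue: a GEV distribution fitted to annual maxima and read off at the 10^-4 annual exceedance probability. The sample is synthetic, and the width of the uncertainty around such a quantile is precisely the problem the text describes.

```python
# Fit a GEV to annual maxima and evaluate a rare return level.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
annual_maxima = genextreme.rvs(c=-0.1, loc=3.0, scale=0.5, size=60,
                               random_state=rng)

shape, loc, scale = genextreme.fit(annual_maxima)
level = genextreme.ppf(1.0 - 1e-4, shape, loc=loc, scale=scale)
print(f"10,000-year level ~ {level:.2f} (same units as the data)")
```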
Modern and Unconventional Approaches to Karst Hydrogeology
NASA Astrophysics Data System (ADS)
Sukop, M. C.
2017-12-01
Karst hydrogeology is frequently approached from a hydrograph/statistical perspective where precipitation/recharge inputs are converted to output hydrographs and the conversion process reflects the hydrology of the system. Karst catchments show hydrological response to short-term meteorological events and to long-term variation of large-scale atmospheric circulation. Modern approaches to the analysis of these data include, for example, multiresolution wavelet techniques applied to understand relations between karst discharge and climate fields. Much less effort has been directed towards direct simulation of flow fields and transport phenomena in karst settings. This is primarily due to the lack of information on the detailed physical geometry of most karst systems. New mapping, sampling, and modeling techniques are beginning to enable direct simulation of flow and transport. A Conduit Flow Process (CFP) add-on to the USGS ModFlow model became available in 2007. FEFLOW and similar models are able to represent flows in individual conduits. Lattice Boltzmann models have also been applied to flow modeling in karst systems. Regarding quantitative measurement of karst system geometry, at sample scales up to 0.1 m, X-ray computed tomography enables good detection of detailed (sub-millimeter) pore space in karstic rocks. Three-dimensional printing allows reconstruction of fragile high-porosity rocks, and surrogate samples generated this way can then be subjected to laboratory testing. Borehole scales can be accessed with high-resolution (about 0.001 m) Digital Optical Borehole Imaging technologies, which can provide virtual samples more representative of the true nature of karst aquifers than can be obtained from coring. Subsequent extrapolation of such samples can generate three-dimensional models suitable for direct modeling of flow and transport. Finally, new cave mapping techniques are beginning to provide information that can be applied to direct simulation of flow. Owing to flow rates and cave diameters, very high Reynolds number flows may be encountered.
DU Fragment Carcinogenicity: Extrapolation of Findings in Rodents to Man
2004-03-01
With an interdisciplinary team of scientists from U.S. Government Agencies and Universities, we are utilizing zebrafish and fathead minnow to develop techniques for extrapolation of chemical stressor impacts across species, chemicals and endpoints. The linkage of responses acros...
Video error concealment using block matching and frequency selective extrapolation algorithms
NASA Astrophysics Data System (ADS)
P. K., Rajani; Khaparde, Arti
2017-06-01
Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used for applications such as video telephony, video conferencing, TV, DVD, internet video streaming, and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works are based on concealment of video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames were processed with both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% improved PSNR and 94% increased SSIM relative to the Block Matching algorithm.
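A minimal sketch of block-matching error concealment: the intact one-pixel ring around a lost block serves as a template, the best SAD match is searched around the co-located position in the previous frame, and its interior replaces the lost block. Block size, search range, and frames are illustrative assumptions.

```python
# Temporal block-matching concealment of a lost block via SAD on a boundary ring.
import numpy as np

def conceal(prev, cur, top, left, bs=8, search=4):
    """Fill cur[top:top+bs, left:left+bs] from the best-matching block in prev."""
    template = cur[top - 1:top + bs + 1, left - 1:left + bs + 1].copy()
    best, best_sad = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = prev[top - 1 + dy:top + bs + 1 + dy,
                        left - 1 + dx:left + bs + 1 + dx]
            mask = np.ones_like(cand, dtype=bool)
            mask[1:-1, 1:-1] = False               # compare only the intact ring
            sad = np.abs(cand[mask] - template[mask]).sum()
            if sad < best_sad:
                best_sad, best = sad, cand[1:-1, 1:-1]
    cur[top:top + bs, left:left + bs] = best

rng = np.random.default_rng(4)
prev = rng.uniform(0, 255, (64, 64))
cur = prev.copy()
cur[24:32, 24:32] = 0                              # simulate a lost block
conceal(prev, cur, 24, 24)
print(np.abs(cur - prev).max())                    # small residual error
```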
1. Aquatic ecologists use mesocosm experiments to understand mechanisms driving ecological processes. Comparisons across experiments, and extrapolations to larger scales, are complicated by the use of mesocosms with varying dimensions. We conducted a mesocosm experiment over a vo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartlep, T.; Zhao, J.; Kosovichev, A. G.
2013-01-10
The meridional flow in the Sun is an axisymmetric flow that is generally directed poleward at the surface, and is presumed to be of fundamental importance in the generation and transport of magnetic fields. Its true shape and strength, however, are debated. We present a numerical simulation of helioseismic wave propagation in the whole solar interior in the presence of a prescribed, stationary, single-cell, deep meridional circulation serving as synthetic data for helioseismic measurement techniques. A deep-focusing time-distance helioseismology technique is applied to the synthetic data, showing that it can in fact be used to measure the effects of the meridional flow very deep in the solar convection zone. It is shown that the ray approximation that is commonly used for interpretation of helioseismology measurements remains a reasonable approximation even for very long distances between 12° and 42°, corresponding to depths between 52 and 195 Mm. From the measurement noise, we extrapolate that time-resolved observations on the order of a full solar cycle may be needed to probe the flow all the way to the base of the convection zone.
Review of Air Vitiation Effects on Scramjet Ignition and Flameholding Combustion Processes
NASA Technical Reports Server (NTRS)
Pellett, G. L.; Bruno, Claudio; Chinitz, W.
2002-01-01
This paper offers a detailed review and analysis of more than 100 papers on the physics and chemistry of scramjet ignition and flameholding combustion processes, and the known effects of air vitiation on these processes. The paper attempts to explain vitiation effects in terms of known chemical kinetics and flame propagation phenomena. Scaling methodology is also examined, and a highly simplified Damkoehler scaling technique based on OH radical production/destruction is developed to extrapolate ground test results, affected by vitiation, to flight testing conditions. The long term goal of this effort is to help provide effective means for extrapolating ground test data to flight, and thus to reduce the time and expense of both ground and flight testing.
NASA Technical Reports Server (NTRS)
Reagan, John A.; Pilewskie, Peter A.; Scott-Fleming, Ian C.; Herman, Benjamin M.; Ben-David, Avishai
1987-01-01
Techniques for extrapolating earth-based spectral band measurements of directly transmitted solar irradiance to equivalent exoatmospheric signal levels were used to aid in determining system gain settings of the Halogen Occultation Experiment (HALOE) sunsensor being developed for the NASA Upper Atmosphere Research Satellite and for the Stratospheric Aerosol and Gas (SAGE) 2 instrument on the Earth Radiation Budget Satellite. A band transmittance approach was employed for the HALOE sunsensor which has a broad-band channel determined by the spectral responsivity of a silicon detector. A modified Langley plot approach, assuming a square-root law behavior for the water vapor transmittance, was used for the SAGE-2 940 nm water vapor channel.
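A minimal sketch of both plots: the standard Langley extrapolation of log signal against airmass to recover the exoatmospheric level V0, and the modified variant against the square root of airmass used for the water-vapor channel. All measurement values are synthetic.

```python
# Standard and modified Langley plots extrapolated to zero airmass.
import numpy as np

airmass = np.linspace(1.2, 5.0, 12)
v0_true, tau = 1.85, 0.12
signal = v0_true * np.exp(-tau * airmass) * (
    1 + 0.005 * np.random.default_rng(5).normal(size=12))

# standard Langley plot: ln V = ln V0 - tau * m
slope, intercept = np.polyfit(airmass, np.log(signal), 1)
print(f"V0 ~ {np.exp(intercept):.3f}, tau ~ {-slope:.3f}")

# modified Langley plot for a square-root-law band: ln V = ln V0 - k * sqrt(m)
slope_m, intercept_m = np.polyfit(np.sqrt(airmass), np.log(signal), 1)
print(f"modified-plot V0 ~ {np.exp(intercept_m):.3f}")
```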
Borehole geophysics applied to ground-water investigations
Keys, W.S.
1990-01-01
The purpose of this manual is to provide hydrologists, geologists, and others who have the necessary background in hydrogeology with the basic information needed to apply the most useful borehole-geophysical-logging techniques to the solution of problems in ground-water hydrology. Geophysical logs can provide information on the construction of wells and on the character of the rocks and fluids penetrated by those wells, as well as on changes in the character of these factors over time. The response of well logs is caused by petrophysical factors, by the quality, temperature, and pressure of interstitial fluids, and by ground-water flow. Qualitative and quantitative analysis of analog records and computer analysis of digitized logs are used to derive geohydrologic information. This information can then be extrapolated vertically within a well and laterally to other wells using logs. The physical principles by which the mechanical and electronic components of a logging system measure properties of rocks, fluids, and wells, as well as the principles of measurement, must be understood if geophysical logs are to be interpreted correctly. Planning a logging operation involves selecting the equipment and the logs most likely to provide the needed information. Information on well construction and geohydrology is needed to guide this selection. Quality control of logs is an important responsibility of both the equipment operator and the log analyst and requires both calibration and well-site standardization of equipment. Logging techniques that are widely used in ground-water hydrology or that have significant potential for application to this field include spontaneous potential, resistance, resistivity, gamma, gamma spectrometry, gamma-gamma, neutron, acoustic velocity, acoustic televiewer, caliper, and fluid temperature, conductivity, and flow. The following topics are discussed for each of these techniques: principles and instrumentation, calibration and standardization, volume of investigation, extraneous effects, and interpretation and applications.
Correlations between chromatographic parameters and bioactivity predictors of potential herbicides.
Janicka, Małgorzata
2014-08-01
Different liquid chromatography techniques, including reversed-phase liquid chromatography on Purosphere RP-18e, IAM.PC.DD2 and Cosmosil Cholester columns and micellar liquid chromatography with a Purosphere RP-8e column and buffered sodium dodecyl sulfate-acetonitrile as the mobile phase, were applied to study the lipophilic properties of 15 newly synthesized phenoxyacetic and carbamic acid derivatives, which are potential herbicides. The chromatographic lipophilicity descriptors used were the extrapolated parameters log kw and log km and measured log k values. Partitioning lipophilicity descriptors, i.e., log P coefficients in an n-octanol-water system, were computed from the molecular structures of the tested compounds. Bioactivity descriptors, including partition coefficients in a water-plant cuticle system and water-human serum albumin and coefficients for human skin partition and permeation, were calculated in silico by ACD/ADME software using the linear solvation energy relationship of Abraham. Principal component analysis was applied to describe similarities between various chromatographic and partitioning lipophilicities. Highly significant, predictive linear relationships were found between chromatographic parameters and bioactivity descriptors.
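The log kw extrapolation mentioned above is conventionally a straight-line fit of isocratic retention against organic-modifier content (the Soczewinski-Wachtmeister relation, log k = log kw - S*phi). A minimal sketch of that step, using invented retention data rather than values from the paper:

    import numpy as np

    # Hypothetical isocratic retention data for one solute:
    phi = np.array([0.30, 0.40, 0.50, 0.60])    # acetonitrile volume fraction
    log_k = np.array([1.62, 1.18, 0.71, 0.29])  # measured retention factors

    # log k = log kw - S * phi; the intercept at phi = 0 is log kw,
    # the extrapolated retention factor in pure water.
    neg_S, log_kw = np.polyfit(phi, log_k, 1)
    print(f"extrapolated log kw: {log_kw:.2f}")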
Image sharpening for mixed spatial and spectral resolution satellite systems
NASA Technical Reports Server (NTRS)
Hallada, W. A.; Cox, S.
1983-01-01
Two methods of image sharpening (reconstruction) are compared. The first, a spatial filtering technique, extrapolates edge information from a high spatial resolution panchromatic band at 10 meters and adds it to the low spatial resolution narrow spectral bands. The second method, a color normalizing technique, is based on the ability to separate image hue and brightness components in spectral data. Using both techniques, multispectral images are sharpened from 30, 50, 70, and 90 meter resolutions. Error rates are calculated for the two methods and all sharpened resolutions. The results indicate that the color normalizing method is superior to the spatial filtering technique.
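As a rough illustration of the two ideas (not the authors' exact algorithms), a high-pass-filter variant of the spatial technique and a Brovey-style variant of the color-normalizing technique can be sketched as follows; both function names and parameter choices are assumptions for illustration:

    import numpy as np
    from scipy import ndimage

    def hpf_sharpen(pan, band, sigma=2.0):
        """Spatial-filtering idea: extract edge detail from the high-resolution
        panchromatic image and add it to a resampled low-resolution band."""
        detail = pan - ndimage.gaussian_filter(pan, sigma)
        return band + detail

    def brovey_sharpen(pan, bands):
        """Color-normalizing idea: rescale each spectral band so overall
        brightness follows the panchromatic image while hue is preserved."""
        intensity = bands.mean(axis=0) + 1e-6   # avoid divide-by-zero
        return np.stack([b * pan / intensity for b in bands])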
Geophysical evaluation of the Success Dam foundation, Porterville, California
Hunter, L.E.; Powers, M.H.; Haines, S.; Asch, T.; Burton, B.L.; Serafini, D.C.
2006-01-01
Success Dam is a zoned earth-fill embankment located near Porterville, CA. Studies of Success Dam by the recent Dam Safety Assurance Program (DSAP) have demonstrated the potential for seismic instability and large deformation of the dam due to relatively low levels of earthquake shaking. The U.S. Army Corps of Engineers conducted several phases of investigations to determine the properties of the dam and its underlying foundation. Detailed engineering studies have been applied using a large number of analytical techniques to estimate the response of the dam and foundation system when subjected to earthquake loading. Although a large amount of data has been acquired, most are 'point' data from borings, and results have to be extrapolated between the borings. Geophysical techniques were applied to image the subsurface and provide a better understanding of the spatial distribution of key units that potentially affect the stability. Geophysical investigations employing seismic refraction tomography, direct current (DC) resistivity, audio magnetotellurics (AMT), and self-potential (SP) were conducted across the location of the foundation of a new dam proposed to replace the existing one. Depth to bedrock and the occurrence of beds potentially susceptible to liquefaction were the focus of the investigations. Seismic refraction tomography offers a deep investigation of the foundation region and looks at compressional and shear properties of the material, whereas resistivity surveys determine conductivity relationships in the shallow subsurface and can produce a relatively high-resolution image of geological units with different electrical properties. AMT was applied because it has the potential to look considerably deeper than the other methods, is useful for confirming depth to bedrock, and can be useful in identifying deep-seated faults. SP is a passive electrical method that measures the electrical streaming potential in the subsurface that responds to the movement of ground water. SP surveys were conducted at low-pool and high-pool conditions in order to look for evidence of seepage below the existing dam. In this paper, we summarize these techniques, present their results at Success Dam, and discuss the general application of these techniques for investigating dams and their foundations.
NASA Astrophysics Data System (ADS)
Duijster, Arno; van Groenestijn, Gert-Jan; van Neer, Paul; Blacquière, Gerrit; Volker, Arno
2018-04-01
The use of phased arrays is growing in the non-destructive testing industry, and the trend is towards large 2D arrays; however, due to limitations, it is currently not possible to record the signals from all elements, resulting in aliased data. In the past, we have presented a data interpolation scheme `beyond spatial aliasing' to overcome this aliasing. In this paper, we present a different approach: blending and deblending of data. On the hardware side, groups of receivers are blended (grouped) into only a few transmit/recording channels. This allows for transmission and recording with all elements, in a shorter acquisition time and with fewer channels. On the data processing side, this blended data is deblended (separated) by transforming it to a different domain and applying an iterative filtering and thresholding. Two different filtering methods are compared: f-k filtering and wavefield extrapolation filtering. The deblending and filtering methods are demonstrated on simulated experimental data. The wavefield extrapolation filtering proves to outperform f-k filtering. The wavefield extrapolation method can deal with groups of up to 24 receivers, in a phased array of 48 × 48 elements.
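A minimal sketch of deblending by iterative filtering and thresholding, here with an f-k-domain threshold standing in for the paper's filters (array shapes, the shrinking-threshold schedule, and the function name are assumptions for illustration):

    import numpy as np

    def deblend_fk(blended, mask, n_iter=50):
        """Iterative deblending sketch. `blended` is a (time x element) array
        of grouped recordings; `mask` marks which channels were recorded.
        Each pass enforces consistency with the measured data, then keeps
        only the strongest coherent energy in the f-k domain."""
        est = np.zeros_like(blended)
        for i in range(n_iter):
            est = est + (blended - est) * mask             # data consistency
            fk = np.fft.fft2(est)
            tau = np.percentile(np.abs(fk), 98 - 1.9 * i)  # relaxing threshold
            fk[np.abs(fk) < tau] = 0.0
            est = np.fft.ifft2(fk).real
        return est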
The Europeanisation of Educational Leadership: Much Ado about Nothing?
ERIC Educational Resources Information Center
Clarke, Simon; Wildy, Helen
2009-01-01
This introductory article examines the elusive concept of Europeanisation and discusses the implications of this process for educational leadership, especially as it applies to the formation of school leaders. With an eye to Europeanisation, the article also investigates four pertinent themes extrapolated from the scholarly discussion contained in…
NASA Astrophysics Data System (ADS)
Kondo, Yoshiyuki; Suga, Keishi; Hibi, Koki; Okazaki, Toshihiko; Komeno, Toshihiro; Kunugi, Tomoaki; Serizawa, Akimi; Yoneda, Kimitoshi; Arai, Takahiro
2009-02-01
An advanced experimental technique has been developed to simulate two-phase flow behavior in a light water reactor (LWR). The technique combines four methods: (1) use of sulfur-hexafluoride (SF6) gas and ethanol (C2H5OH) liquid at atmospheric temperature and a pressure less than 1.0 MPa, where the fluid properties are similar to those of steam-water in the LWR; (2) generation of bubbles with a sintering tube, which simulates bubble generation on a heated surface in the LWR; (3) measurement of detailed bubble distribution data with a bi-optical probe (BOP); and (4) measurement of liquid velocities with a tracer liquid. This experimental technique provides easy visualization of flows by using a large-scale experimental apparatus, which gives three-dimensional flows, and measurement of detailed spatial distributions of two-phase flow. With this technique, we have carried out experiments simulating two-phase flow behavior in a single-channel geometry, a multi-rod-bundle one, and a horizontal-tube-bundle one on a typical natural circulation reactor system. Those experiments have clarified (a) a flow regime map in a rod bundle in the transition region between bubbly and churn flow, (b) three-dimensional flow behavior in rod bundles where inter-subassembly cross-flow occurs, and (c) bubble-separation behavior with consideration of reactor internal structures. The data have provided analysis models for the natural circulation reactor design with good extrapolation.
Miller, Richard C.; Randers-Pehrson, Gerhard; Geard, Charles R.; Hall, Eric J.; Brenner, David J.
1999-01-01
Domestic, low-level exposure to radon gas is considered a major environmental lung-cancer hazard involving DNA damage to bronchial cells by α particles from radon progeny. At domestic exposure levels, the relevant bronchial cells are very rarely traversed by more than one α particle, whereas at higher radon levels—at which epidemiological studies in uranium miners allow lung-cancer risks to be quantified with reasonable precision—these bronchial cells are frequently exposed to multiple α-particle traversals. Measuring the oncogenic transforming effects of exactly one α particle without the confounding effects of multiple traversals has hitherto been unfeasible, resulting in uncertainty in extrapolations of risk from high to domestic radon levels. A technique to assess the effects of single α particles uses a charged-particle microbeam, which irradiates individual cells or cell nuclei with predefined exact numbers of particles. Although previously too slow to assess the relevant small oncogenic risks, recent improvements in throughput now permit microbeam irradiation of large cell numbers, allowing the first oncogenic risk measurements for the traversal of exactly one α particle through a cell nucleus. Given positive controls to ensure that the dosimetry and biological controls were comparable, the measured oncogenicity from exactly one α particle was significantly lower than for a Poisson-distributed mean of one α particle, implying that cells traversed by multiple α particles contribute most of the risk. If this result applies generally, extrapolation from high-level radon risks (involving cellular traversal by multiple α particles) may overestimate low-level (involving only single α particles) radon risks. PMID:9874764
Simultaneous computation of jet turbulence and noise
NASA Technical Reports Server (NTRS)
Berman, C. H.; Ramos, J. I.
1989-01-01
The existing flow computation methods, wave computation techniques, and theories based on noise source models are reviewed in order to assess the capabilities of numerical techniques to compute jet turbulence noise and understand the physical mechanisms governing it over a range of subsonic and supersonic nozzle exit conditions. In particular, attention is given to (1) methods for extrapolating near field information, obtained from flow computations, to the acoustic far field and (2) the numerical solution of the time-dependent Lilley equation.
Reduction and Analysis of Phosphor Thermography Data With the IHEAT Software Package
NASA Technical Reports Server (NTRS)
Merski, N. Ronald
1998-01-01
Detailed aeroheating information is critical to the successful design of a thermal protection system (TPS) for an aerospace vehicle. This report describes NASA Langley Research Center's (LaRC) two-color relative-intensity phosphor thermography method and the IHEAT software package, which is used for the efficient data reduction and analysis of the phosphor image data. Development of theory is provided for a new weighted two-color relative-intensity fluorescence theory for quantitatively determining surface temperatures on hypersonic wind tunnel models; an improved application of the one-dimensional conduction theory for use in determining global heating mappings; and extrapolation of wind tunnel data to flight surface temperatures. The phosphor methodology at LaRC is presented, including descriptions of phosphor model fabrication, test facilities, and phosphor video acquisition systems. A discussion of the calibration procedures, data reduction, and data analysis is given. Estimates of the total uncertainties (with a 95% confidence level) associated with the phosphor technique are shown to be approximately 8 to 10 percent in Langley's 31-Inch Mach 10 Tunnel and 7 to 10 percent in the 20-Inch Mach 6 Tunnel. A comparison with thin-film measurements using two-inch radius hemispheres shows the phosphor data to be within 7 percent of thin-film measurements and to agree even better with predictions via a LATCH computational fluid dynamics (CFD) solution. Good agreement between phosphor data and LAURA CFD computations on the forebody of a vertical takeoff/vertical lander configuration at four angles of attack is also shown. In addition, a comparison is given between Mach 6 phosphor data and laminar and turbulent solutions generated using the LAURA, GASP, and LATCH CFD codes. Finally, the extrapolation method developed in this report is applied to the X-34 configuration with good agreement between the phosphor extrapolation and LAURA flight surface temperature predictions. The phosphor process outlined in the paper is believed to provide the aerothermodynamic community with a valuable capability for rapidly obtaining (4 to 5 weeks) detailed heating information needed in TPS design.
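The one-dimensional conduction step, converting a measured surface-temperature history into heat flux, is commonly done with a semi-infinite-slab formula of the Cook-Felderman type; the sketch below shows that general form, not IHEAT's actual implementation:

    import numpy as np

    def heat_flux(t, T, rho_c_k):
        """Semi-infinite 1-D conduction (Cook-Felderman form): heat flux at
        each sample time from the surface-temperature history. `rho_c_k` is
        the substrate density * specific heat * conductivity product."""
        q = np.zeros_like(T)
        coef = 2.0 * np.sqrt(rho_c_k / np.pi)
        for n in range(1, len(t)):
            steps = (T[1:n + 1] - T[:n]) / (
                np.sqrt(t[n] - t[:n]) + np.sqrt(t[n] - t[1:n + 1]))
            q[n] = coef * steps.sum()
        return q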
Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C
2017-07-01
Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (KOW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physicochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log KOW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.
NASA Technical Reports Server (NTRS)
Margaria, Tiziana (Inventor); Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor); Steffen, Bernard (Inventor)
2010-01-01
Systems, methods and apparatus are provided through which, in some embodiments, automata learning algorithms and techniques are implemented to generate a more complete set of scenarios for requirements-based programming. More specifically, a CSP-based, syntax-oriented model construction, which requires the support of a theorem prover, is complemented by model extrapolation via automata learning. This may support the systematic completion of the requirements, which are by nature partial and focus on the most prominent scenarios. This may generalize requirement skeletons by extrapolation and may indicate, by way of automatically generated traces, where the requirement specification is too loose and additional information is required.
Chlorine (Cl2), a high-production volume air pollutant, is an irritant of interest to homeland security. Risk assessment approaches to establish egress or re-entry levels typically use an assumption based on Haber's Rule and apply a concentration times duration ("C x t") adjustme...
ERIC Educational Resources Information Center
Keyton, Joann
A study assessed the validity of applying the Spitzberg and Cupach dyadic model of communication competence to small group interaction. Twenty-four students, in five task-oriented work groups, completed questionnaires concerning self-competence, alter competence, interaction effectiveness, and other group members' interaction appropriateness. They…
Relationship Between Magnitude of Applied Spin Recovery Moment and Ensuing Number of Recovery Turns
NASA Technical Reports Server (NTRS)
Anglin, Ernie L.
1967-01-01
An analytical study has been made to investigate the relationship between the magnitude of the applied spin recovery moment and the ensuing number of turns made during recovery from a developed spin, with a view toward determining how to interpolate or extrapolate spin recovery results when determining the amount of control required for a satisfactory recovery. Five configurations were used which are considered to be representative of modern airplanes: a delta-wing fighter, a stub-wing research vehicle, a boost-glide configuration, a supersonic trainer, and a sweptback-wing fighter. The results obtained indicate that there is a direct relationship between the magnitude of the applied spin recovery moments and the ensuing number of recovery turns and that this relationship can be expressed in either simple multiplicative or exponential form. Either type of relationship was adequate for interpolating or extrapolating to predict the turns required for recovery with satisfactory accuracy for configurations having relatively steady recovery motions. Any two recoveries from the same developed spin condition can be used as a basis for the predicted results, provided these recoveries are obtained with the same ratio of recovery control deflections. No such predictive method can be expected to give satisfactory results for oscillatory recoveries.
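Assuming the simple multiplicative form described above, two recoveries suffice to pin down both constants and extrapolate; a sketch with invented numbers:

    import numpy as np

    # Two recoveries from the same developed spin, same ratio of recovery
    # control deflections (values are illustrative only):
    M = np.array([1.0, 2.0])   # applied recovery moment, arbitrary units
    N = np.array([3.6, 1.9])   # ensuing recovery turns

    # Multiplicative form N = a * M**b is linear in log-log coordinates.
    b, log_a = np.polyfit(np.log(M), np.log(N), 1)
    a = np.exp(log_a)

    # Extrapolate: moment needed for a 1.5-turn recovery.
    print(f"required moment: {(1.5 / a) ** (1.0 / b):.2f}")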
Unveiling saturation effects from nuclear structure function measurements at the EIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquet, Cyrille; Moldes, Manoel R.; Zurita, Pia
Here, we analyze the possibility of extracting a clear signal of non-linear parton saturation effects from future measurements of nuclear structure functions at the Electron–Ion Collider (EIC), in the small-x region. Our approach consists in generating pseudodata for electron-gold collisions, using the running-coupling Balitsky–Kovchegov evolution equation, and in assessing the compatibility of these saturated pseudodata with existing sets of nuclear parton distribution functions (nPDFs), extrapolated if necessary. The level of disagreement between the two is quantified by applying a Bayesian reweighting technique. This allows us to infer the parton distributions needed in order to describe the pseudodata, which we find quite different from the actual distributions, especially for sea quarks and gluons. This tension suggests that, should saturation effects impact the future nuclear structure function data as predicted, a successful refitting of the nPDFs may not be achievable, which would unambiguously signal the presence of non-linear effects.
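Bayesian reweighting of PDF replicas typically uses Giele-Keller-style weights; the exact prescription in the paper may differ, but the commonly used form can be sketched as:

    import numpy as np

    def reweight(chi2, n_data):
        """Weight each nPDF replica by its chi-square against the new
        pseudodata. A small effective replica number N_eff quantifies the
        tension between the pseudodata and the prior PDF set."""
        n_rep = len(chi2)
        logw = 0.5 * (n_data - 1) * np.log(chi2) - 0.5 * chi2
        w = np.exp(logw - logw.max())
        w *= n_rep / w.sum()                    # normalize: sum(w) = n_rep
        nz = w[w > 1e-12]
        n_eff = np.exp(np.sum(nz * np.log(n_rep / nz)) / n_rep)
        return w, n_eff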
An application of data mining in district heating substations for improving energy performance
NASA Astrophysics Data System (ADS)
Xue, Puning; Zhou, Zhigang; Chen, Xin; Liu, Jing
2017-11-01
Automatic meter reading systems are capable of collecting and storing a huge volume of district heating (DH) data. However, the data obtained are rarely fully utilized. Data mining is a promising technology for discovering potentially interesting knowledge from vast data. This paper applies data mining methods to analyse the massive data for improving the energy performance of a DH substation. The technical approach contains three steps: data selection, cluster analysis, and association rule mining (ARM). Two heating seasons of data from a substation are used for the case study. Cluster analysis identifies six distinct heating patterns based on the primary heat of the substation. ARM reveals that secondary pressure difference and secondary flow rate have a strong correlation. Using the discovered rules, a fault occurring in the remote flow meter installed at the secondary network is detected accurately. The application demonstrates that data mining techniques can effectively extrapolate potentially useful knowledge to better understand substation operation strategies and improve substation energy performance.
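The cluster-analysis step can be reproduced in outline with any standard implementation; a minimal sketch with synthetic stand-in data (six clusters, as found for the substation studied; all values invented):

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for two heating seasons of hourly primary-heat
    # profiles: one row per day, one column per hour.
    rng = np.random.default_rng(0)
    profiles = rng.normal(500.0, 50.0, size=(240, 24))

    km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(profiles)
    print("days per heating pattern:", np.bincount(km.labels_))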
High-level ab initio enthalpies of formation of 2,5-dimethylfuran, 2-methylfuran, and furan.
Feller, David; Simmie, John M
2012-11-29
A high-level ab initio thermochemical technique, known as the Feller-Peterson-Dixon method, is used to calculate the total atomization energies and hence the enthalpies of formation of 2,5-dimethylfuran, 2-methylfuran, and furan itself as a means of rationalizing significant discrepancies in the literature. In order to avoid extremely large standard coupled cluster theory calculations, the explicitly correlated CCSD(T)-F12b variation was used with basis sets up to cc-pVQZ-F12. After extrapolating to the complete basis set limit and applying corrections for core/valence, scalar relativistic, and higher order effects, the final ΔfH°(298.15 K) values, with the available experimental values in parentheses, are furan -34.8 ± 3 (-34.7 ± 0.8), 2-methylfuran -80.3 ± 5 (-76.4 ± 1.2), and 2,5-dimethylfuran -124.6 ± 6 (-128.1 ± 1.1) kJ mol⁻¹. The theoretical results exhibit a compelling internal consistency.
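Complete-basis-set extrapolation of correlation energies is commonly done with a two-point inverse-cube formula; the paper's protocol combines several such extrapolations and corrections, but the basic step can be sketched as (energies below are illustrative):

    def cbs_two_point(e_n, e_m, n, m):
        """Two-point CBS extrapolation assuming E(x) = E_CBS + A / x**3,
        for correlation energies at cardinal numbers n < m."""
        return (m**3 * e_m - n**3 * e_n) / (m**3 - n**3)

    # Illustrative triple- and quadruple-zeta correlation energies (hartree):
    print(cbs_two_point(-0.4512, -0.4627, 3, 4))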
Ionospheric gravity wave measurements with the USU dynasonde
NASA Technical Reports Server (NTRS)
Berkey, Frank T.; Deng, Jun Yuan
1992-01-01
A method for the measurement of ionospheric gravity waves (GW) using the USU Dynasonde is outlined. This method consists of a series of individual procedures, which include functions for data acquisition, adaptive scaling, polarization discrimination, interpolation and extrapolation, digital filtering, windowing, spectrum analysis, GW detection, and graphics display. Concepts of system theory are applied to treat the ionosphere as a system. An adaptive ionogram scaling method was developed for automatically extracting ionogram echo traces from noisy raw sounding data. The method uses the well-known least mean square (LMS) algorithm to form a stochastic optimal estimate of the echo trace, which is then used to control a moving window. The window tracks the echo trace, simultaneously eliminating the noise and interference. Experimental results show that the proposed method functions as designed. Case studies which extract GW from ionosonde measurements were carried out using the techniques described. Geophysically significant events were detected, and the resultant processed results are illustrated graphically. The method was also developed with real-time implementation in mind.
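The LMS update at the heart of such adaptive scaling fits in a few lines; the sketch below shows the stochastic-gradient estimate of the echo trace, with the window control and the rest of the pipeline omitted (tap count and step size are assumptions):

    import numpy as np

    def lms_estimate(x, d, taps=8, mu=0.01):
        """LMS adaptive filter: estimate the desired signal `d` (noisy echo
        trace) from input `x`, adapting tap weights by stochastic gradient."""
        w = np.zeros(taps)
        y = np.zeros(len(d))
        for n in range(taps, len(x)):
            u = x[n - taps:n][::-1]        # most recent samples first
            y[n] = w @ u                   # current trace estimate
            w += mu * (d[n] - y[n]) * u    # LMS weight update
        return y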
Fast Coherent Differential Imaging for Exoplanet Imaging
NASA Astrophysics Data System (ADS)
Gerard, Benjamin; Marois, Christian; Galicher, Raphael; Veran, Jean-Pierre; Macintosh, B.; Guyon, O.; Lozi, J.; Pathak, P.; Sahoo, A.
2018-06-01
Direct detection and detailed characterization of exoplanets using extreme adaptive optics (ExAO) is a key science goal of future extremely large telescopes and space observatories. However, quasi-static wavefront errors will limit the sensitivity of this endeavor. Additional limitations for ground-based telescopes arise from residual AO-corrected atmospheric wavefront errors, generating short-lived aberrations that will average into a halo over a long exposure, also limiting the sensitivity of exoplanet detection. We develop the framework for a solution to both of these problems using the self-coherent camera (SCC), to be applied to ground-based telescopes, called the Fast Atmospheric SCC Technique (FAST). Simulations show that for typical ExAO targets the FAST approach can reach a raw contrast about 100 times better than what is currently achieved with ExAO instruments when extrapolated to an hour of observing time, illustrating that the sensitivity improvement from this method could play an essential role in the future ground-based detection and characterization of lower-mass, colder exoplanets.
Janardhanan, Sathyanarayana; Wang, Martha O; Fisher, John P
2012-08-01
The use of pluripotent stem cell populations for bone tissue regeneration provides many opportunities and challenges within the bone tissue engineering field. For example, coculture strategies have been utilized to mimic embryological development of bone tissue, and particularly the critical intercellular signaling pathways. While research in bone biology over the last 20 years has expanded our understanding of these intercellular signaling pathways, we still do not fully understand the impact of the system's physical characteristics (orientation, geometry, and morphology). This review of coculture literature delineates the various forms of coculture systems and their respective outcomes when applied to bone tissue engineering. To understand fully the key differences between the different coculture methods, we must appreciate the underlying paradigms of physiological interactions. Recent advances have enabled us to extrapolate these techniques to larger dimensions and higher geometric resolutions. Finally, the contributions of bioreactors, micropatterned biomaterials, and biomaterial interaction platforms are evaluated to give a sense of the sophistication established by a combination of these concepts with coculture systems.
1995 second modulator-klystron workshop: A modulator-klystron workshop for future linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-03-01
This second workshop examined the present state of modulator design and attempted an extrapolation for future electron-positron linear colliders. These colliders are currently viewed as multikilometer-long accelerators consisting of a thousand or more RF sources with 500 to 1,000, or more, pulsed power systems. The workshop opened with two introductory talks that presented the current approaches to designing these linear colliders, the anticipated RF sources, and the design constraints for pulse power. The cost of main AC power is a major economic consideration for a future collider; consequently, the workshop investigated efficient modulator designs: techniques that effectively apply the art of power conversion from the AC mains to the RF output, and specifically designs that generate output pulses with very fast rise times as compared to the flattop. There were six sessions that involved one or more presentations based on problems specific to the design and production of thousands of modulator-klystron stations, followed by discussion and debate on the material.
NASA Astrophysics Data System (ADS)
Dalmasse, K.; Pariat, É.; Valori, G.; Jing, J.; Démoulin, P.
2018-01-01
In the solar corona, magnetic helicity slowly and continuously accumulates in response to plasma flows tangential to the photosphere and magnetic flux emergence through it. Analyzing this transfer of magnetic helicity is key for identifying its role in the dynamics of active regions (ARs). The connectivity-based helicity flux density method was recently developed for studying the 2D and 3D transfer of magnetic helicity in ARs. The method takes into account the 3D nature of magnetic helicity by explicitly using knowledge of the magnetic field connectivity, which allows it to faithfully track the photospheric flux of magnetic helicity. Because the magnetic field is not measured in the solar corona, modeled 3D solutions obtained from force-free magnetic field extrapolations must be used to derive the magnetic connectivity. Different extrapolation methods can lead to markedly different 3D magnetic field connectivities, thus questioning the reliability of the connectivity-based approach in observational applications. We address these concerns by applying this method to the isolated and internally complex AR 11158 with different magnetic field extrapolation models. We show that the connectivity-based calculations are robust to different extrapolation methods, in particular with regard to identifying regions of opposite magnetic helicity flux. We conclude that the connectivity-based approach can be reliably used in observational analyses and is a promising tool for studying the transfer of magnetic helicity in ARs and relating it to their flaring activity.
Use of Physiologically Based Pharmacokinetic (PBPK) Models ...
EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. This report describes and demonstrates techniques necessary to extrapolate and incorporate in vitro derived metabolic rate constants in PBPK models. It also includes two case study examples designed to demonstrate the applicability of such data for health risk assessment and addresses the quantification, extrapolation, and interpretation of advanced biochemical information on human interindividual variability of chemical metabolism for risk assessment application. It comprises five chapters; topics and results covered in the first four chapters have been published in the peer-reviewed scientific literature. Topics covered include: data quality objectives; the experimental framework; required data; and two example case studies that develop and incorporate in vitro metabolic rate constants in PBPK models designed to quantify human interindividual variability to better direct the choice of uncertainty factors for health risk assessment. This report is intended to serve as a reference document for risk assessors to use when quantifying, extrapolating, and interpreting advanced biochemical information about human interindividual variability of chemical metabolism.
Setty, O H; Shrager, R I; Bunow, B; Reynafarje, B; Lehninger, A L; Hendler, R W
1986-01-01
The problem of obtaining very early ratios for the H+/O stoichiometry accompanying succinate oxidation by rat liver mitochondria was attacked using new techniques for direct measurement rather than extrapolations based on data obtained after mixing and the recovery of the electrode from the initial injection of O2. Respiration was quickly initiated in a thoroughly mixed O2-containing suspension of mitochondria under a CO atmosphere by photolysis of the CO-cytochrome c oxidase complex. Fast-responding O2 and pH electrodes were used to collect data every 10 ms. The response time for each electrode was experimentally measured in each experiment, and suitable corrections for electrode relaxations were made. With uncorrected data obtained after 0.8 s, the extrapolation back to zero time on the basis of single-exponential curve fitting confirmed values close to 8.0, as previously reported (Costa et al., 1984). The data directly obtained, however, indicate an initial burst in the H+/O ratio that peaked to values of approximately 20 to 30 prior to 50 ms and which was no longer evident after 0.3 s. Newer information and considerations that place all extrapolation methods in question are discussed. PMID:3019443
NASA Technical Reports Server (NTRS)
Zapata, R. N.; Humphris, R. R.; Henderson, K. C.
1974-01-01
Based on the premises that (1) magnetic suspension techniques can play a useful role in large-scale aerodynamic testing and (2) superconductor technology offers the only practical hope for building large-scale magnetic suspensions, an all-superconductor three-component magnetic suspension and balance facility was built as a prototype and was tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities have been made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.
NASA Technical Reports Server (NTRS)
Zapata, R. N.; Humphris, R. R.; Henderson, K. C.
1975-01-01
Based on the premises that magnetic suspension techniques can play a useful role in large scale aerodynamic testing, and that superconductor technology offers the only practical hope for building large scale magnetic suspensions, an all-superconductor 3-component magnetic suspension and balance facility was built as a prototype and tested successfully. Quantitative extrapolations of design and performance characteristics of this prototype system to larger systems compatible with existing and planned high Reynolds number facilities at Langley Research Center were made and show that this experimental technique should be particularly attractive when used in conjunction with large cryogenic wind tunnels.
Ahn, Jae-Hyun; Park, Young-Je; Kim, Wonkook; Lee, Boram
2016-12-26
Estimation of the aerosol multiple-scattering reflectance is an important part of the atmospheric correction procedure in satellite ocean color data processing. Most commonly, two near-infrared (NIR) bands are used to estimate the aerosol optical properties. Previously, the operational Geostationary Ocean Color Imager (GOCI) atmospheric correction scheme relied on a single-scattering reflectance ratio (SSE), which was developed for the processing of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, to determine the appropriate aerosol models and their aerosol optical thicknesses. The scheme computes reflectance contributions (weighting factors) of candidate aerosol models in the single-scattering domain and then spectrally extrapolates the single-scattering aerosol reflectance from the NIR to visible (VIS) bands using the SSE. However, it directly applies the weight value to all wavelengths in the multiple-scattering domain, although the multiple-scattering aerosol reflectance has a non-linear relationship with the single-scattering reflectance and the inter-band relationship of multiple-scattering aerosol reflectances is non-linear. To avoid these issues, we propose an alternative scheme for estimating the aerosol reflectance that uses the spectral relationships in the aerosol multiple-scattering reflectance between different wavelengths (called SRAMS). The process directly calculates the multiple-scattering reflectance contributions in the NIR with no residual errors for selected aerosol models. It then spectrally extrapolates the reflectance contribution from the NIR to visible bands for each selected model using the SRAMS. To assess the performance of the algorithm in terms of errors in the water reflectance at the surface or remote-sensing reflectance retrieval, we compared the SRAMS atmospheric correction results with the SSE atmospheric correction using both simulations and in situ match-ups with the GOCI data. From simulations, the mean errors for bands from 412 to 555 nm were 5.2% for the SRAMS scheme and 11.5% for the SSE scheme in case-I waters. From in situ match-ups, the mean errors were 16.5% for the SRAMS scheme and 17.6% for the SSE scheme in both case-I and case-II waters. Although we applied the SRAMS algorithm to the GOCI, it can be applied to other ocean color sensors that have two NIR wavelengths.
Inferring thermodynamic stability relationship of polymorphs from melting data.
Yu, L
1995-08-01
This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (delta G) between two polymorphs and its temperature slope from mainly the temperatures and heats of melting. This information is then used to estimate delta G, thus relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating delta G to zero gives an estimation of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near the ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as The Heat of Fusion Rule introduced previously on a statistical mechanical basis. This method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.
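In outline, the calculation takes the temperatures and heats of melting of the two forms, builds delta G(T), and extrapolates it to zero. A sketch under the simplest (linear) assumption, with invented melting data:

    from scipy.optimize import brentq

    # Hypothetical melting data: temperatures (K) and heats of melting (J/mol).
    Tm1, dHm1 = 420.0, 31000.0   # polymorph I
    Tm2, dHm2 = 408.0, 27500.0   # polymorph II

    def dG(T):
        """Linear estimate of G(II) - G(I) at T, treating each melting
        entropy dHm/Tm as temperature-independent."""
        return dHm1 * (Tm1 - T) / Tm1 - dHm2 * (Tm2 - T) / Tm2

    T_t = brentq(dG, 250.0, 700.0)   # delta G = 0: (virtual) transition point
    # A T_t above both melting points suggests monotropy; a T_t between
    # ambient and the melting points suggests enantiotropy.
    print(f"estimated transition temperature: {T_t:.0f} K")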
Incorporating contact angles in the surface tension force with the ACES interface curvature scheme
NASA Astrophysics Data System (ADS)
Owkes, Mark
2017-11-01
In simulations of gas-liquid flows interacting with solid boundaries, the contact line dynamics affect the interface motion and flow field through the surface tension force. The surface tension force is directly proportional to the interface curvature, and the problem of accurately imposing a contact angle must be incorporated into the interface curvature calculation. Many commonly used algorithms for computing interface curvatures (e.g., the height function method) require extrapolating the interface, with a defined contact angle, into the solid to allow for the calculation of a curvature near a wall. Extrapolating can be an ill-posed problem, especially in three dimensions or when multiple contact lines are near each other. We have developed an accurate methodology to compute interface curvatures that allows contact angles to be easily incorporated while avoiding extrapolation and the associated challenges. The method, known as Adjustable Curvature Evaluation Scale (ACES), leverages a least squares fit of a polynomial to points computed on the volume-of-fluid (VOF) representation of the gas-liquid interface. The method is tested by simulating canonical test cases and then applied to simulate the injection and motion of water droplets in a channel (relevant to PEM fuel cells).
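The core of such a curvature evaluation is a least-squares polynomial fit through points sampled on the VOF interface. In two dimensions (the actual scheme is more general) this reduces to a few lines; the function below is a sketch, not the ACES implementation:

    import numpy as np

    def curvature_ls(x, y):
        """Least-squares parabola y = a*x**2 + b*x + c through interface
        points in a local frame; curvature at x = 0 is
        kappa = y'' / (1 + y'**2)**1.5. A contact angle can enter as an
        extra constraint point/slope at the wall instead of extrapolating."""
        a, b, _ = np.polyfit(x, y, 2)
        return 2.0 * a / (1.0 + b**2) ** 1.5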
NASA Astrophysics Data System (ADS)
Camplani, M.; Malizia, A.; Gelfusa, M.; Barbato, F.; Antonelli, L.; Poggi, L. A.; Ciparisse, J. F.; Salgado, L.; Richetta, M.; Gaudio, P.
2016-01-01
In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices, and one of the main issues is that dust particles can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, that travels transversely to the flow field direction. In the STARDUST facility, the dust moves in the flow and causes variations of refractive index that can be detected by using a CCD camera. The STARDUST fast camera setup makes it possible to detect and track dust particles moving in the vessel and then to obtain information about the velocity field of the mobilized dust. In particular, the acquired images are processed such that, for each frame, the moving dust particles are detected by applying a background subtraction technique based on the mixture of Gaussians algorithm. The obtained foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles along the experiment. For each particle, a Kalman filter-based tracker is applied; the particle dynamics are described by taking into account position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particle velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.
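The detection front end (mixture-of-Gaussians background subtraction plus morphological cleanup) maps directly onto standard OpenCV calls; the sketch below is illustrative, the file name is hypothetical, and the Kalman tracker initialization is omitted:

    import cv2

    cap = cv2.VideoCapture("stardust_lova.avi")   # hypothetical recording
    mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = mog.apply(frame)                               # foreground mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove speckle
        n, _, stats, centroids = cv2.connectedComponentsWithStats(fg)
        # centroids[1:] are candidate dust particles to feed the per-particle
        # Kalman trackers (position/velocity/acceleration state).
    cap.release()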
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by a bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculations by mathematical formulas (under the normal distribution assumption) for the identical data are also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable that the bootstrap method be applied for sample size calculation from the beginning, and that the same statistical method as used in the subsequent statistical analysis be employed for each bootstrap sample during bootstrap sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
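The resampling loop at the heart of such a bootstrap power (and hence sample size) search, using a rank-based test for non-normal data as the paper recommends, can be sketched as follows; the specific test variant and defaults are assumptions:

    import numpy as np
    from scipy.stats import mannwhitneyu

    def bootstrap_power(pilot_a, pilot_b, n_per_arm, n_boot=2000,
                        alpha=0.05, seed=0):
        """Estimate power for a two-arm design of size `n_per_arm` per group
        by resampling pilot data with replacement and applying the same
        nonparametric (Wilcoxon/Mann-Whitney) test planned for the final
        analysis. Increase `n_per_arm` until the target power is reached."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_boot):
            a = rng.choice(pilot_a, size=n_per_arm, replace=True)
            b = rng.choice(pilot_b, size=n_per_arm, replace=True)
            if mannwhitneyu(a, b).pvalue < alpha:
                hits += 1
        return hits / n_boot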
Wilhelm, Jan; Walz, Michael; Stendel, Melanie; Bagrets, Alexei; Evers, Ferdinand
2013-05-14
We present a modification of the standard electron transport methodology based on the (non-equilibrium) Green's function formalism to efficiently simulate STM-images. The novel feature of this method is that it employs an effective embedding technique that allows us to extrapolate properties of metal substrates with adsorbed molecules from quantum-chemical cluster calculations. To illustrate the potential of this approach, we present an application to STM-images of C58-dimers immobilized on Au(111)-surfaces that is motivated by recent experiments.
Fast, Computer Supported Experimental Determination of Absolute Zero Temperature at School
ERIC Educational Resources Information Center
Bogacz, Bogdan F.; Pedziwiatr, Antoni T.
2014-01-01
A simple and fast experimental method of determining absolute zero temperature is presented. An air gas thermometer coupled with a pressure sensor and the data acquisition system COACH is applied over a wide range of temperatures. By constructing a pressure vs temperature plot for air under constant volume, it is possible to obtain--by extrapolation to zero…
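The extrapolation itself is a one-line linear fit; a sketch with invented gas-thermometer readings:

    import numpy as np

    # Hypothetical constant-volume readings:
    T_C = np.array([5.0, 20.0, 35.0, 50.0, 65.0])     # temperature, deg C
    p = np.array([93.2, 98.3, 103.3, 108.4, 113.4])   # pressure, kPa

    # p is linear in T at constant volume; extrapolating the line to p = 0
    # gives the absolute zero temperature on the Celsius scale.
    slope, intercept = np.polyfit(T_C, p, 1)
    print(f"absolute zero estimate: {-intercept / slope:.0f} deg C")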
The Acceptance of Exceptionality: A Three Dimensional Model.
ERIC Educational Resources Information Center
Martin, Larry L.; Nivens, Maryruth K.
A model extrapolates from E. Kubler-Ross' conception of the stages of grief to apply to parent and family reactions when an exceptionality is identified. A chart lists possible parent feelings and reactions, possible school reactions to the parent in grief, and the child's reactions during each of five stages: denial, rage and anger, bargaining,…
Teaching Turkish as a Foreign Language: Extrapolating from Experimental Psychology
ERIC Educational Resources Information Center
Erdener, Dogu
2017-01-01
Speech perception is beyond the auditory domain and a multimodal process, specifically, an auditory-visual one--we process lip and face movements during speech. In this paper, the findings in cross-language studies of auditory-visual speech perception in the past two decades are interpreted to the applied domain of second language (L2)…
Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian
2014-01-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species’ evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external validation showed that: (1) extrapolations were most reliable for primary food items; (2) several diet categories (“Animal”, “Mammal”, “Invertebrate”, “Plant”, “Seed”, “Fruit”, and “Leaf”) had high proportions of correctly predicted diet ranks; and (3) the potential of correctly extrapolating specific diet categories varied both within and among clades. Global maps of species richness and proportion showed congruence among trophic levels, but also substantial discrepancies between dietary guilds. MammalDIET provides a comprehensive, unique and freely available dataset on diet preferences for all terrestrial mammals worldwide. It enables broad-scale analyses for specific trophic levels and dietary guilds, and a first assessment of trait conservatism in mammalian diet preferences at a global scale. The digitalization, extrapolation and validation procedures could be transferable to other trait data and taxa. PMID:25165528
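A toy version of the hierarchical gap-filling step (genus consensus first, then family), using pandas and invented records rather than the MammalDIET data:

    import pandas as pd

    df = pd.DataFrame({
        "family": ["Felidae", "Felidae", "Felidae", "Sciuridae"],
        "genus":  ["Panthera", "Panthera", "Felis", "Sciurus"],
        "diet":   ["Vertebrate", None, "Vertebrate", None],
    })

    # Fill missing species-level entries from the consensus (mode) at the
    # genus level, then at the family level.
    for level in ["genus", "family"]:
        consensus = df.groupby(level)["diet"].transform(
            lambda s: s.mode().iloc[0] if not s.mode().empty else None)
        df["diet"] = df["diet"].fillna(consensus)
    print(df)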
An efficient approach to imaging underground hydraulic networks
NASA Astrophysics Data System (ADS)
Kumar, Mohi
2012-07-01
To better locate natural resources, treat pollution, and monitor underground networks associated with geothermal plants, nuclear waste repositories, and carbon dioxide sequestration sites, scientists need to be able to accurately characterize and image fluid seepage pathways below ground. With these images, scientists can gain knowledge of soil moisture content, the porosity of geologic formations, concentrations and locations of dissolved pollutants, and the locations of oil fields or buried liquid contaminants. Creating images of the unknown hydraulic environments underfoot is a difficult task that has typically relied on broad extrapolations from characteristics and tests of rock units penetrated by sparsely positioned boreholes. Such methods, however, cannot identify small-scale features and are very expensive to reproduce over a broad area. Further, the techniques through which information is extrapolated rely on clunky and mathematically complex statistical approaches requiring large amounts of computational power.
Characteristics of enhanced-mode AlGaN/GaN MIS HEMTs for millimeter wave applications
NASA Astrophysics Data System (ADS)
Lee, Jong-Min; Ahn, Ho-Kyun; Jung, Hyun-Wook; Shin, Min Jeong; Lim, Jong-Won
2017-09-01
In this paper, an enhanced-mode (E-mode) AlGaN/GaN high electron mobility transistor (HEMT) was developed using a 4-inch GaN HEMT process. We designed and fabricated E-mode HEMTs and characterized device performance. To estimate the possibility of application for millimeter wave applications, we focused on the high frequency performance and power characteristics. To shift the threshold voltage of the HEMTs, we applied an Al2O3 insulator to the gate structure and adopted a gate recess technique. To increase the frequency performance, e-beam lithography was used to define the 0.15 um gate length. To evaluate the dc and high frequency performance, electrical characterization was performed. The threshold voltage was measured to be positive by linear extrapolation from the transfer curve. The device leakage current is comparable to that of a depletion-mode device. The current gain cut-off frequency and the maximum oscillation frequency of the E-mode device with a total gate width of 150 um were 55 GHz and 168 GHz, respectively. To confirm the power performance for mm-wave applications, a load-pull test was performed. A measured power density of 2.32 W/mm was achieved at frequencies of 28 and 30 GHz.
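Threshold-voltage extraction by linear extrapolation is standard practice: fit the tangent to the transfer curve at the point of peak transconductance and take its intercept with zero drain current. A sketch on measured arrays (the function name is ours):

    import numpy as np

    def vth_linear_extrapolation(vgs, ids):
        """Threshold voltage by linear extrapolation: tangent to the
        transfer curve at maximum transconductance, extrapolated to
        Ids = 0. `vgs` and `ids` are 1-D measurement arrays."""
        gm = np.gradient(ids, vgs)          # transconductance
        i = int(np.argmax(gm))              # bias point of peak gm
        return vgs[i] - ids[i] / gm[i]      # tangent-line x-intercept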
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hungate, Bruce; Pett-Ridge, Jennifer; Blazewicz, Steven
In this project, we developed an innovative and ground-breaking technique, quantitative stable isotope probing, which uses density separation of nucleic acids for quantitative measurement. This work is substantial because it advances SIP beyond the qualitative technique that has dominated the field for years. The first methods paper was published in Applied and Environmental Microbiology (Hungate et al. 2015), and this paper describes the mathematical model underlying the quantitative interpretation. A second methods paper (Schwartz et al. 2015) provides a conceptual overview of the method and its application to research problems. A third methods paper was just published (Koch et al. 2018), in which we develop the quantitative model combining sequencing and isotope data to estimate actual rates of microbial growth and death in natural populations. This work has met much enthusiasm in scientific presentations around the world. It has met with equally enthusiastic resistance in the peer-review process, though our record of publication to date argues that people are accepting the merits of the approach. The skepticism and resistance are also potentially signs that this technique is pushing the field forward, albeit with some of the discomfort that accompanies extrapolation. Part of this is a cultural element: the field of microbiology is not accustomed to the assumptions of ecosystem science. Research conducted in this project has pushed the philosophical perspective that major advances can occur when we advocate a sound merger between the traditions of strong inference in microbiology and those of grounded scaling in ecosystem science.
Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka
2017-01-01
Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...
Celestial mechanics during the last two decades
NASA Technical Reports Server (NTRS)
Szebehely, V.
1978-01-01
The unprecedented progress in celestial mechanics (orbital mechanics, astrodynamics, space dynamics) is reviewed from 1957 to date. The engineering, astronomical, and mathematical aspects are synthesized. The measuring and computational techniques developed in parallel with the theoretical advances are outlined. Major unsolved problem areas are listed with proposed approaches for their solution. Extrapolations and predictions of the progress for the future conclude the paper.
Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M
2002-07-21
The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
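The traditional extrapolation procedure that this abstract questions lends itself to a compact numerical illustration. The sketch below (Python, with invented thickness and current values, not INMRI-ENEA data) fits a straight line to the chamber current measured with increasing wall thickness and reads off the zero-wall intercept; the paper's point is that this seemingly natural procedure can be biased by up to about 1%.

```python
# Illustrative sketch of the traditional Kwall procedure: fit ionization
# current vs. added wall thickness and extrapolate linearly to zero wall.
# All numbers are hypothetical example values.
import numpy as np

thickness = np.array([0.5, 1.0, 1.5, 2.0, 2.5])        # wall thickness, g/cm^2 (hypothetical)
current = np.array([98.2, 96.5, 94.9, 93.2, 91.6])     # chamber current, pA (hypothetical)

slope, intercept = np.polyfit(thickness, current, 1)   # linear fit I(t) = slope*t + I0
i_zero_wall = intercept                                # extrapolated current at zero wall
k_wall = i_zero_wall / current[0]                      # correction relative to thinnest wall

print(f"I(0) = {i_zero_wall:.2f} pA, Kwall ≈ {k_wall:.4f}")
```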
Ballante, Flavio; Marshall, Garland R
2016-01-25
Molecular docking is a widely used technique in drug design to predict the binding pose of a candidate compound in a defined therapeutic target. Numerous docking protocols are available, each characterized by different search methods and scoring functions, thus providing variable predictive capability on the same ligand-protein system. To validate a docking protocol, it is necessary to determine a priori its ability to reproduce the experimental binding pose (i.e., by determining the docking accuracy (DA)) in order to select the most appropriate docking procedure and thus estimate the rate of success in docking novel compounds. As common docking programs generally use different root-mean-square deviation (RMSD) formulas, scoring functions, and result formats, it is both difficult and time-consuming to consistently determine and compare their predictive capabilities in order to identify the best protocol to use for the target of interest and to extrapolate the binding poses (i.e., best-docked (BD), best-cluster (BC), and best-fit (BF) poses) when applying a given docking program over thousands/millions of molecules during virtual screening. To reduce this difficulty, two new procedures called Clusterizer and DockAccessor have been developed and implemented for use with some common and "free-for-academics" programs such as AutoDock4, AutoDock4(Zn), AutoDock Vina, DOCK, MpSDockZn, PLANTS, and Surflex-Dock to automatically extrapolate BD, BC, and BF poses as well as to perform consistent cluster and DA analyses. Clusterizer and DockAccessor (code available over the Internet) represent two novel tools to collect computationally determined poses and detect the most predictive docking approach. Herein an application to human lysine deacetylase (hKDAC) inhibitors is illustrated.
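The DA analyses mentioned above ultimately rest on comparing a predicted pose with the crystallographic one by RMSD. As a minimal sketch, assuming identical atom ordering and ignoring the symmetry corrections that real tools such as those named here must handle, a heavy-atom RMSD can be computed as follows (coordinates are invented):

```python
# Plain heavy-atom RMSD between a docked pose and a reference pose,
# assuming the two coordinate arrays list the same atoms in the same order.
import numpy as np

def rmsd(pose_a, pose_b):
    """Root-mean-square deviation between two (N, 3) coordinate arrays."""
    diff = np.asarray(pose_a) - np.asarray(pose_b)
    return np.sqrt((diff ** 2).sum() / len(diff))

docked = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.4, 0.0]]   # hypothetical pose
xtal = [[0.1, -0.1, 0.0], [1.6, 0.1, 0.0], [1.4, 1.5, 0.1]]    # hypothetical reference
print(f"RMSD = {rmsd(docked, xtal):.2f} Å")  # DA analyses commonly use a 2 Å cutoff
```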
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayhurst, Thomas Laine
1980-08-06
Techniques for applying ab-initio calculations to the analysis of atomic spectra are investigated, along with the relationship between the semi-empirical and ab-initio forms of Slater-Condon theory. Slater-Condon theory is reviewed with a focus on the essential features that lead to the effective Hamiltonians associated with the semi-empirical form of the theory. Ab-initio spectroscopic parameters are calculated from wavefunctions obtained via self-consistent field methods, while multi-configuration Hamiltonian matrices are constructed and diagonalized with computer codes written by Robert Cowan of Los Alamos Scientific Laboratory. Group theoretical analysis demonstrates that wavefunctions more general than Slater determinants (i.e., wavefunctions with radial correlations between electrons) lead to essentially the same parameterization of effective Hamiltonians. In the spirit of this analysis, a strategy is developed for adjusting ab-initio values of the spectroscopic parameters, reproducing parameters obtained by fitting the corresponding effective Hamiltonian. Secondary parameters are used to "screen" the calculated (primary) spectroscopic parameters, their values determined by least squares. Extrapolations of the secondary parameters determined from analyzed spectra are attempted to correct calculations for atoms and ions without experimental levels. The adjustment strategy and extrapolations are tested on the K I sequence from K0+ through Fe7+, fitting to experimental levels for V4+ and Cr5+; unobserved levels and spectra are predicted for several members of the sequence. A related problem is also discussed: energy levels of the uranium hexahalide complexes, (UX6)2- for X = F, Cl, Br, and I, are fit to an effective Hamiltonian (the f2 configuration in Oh symmetry) with corrections proposed by Brian Judd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Shuai, E-mail: shuai.wei@asu.edu; Department of Materials Science and Engineering, University of Arizona, Tucson, Arizona 85712; Lucas, Pierre
2015-07-21
A striking anomaly in the viscosity of Te85Ge15 alloys noted by Greer and coworkers from the work of Neumann et al. is reminiscent of the equally striking comparison of liquid tellurium and water anomalies documented long ago by Kanno et al. In view of the power laws that are used to fit the data on water, we analyze the data on Te85Ge15 using the Speedy-Angell power-law form, and find a good account with a singularity Ts only 25 K below the eutectic temperature. However, the heat capacity data in this case are not diverging, but instead exhibit a sharp maximum like that observed in fast cooling in the Molinero-Moore model of water. Applying the Adam-Gibbs viscosity equation to these calorimetric data, we find that there must be a fragile-to-strong liquid transition at the heat capacity peak temperature, and then predict the 'strong' liquid course of the viscosity down to Tg at 406 K (403.6 K at 20 K min-1 in this study). Since crystallization can be avoided by moderately fast cooling in this case, we can check the validity of the extrapolation by making a direct measurement of fragility at Tg, using differential scanning calorimetric techniques, and then comparing with the value from the extrapolated viscosity at Tg. The agreement is encouraging, and prompts discussion of relations between water and phase change alloy anomalies.
Tests and applications of nonlinear force-free field extrapolations in spherical geometry
NASA Astrophysics Data System (ADS)
Guo, Y.; Ding, M. D.
2013-07-01
We test a nonlinear force-free field (NLFFF) optimization code in spherical geometry with an analytical solution from Low and Lou. The potential field source surface (PFSS) model serves as the initial and boundary conditions where observed data are not available. The analytical solution can be well recovered if the boundary and initial conditions are properly handled. Next, we discuss the preprocessing procedure for the noisy bottom boundary data, and find that preprocessing is necessary for NLFFF extrapolations when we use the observed photospheric magnetic field as the bottom boundary. Finally, we apply the NLFFF model to a solar area where four active regions interact with each other. An M8.7 flare occurred in one of the active regions. NLFFF modeling in spherical geometry simultaneously constructs the small- and large-scale magnetic field configurations better than the PFSS model does.
NASA Astrophysics Data System (ADS)
Zhou, Shiqi
2017-11-01
A new scheme is put forward to determine the wetting temperature (Tw) by adapting the arc-length continuation algorithm to classical density functional theory (DFT), as used originally by Frink and Salinger. Its advantages can be summarized in four points: (i) the new scheme is applicable whether wetting occurs near a planar or a non-planar surface, whereas a zero-contact-angle method is considered applicable only to a perfectly flat solid surface, as demonstrated previously and in this work, and is essentially not fit for non-planar surfaces. (ii) The new scheme is free of the uncertainty that plagues the pre-wetting extrapolation method, which originates from the unattainability of an infinitely thick film in the theoretical calculation. (iii) The new scheme can be similarly and easily applied to extreme instances characterized by lower temperatures and/or stronger surface attraction force fields, which cannot be handled by the pre-wetting extrapolation method because the pre-wetting transition becomes mixed with many layering transitions and the various surface phase transitions are difficult to distinguish. (iv) The new scheme still works in instances wherein the wetting transition occurs close to the bulk critical temperature, a case the pre-wetting extrapolation method cannot manage at all, because near the bulk critical temperature the pre-wetting region is extremely narrow and not enough pre-wetting data are available for the extrapolation procedure.
NMR measurement of bitumen at different temperatures.
Yang, Zheng; Hirasaki, George J
2008-06-01
Heavy oil (bitumen) is characterized by its high viscosity and density, which is a major obstacle to both well logging and recovery. Because information is lost for T2 relaxation times shorter than the echo spacing (TE), and because of interference from the water signal, estimation of heavy oil properties from NMR T2 measurements is usually problematic. In this work, a new method has been developed to overcome the echo spacing restriction of the NMR spectrometer in applications to heavy oil (bitumen). A free induction decay (FID) measurement supplemented the start of the CPMG sequence. Constrained by the initial magnetization (M0) estimated from the FID, and assuming a log-normal distribution for bitumen, the corrected T2 relaxation time of the bitumen sample can be obtained from the interpretation of the CPMG data. This new method successfully overcomes the TE restriction of the NMR spectrometer and is nearly independent of the TE applied in the measurement. The method was applied to measurements at elevated temperatures (8-90 degrees C). Due to the significant signal loss within the dead time of the FID, the directly extrapolated M0 of bitumen at relatively low temperatures (<60 degrees C) was found to be underestimated. However, because of the remarkably lowered viscosity, the extrapolated M0 of bitumen above 60 degrees C can reasonably be taken as the true value. In this manner, based on the extrapolation at higher temperatures (60 degrees C or above), the M0 value of bitumen at lower temperatures (<60 degrees C) can be corrected by Curie's law. Consequently, some important petrophysical properties of bitumen, such as hydrogen index (HI), fluid content, and viscosity, were evaluated using the corrected T2.
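The Curie's-law correction described in this abstract is simple enough to state as code. A minimal sketch, assuming only that the equilibrium magnetization scales as M0 ∝ 1/T with absolute temperature; the numbers are hypothetical, not the paper's data:

```python
# Curie's-law rescaling: take an M0 value trusted at a high temperature
# (where dead-time signal loss is negligible) and rescale it to a lower
# temperature using M0 * T = constant.
def m0_curie_corrected(m0_ref, t_ref_c, t_target_c):
    """Rescale equilibrium magnetization from t_ref to t_target (Celsius)."""
    t_ref_k = t_ref_c + 273.15
    t_target_k = t_target_c + 273.15
    return m0_ref * t_ref_k / t_target_k

m0_80c = 1.00                                     # normalized M0 at 80 C (hypothetical)
print(m0_curie_corrected(m0_80c, 80.0, 30.0))     # corrected M0 at 30 C (> 1, as expected)
```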
High throughput method to characterize acid-base properties of insoluble drug candidates in water.
Benito, D E; Acquaviva, A; Castells, C B; Gagliardi, L G
2018-05-30
In drug design, experimental characterization of the acidic groups in candidate molecules is one of the more important steps prior to in-vivo studies. Potentiometry combined with Yasuda-Shedlovsky extrapolation is one of the more important strategies for studying drug candidates with low solubility in water, although it requires a large number of sequences to determine pKa values at different solvent-mixture compositions and, finally, to obtain the pKa in water (wwpKa) by extrapolation. We have recently proposed a method, requiring only two sequences of additions, to study the effect of organic solvent content in liquid chromatography mobile phases on the acidity of the buffer compounds usually dissolved in them, over wide ranges of compositions. In this work we propose to apply this method to determine the thermodynamic wwpKa of drug candidates with low solubility in pure water. Using methanol/water solvent mixtures we studied six pharmaceutical drugs at 25 °C. Four of them (ibuprofen, salicylic acid, atenolol, and labetalol) were chosen as members of the carboxylic acid, amine, and phenol families. Since these compounds have known wwpKa values, they were used to validate the procedure, to assess the accuracy of Yasuda-Shedlovsky and other empirical models in fitting the observed behavior, and to obtain wwpKa by extrapolation. Finally, the method is applied to determine the unknown thermodynamic wwpKa values of two pharmaceutical drugs: atorvastatin calcium and the two dissociation constants of ethambutol. The procedure proved to be simple, very fast, and accurate in all of the studied cases.
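A Yasuda-Shedlovsky extrapolation of the kind validated here is, at its core, a straight-line fit of (psKa + log[H2O]) against the reciprocal dielectric constant of the mixture, evaluated at pure water. A hedged sketch with invented inputs (epsilon ≈ 78.3 and [H2O] ≈ 55.5 M for pure water are standard values):

```python
# Yasuda-Shedlovsky extrapolation sketch: fit (psKa + log[H2O]) against
# 1/epsilon for several methanol/water mixtures, then evaluate at water.
# All mixture values are invented for illustration.
import math
import numpy as np

inv_eps = np.array([1/70.0, 1/65.0, 1/60.0, 1/55.0])  # 1/epsilon (hypothetical mixtures)
ps_ka = np.array([4.85, 5.10, 5.38, 5.69])            # apparent pKa in each mixture (hypothetical)
log_h2o = np.array([1.70, 1.66, 1.61, 1.56])          # log10 of molar water conc. (hypothetical)

b, a = np.polyfit(inv_eps, ps_ka + log_h2o, 1)        # (psKa + log[H2O]) = a + b/eps
pka_water = a + b / 78.3 - math.log10(55.5)           # back to the aqueous pKa
print(f"extrapolated aqueous pKa ≈ {pka_water:.2f}")
```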
Quantitative dose-response assessment of inhalation exposures to toxic air pollutants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarabek, A.M.; Foureman, G.L.; Gift, J.S.
1997-12-31
Implementation of the 1990 Clean Air Act Amendments, including evaluation of residual risks, requires accurate human health risk estimates for both acute and chronic inhalation exposures to toxic air pollutants. The U.S. Environmental Protection Agency's National Center for Environmental Assessment, Research Triangle Park, NC, has a research program that addresses several key issues for development of improved quantitative approaches to dose-response assessment. This paper describes three projects underway in the program. Project A develops a Bayesian approach that bases dose-response estimates on combined data sets and expresses these estimates as probability density functions. A categorical regression model has been developed that allows for the combination of all available acute data, with toxicity expressed as severity categories (e.g., mild, moderate, severe), and with both duration and concentration as governing factors. Project C encompasses two refinements to the uncertainty factors (UFs) often applied to extrapolate dose-response estimates from laboratory animal data to human equivalent concentrations. Traditional UFs have been based on analyses of oral administration and may not be appropriate for extrapolation of inhalation exposures. Refinement of the UF applied to account for the use of subchronic rather than chronic data was based on an analysis of data from inhalation exposures (Project C-1). Mathematical modeling using the BMD approach was used to calculate the dose-response estimates for comparison between the subchronic and chronic data, so that the estimates were not subject to dose-spacing or sample-size variability. The second UF that was refined for extrapolation of inhalation data was the adjustment for the use of a LOAEL rather than a NOAEL (Project C-2).
Sabin, Keith; Zhao, Jinkou; Garcia Calleja, Jesus Maria; Sheng, Yaou; Arias Garcia, Sonia; Reinisch, Annette; Komatsu, Ryuichi
2016-01-01
Objective: To assess the availability and quality of population size estimations of female sex workers (FSW), men who have sex with men (MSM), people who inject drugs (PWID), and transgender women. Methods: Size estimation data since 2010 were retrieved from global reporting databases, Global Fund grant application documents, and the peer-reviewed and grey literature. Overall quality and availability were assessed against a defined set of criteria, including estimation methods, geographic coverage, and extrapolation approaches. Estimates were compositely categorized as 'nationally adequate', 'nationally inadequate but locally adequate', 'documented but inadequate methods', 'undocumented or untimely', and 'no data'. Findings: Of 140 countries assessed, 41 did not report any estimates since 2010. Among 99 countries with at least one estimate, 38 were categorized as having nationally adequate estimates and 30 as having nationally inadequate but locally adequate estimates. Multiplier, capture-recapture, census and enumeration, and programmatic mapping were the most commonly used methods. Most countries relied on only one estimate for a given population, while about half of all reports included national estimates. A variety of approaches were applied to extrapolate from site-level numbers to national estimates in two-thirds of countries. Conclusions: Size estimates for FSW, MSM, PWID, and transgender women are increasingly available but their quality varies widely. The differing approaches present challenges for data use in the design, implementation, and evaluation of programs for these populations in half of the countries assessed. Guidance should be further developed to recommend: a) applying multiple estimation methods; b) estimating size for a minimum number of sites; and c) documenting extrapolation approaches. PMID:27163256
Models of Fate and Transport of Pollutants in Surface Waters
NASA Astrophysics Data System (ADS)
Okome, Gloria Eloho
There is a need to answer the crucial question of "what happens to pollutants in surface waters?" This question must be answered to determine the factors controlling the fate and transport of chemicals and their evolutionary state in surface waters. Monitoring and experimental methods are used to establish the environmental states. These measurements are used with known scientific principles to identify processes and to estimate future environmental conditions. Conceptual and computational models are needed to analyze environmental processes by applying the knowledge gained from experimentation and theory. Usually, a computational framework includes the mathematics and the physics of the phenomenon, along with the measured characteristics, to model pollutant interactions and transport in surface water. However, under certain conditions, the complexity of the actual environment precludes the utilization of these techniques. Pollutants in several forms were chosen for this research: nitrogen (nitrate, nitrite, Kjeldahl nitrogen, and ammonia), phosphorus (orthophosphate and total phosphorus), bacteria (E. coli and fecal coliform), and salts (chloride and sulfate). The objective of this research is to model the fate and transport of these pollutants under the non-ideal conditions of surface water measurements and to develop computational methods to forecast their fate and transport. In an environment of extreme drought such as the Brazos River basin, where small streams flow intermittently, there is added complexity due to the absence of regularly sampled data. The usual modeling techniques are no longer applicable because of sparse measurements in space and time. Still, there is a need to estimate the conditions of the environment from the information that is present. Alternative methods for this estimation must be devised and applied, which is the task of this dissertation. This research devises a forecasting technique based upon sparse data. The method uses the equations of functions that fit the time series data for pollutants at each water quality monitoring station to interpolate and extrapolate the data and to make estimates of present and future pollution levels. The method was applied to data obtained from the Leon River watershed (Indian Creek) and the Navasota River.
A Theorem and its Application to Finite Tampers
DOE R&D Accomplishments Database
Feynman, R. P.
1946-08-15
A theorem is derived which is useful in the analysis of neutron problems in which all neutrons have the same velocity. It is applied to determine extrapolated end-points, the asymptotic amplitude from a point source, and the neutron density at the surface of a medium. Formulas for the effect of finite tampers are derived by its aid, and their accuracy discussed.
Absolute calibration of Doppler coherence imaging velocity images
NASA Astrophysics Data System (ADS)
Samuell, C. M.; Allen, S. L.; Meyer, W. H.; Howard, J.
2017-08-01
A new technique has been developed for absolutely calibrating a Doppler Coherence Imaging Spectroscopy interferometer for measuring plasma ion and neutral velocities. An optical model of the interferometer is used to generate zero-velocity reference images for the plasma spectral line of interest from a calibration source some spectral distance away. Validation of this technique using a tunable diode laser demonstrated an accuracy better than 0.2 km/s over an extrapolation range of 3.5 nm, a two-order-of-magnitude improvement over linear approaches. While a well-characterized and very stable interferometer is required, this technique opens up the possibility of calibrated velocity measurements in difficult viewing geometries and for complex spectral line shapes.
Trajectory tracking and backfitting techniques against theater ballistic missiles
NASA Astrophysics Data System (ADS)
Hutchins, Robert G.; Britt, Patrick T.
1999-10-01
Since the SCUD launches in the Gulf War, theater ballistic missile (TBM) systems have become a growing concern for the US military. Detection, fast track initiation, backfitting for launch point determination, and tracking and engagement during boost phase or shortly after booster cutoff are goals that grow in importance with the proliferation of weapons of mass destruction. This paper focuses on track initiation and backfitting techniques, as well as extending some earlier results on tracking a TBM during boost phase cutoff. Results indicate that Kalman techniques are superior to third order polynomial extrapolations in estimating the launch point, and that some knowledge of missile parameters, especially thrust, is extremely helpful in track initiation.
Tools and techniques for estimating high intensity RF effects
NASA Astrophysics Data System (ADS)
Zacharias, Richard L.; Pennock, Steve T.; Poggio, Andrew J.; Ray, Scott L.
1992-01-01
Tools and techniques for estimating and measuring coupling and component disturbance for avionics and electronic controls are described. A finite-difference time-domain (FD-TD) modeling code, TSAR, used to predict coupling, is described. This code can quickly generate a mesh model to represent the test object. Some recent applications, as well as the advantages and limitations of using such a code, are described. Facilities and techniques for making low-power coupling measurements and for making direct-injection test measurements of device disturbance are also described. Some scaling laws for coupling and device effects are presented. A method for extrapolating these low-power test results to high-power full-system effects is presented.
NASA Technical Reports Server (NTRS)
Jenkins, D. W.
1972-01-01
NASA chose the watershed of Rhode River, a small sub-estuary of the Bay, as a representative test area for intensive studies of remote sensing, the results of which could be extrapolated to other estuarine watersheds around the Bay. A broad program of ecological research was already underway within the watershed, conducted by the Smithsonian Institution's Chesapeake Bay Center for Environmental Studies (CBCES) and cooperating universities. This research program offered a unique opportunity to explore potential applications for remote sensing techniques. This led to a joint NASA-CBCES project with two basic objectives: to evaluate remote sensing data for the interpretation of ecological parameters, and to provide essential data for ongoing research at the CBCES. A third objective, dependent upon realization of the first two, was to extrapolate photointerpretive expertise gained at the Rhode River watershed to other portions of the Chesapeake Bay.
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation theory (MP2) within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete-basis-set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
Dead time corrections using the backward extrapolation method
NASA Astrophysics Data System (ADS)
Gilad, E.; Dubi, C.; Geslot, B.; Blaise, P.; Kolin, A.
2017-05-01
Dead time losses in neutron detection, caused by both detector and electronics dead time, are a highly nonlinear effect, known to introduce strong bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on extrapolating the losses created by increasingly large, artificially imposed dead times on the data backward to zero imposed dead time. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (1-2%) in restoring the corrected count rate.
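The backward-extrapolation idea is easy to demonstrate in simulation. The sketch below is not the authors' implementation: it imposes increasingly large non-paralyzing dead times on a synthetic Poisson pulse train, and since 1/m = 1/n + tau for a non-paralyzing system, a linear fit of 1/CPS against the imposed dead time recovers the true rate at the zero-dead-time intercept:

```python
# Impose artificial non-paralyzing dead times on simulated event timestamps,
# measure the surviving count rate, and extrapolate the losses back to zero.
import numpy as np

rng = np.random.default_rng(0)
true_rate = 5.0e4                                   # true CPS (hypothetical)
t_stamps = np.cumsum(rng.exponential(1 / true_rate, size=200_000))

def apply_dead_time(stamps, tau):
    """Keep only events arriving > tau after the last *accepted* event."""
    kept, last = 0, -np.inf
    for t in stamps:
        if t - last > tau:
            kept += 1
            last = t
    return kept / stamps[-1]                        # surviving count rate

taus = np.array([2e-6, 4e-6, 6e-6, 8e-6, 10e-6])    # imposed dead times, s
rates = np.array([apply_dead_time(t_stamps, tau) for tau in taus])

slope, intercept = np.polyfit(taus, 1 / rates, 1)   # 1/m = 1/n + tau
print(f"extrapolated rate ≈ {1/intercept:,.0f} CPS (truth {true_rate:,.0f})")
```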
Chenal, C; Legue, F; Nourgalieva, K; Brouazin-Jousseaume, V; Durel, S; Guitton, N
2000-01-01
In human radiation protection, the shape of the dose-effect curve for low-dose irradiation (LDI) is assumed to be linear, extrapolated from the clinical consequences of the Hiroshima and Nagasaki nuclear explosions. This extrapolation probably overestimates the risk below 200 mSv. In many circumstances, living species and cells can develop mechanisms of adaptation. Classical epidemiological studies will not be able to answer the question, and there is a need to establish more sensitive biological markers of the effects of LDI. Research should focus on DNA effects (strand breaks), the radioinduced expression of new genes, and proteins involved in the response to oxidative stress and DNA repair mechanisms. New experimental biomolecular techniques should be developed in parallel with more conventional ones. Such studies would permit the assessment of new biological markers of radiosensitivity, which could be of great interest in radiation protection and radio-oncology.
Unmasking the masked Universe: the 2M++ catalogue through Bayesian eyes
NASA Astrophysics Data System (ADS)
Lavaux, Guilhem; Jasche, Jens
2016-01-01
This work describes a full Bayesian analysis of the Nearby Universe as traced by galaxies of the 2M++ survey. The analysis is run in two sequential steps. The first step self-consistently derives the luminosity-dependent galaxy biases, the power spectrum of matter fluctuations, and the matter density fields within a Gaussian statistics approximation. The second step makes a detailed analysis of the three-dimensional large-scale structures, assuming a fixed bias model and a fixed cosmology, and allows for the reconstruction of both the final density field and the initial conditions at z = 1000. From these, we derive fields that self-consistently extrapolate the observed large-scale structures. We give two examples of these extrapolations and their utility for the detection of structures: the visibility of the Sloan Great Wall, and the detection and characterization of the Local Void using DIVA, a Lagrangian-based technique to classify structures.
Taking potential probability function maps to the local scale and matching them with land use maps
NASA Astrophysics Data System (ADS)
Garg, Saryu; Sinha, Vinayak; Sinha, Baerbel
2013-04-01
Source-receptor models have been developed using different methods. Residence-time-weighted concentration back-trajectory analysis and the Potential Source Contribution Function (PSCF) are the two most popular techniques for identifying potential sources of a substance in a defined geographical area. Both techniques use back trajectories calculated with global models and assign probability/concentration values to locations in an area. These values represent the probability of threshold exceedances, or the average concentration measured at the receptor in air masses with a certain residence time over a source area. Both techniques, however, have only been applied to regional and long-range transport phenomena, due to inherent limitations in both the spatial accuracy and the temporal resolution of back-trajectory calculations. Employing the above-mentioned concepts of residence-time-weighted concentration back-trajectory analysis and PSCF, we developed a source-receptor model capable of identifying local and regional sources of air pollutants such as particulate matter (PM), NOx, SO2, and VOCs. We use 1- to 30-minute averages of concentration values and of wind direction and speed, from a single receptor site or from multiple receptor sites, to trace the air mass back in time. The model code assumes all atmospheric transport to be Lagrangian and linearly extrapolates air masses reaching the receptor location backwards in time for a fixed number of steps. We restrict the model run to the lifetime of the chemical species under consideration. For long-lived species the model run is limited to <4 h, as spatial uncertainty increases the longer an air mass is linearly extrapolated back in time. The final model output is a map, which can be compared with the local land use map to pinpoint sources of different chemical substances and estimate their source strength. The model has flexible space-time grid extrapolation steps of 1-5 minutes and 1-5 km grid resolution. By making use of high-temporal-resolution data, the model can produce maps for different times of day, thus accounting for temporal changes and activity profiles of different sources. The main advantage of our approach over geostationary numerical methods, which interpolate measured concentration values from multiple measurement sites to produce maps (gridding), is that the maps produced are more accurate in terms of spatial identification of sources. The model was applied to isoprene and meteorological data recorded during the clean post-monsoon season (1 October - 7 October 2012) between 11 am and 4 pm at a receptor site in the north-west Indo-Gangetic Plain (IISER Mohali, 30.665° N, 76.729° E, 300 m asl), near the foothills of the Himalayan range. Considering the lifetime of isoprene, the model was run only 2 hours backward in time. The map shows the highest residence-time-weighted concentrations of isoprene (up to 3.5 ppbv) over agricultural land with a high number of trees (>180 trees/gridsquare); moderate concentrations (1.5-2.5 ppbv) for agricultural lands with low tree density, and values of 250 μg/m3 for traffic hotspots in Chandigarh City, are observed. Based on the validation against the land use maps, the model appears to do an excellent job in source apportionment and in identifying emission hotspots. Acknowledgement: We thank the IISER Mohali Atmospheric Chemistry Facility for data, and the Ministry of Human Resource Development (MHRD), India and IISER Mohali for funding the facility. Chinmoy Sarkar is acknowledged for technical support. Saryu Garg thanks the Max Planck-DST India Partner Group on Tropospheric OH reactivity and VOCs for funding the research.
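The linear back-extrapolation step at the heart of this model can be sketched in a few lines. Assuming the meteorological convention that wind direction is the direction the wind blows from, each backward step places the air mass upwind of the receptor (all numbers are hypothetical):

```python
# Linear back-extrapolation of an air mass from a receptor using the wind
# speed and direction observed at each time step (receptor-relative x/y, km).
import numpy as np

def back_trajectory(wind_speed_ms, wind_dir_deg, dt_min=5.0):
    """Return receptor-relative (x, y) positions going back in time.

    wind_dir_deg is the direction the wind blows FROM, so the air mass
    k steps back lies upwind of the receptor.
    """
    theta = np.radians(wind_dir_deg)
    ux, uy = np.sin(theta), np.cos(theta)          # unit vector pointing upwind
    step_km = wind_speed_ms * dt_min * 60.0 / 1000.0
    return np.cumsum(ux * step_km), np.cumsum(uy * step_km)

speed = np.full(24, 2.5)           # m/s, hypothetical constant wind
direction = np.full(24, 225.0)     # wind from the south-west
x, y = back_trajectory(speed, direction)
print(x[-1], y[-1])                # air-mass position 2 h before arrival
```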
L.R. Iverson; E.A. Cook; R.L. Graham
1989-01-01
An approach to extending high-resolution forest cover information across large regions is presented and validated. Landsat Thematic Mapper (TM) data were classified into forest and nonforest for a portion of Jackson County, Illinois. The classified TM image was then used to determine the relationship between forest cover and the spectral signature of Advanced Very High...
Rectenna array measurement results
NASA Technical Reports Server (NTRS)
Dickinson, R. M.
1980-01-01
The measured performance characteristics of a rectenna array are reviewed and compared to the performance of a single element. It is shown that the performance may be extrapolated from the individual element to that of the collection of elements. Techniques for current and voltage combining were demonstrated. The array performance as a function of various operating parameters is characterized, and techniques for overvoltage protection and automatic fault clearing in the array are demonstrated. A method for detecting failed elements also exists. Instrumentation for deriving performance effectiveness is described. Measured harmonic radiation patterns and fundamental-frequency scattered patterns for a low-level-illumination rectenna array are presented.
Confusion-limited galaxy fields. I - Simulated optical and near-infrared images
NASA Technical Reports Server (NTRS)
Chokshi, Arati; Wright, Edward L.
1988-01-01
Techniques for simulating images of galaxy fields are presented that extend to high redshifts and a surface density of galaxies high enough to produce overlapping images. The observed properties of galaxies and galaxy-ensembles in the 'local' universe are extrapolated to high redshifts using reasonable scenarios for the evolution of galaxies and their spatial distribution. This theoretical framework is then employed with Monte Carlo techniques to create fairly realistic two-dimensional distributions of galaxies plus optical and near-infrared sky images in a variety of model universes, using the appropriate density, luminosity, and angular size versus redshift relations.
A Comparison of Metamodeling Techniques via Numerical Experiments
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2016-01-01
This paper presents a comparative analysis of a few metamodeling techniques using numerical experiments for the single input-single output case. These experiments enable comparing the models' predictions with the phenomenon they are aiming to describe as more data is made available. These techniques include (i) prediction intervals associated with a least squares parameter estimate, (ii) Bayesian credible intervals, (iii) Gaussian process models, and (iv) interval predictor models. Aspects being compared are computational complexity, accuracy (i.e., the degree to which the resulting prediction conforms to the actual Data Generating Mechanism), reliability (i.e., the probability that new observations will fall inside the predicted interval), sensitivity to outliers, extrapolation properties, ease of use, and asymptotic behavior. The numerical experiments describe typical application scenarios that challenge the underlying assumptions supporting most metamodeling techniques.
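Of the four techniques compared, the least-squares prediction interval (technique (i)) is the most compact to illustrate. A hedged sketch on synthetic data; note how the interval widens as the query point extrapolates past the observed inputs, one of the properties such comparisons examine:

```python
# Classical prediction interval for a straight-line least-squares metamodel.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 12)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)   # synthetic data-generating mechanism

n = x.size
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(resid @ resid / (n - 2))               # residual standard error
sxx = np.sum((x - x.mean()) ** 2)

def prediction_interval(x0, alpha=0.05):
    """95% interval for a new observation at x0 (widens under extrapolation)."""
    half = stats.t.ppf(1 - alpha / 2, n - 2) * s * np.sqrt(
        1 + 1 / n + (x0 - x.mean()) ** 2 / sxx)
    mid = slope * x0 + intercept
    return mid - half, mid + half

print(prediction_interval(0.5))   # interpolation: narrow
print(prediction_interval(1.5))   # extrapolation: noticeably wider
```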
O'Reilly, J; Vintró, L León; Mitchell, P I; Donohue, I; Leira, M; Hobbs, W; Irvine, K
2011-05-01
The chronologies and sediment accumulation rates for a lake sediment sequence from Lough Carra (Co. Mayo, western Ireland) were established by applying the constant initial concentration (CIC) and constant rate of supply (CRS) hypotheses to the measured (210)Pb(excess) profile. The resulting chronologies were validated using the artificial fallout radionuclides (137)Cs and (241)Am, which provide independent chronostratigraphic markers for the second half of the 20th century. The validity of extrapolating the derived CIC and CRS dates below the (210)Pb dating horizon using average sedimentation rates was investigated using supplementary paleolimnological information and historical data. Our data confirm that such an extrapolation is well justified at sites characterised by relatively stable sedimentation conditions.
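The CIC model used here dates each depth from the decay of its excess 210Pb relative to the surface value, age(z) = ln(C0/C(z))/lambda. A minimal sketch with invented activities (not the Lough Carra data):

```python
# Constant initial concentration (CIC) 210Pb age model.
import numpy as np

LAMBDA_PB210 = np.log(2) / 22.3                            # decay constant, 1/yr

depth_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
excess_pb210 = np.array([250.0, 150.0, 90.0, 55.0, 33.0])  # Bq/kg (hypothetical)

ages_yr = np.log(excess_pb210[0] / excess_pb210) / LAMBDA_PB210
for z, a in zip(depth_cm, ages_yr):
    print(f"{z:5.1f} cm  ->  {a:6.1f} yr before coring")
```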
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valoppi, L.; Carlisle, J.; Polisini, J.
1995-12-31
A component of both human health and ecological risk assessments is the evaluation of toxicity values. A comparison is presented between the methodology for the development of Reference Doses (RfDs) intended to be protective of humans and that developed for vertebrate wildlife species. For all species, a chronic No Observable Adverse Effect Level (NOAEL) is developed by applying uncertainty factors (UFs) to literature-based toxicity values. Uncertainty factors are used to compensate for the length of exposure, the sensitivity of endpoints, and cross-species extrapolations between the test species and the species being assessed. Differences between human and wildlife species could include the toxicological endpoint, the critical study, and the magnitude of the cross-species extrapolation factor. Case studies for selected chemicals are presented which contrast RfDs developed for humans with those developed for avian and mammalian wildlife.
Aquatic effects assessment: needs and tools.
Marchini, Silvia
2002-01-01
In the assessment of the adverse effects pollutants can produce on exposed ecosystems, different approaches can be followed depending on the quality and quantity of information available; their advantages and limits are discussed with reference to the aquatic compartment. When experimental data are lacking, a predictive approach can be pursued by making use of validated quantitative structure-activity relationships (QSARs), which provide reliable ecotoxicity estimates only if appropriate models are applied. The experimental approach is central to any environmental hazard assessment procedure, although many uncertainties underlying the extrapolation from a limited set of single-species laboratory data to the complexity of the ecosystem (e.g., the limitations of common summary statistics, the variability of species sensitivity, the need to consider alterations at higher levels of integration) make the task difficult. When adequate toxicity information is available, the statistical extrapolation approach can be used to predict environmentally compatible concentrations.
Use of ALS data for digital terrain extraction and roughness parametrization in floodplain areas
NASA Astrophysics Data System (ADS)
Idda, B.; Nardinocchi, C.; Marsella, M.
2009-04-01
In order to undertake structural and land planning actions aimed at improving risk thresholds and reducing the vulnerability associated with floodplain inundation, evaluation of the area affected by channel overflow beyond the natural embankments is of essential importance. Floodplain models require the analysis of historical floodplain extents, the ground's morphological structure, and hydraulic measurements. Within this set of information, a more detailed characterization of the hydraulic roughness, which controls the velocity of the hydraulic flow, is an interesting challenge for achieving a 2D spatial distribution in the model. Remote sensing optical and radar techniques can be applied to generate 2D and 3D map products useful for delineating floodplain extent during the main event and extracting river cross-sections. Among these techniques, the enhancement that the Airborne Laser Scanner (ALS) has brought through its capability to extract high-resolution and accurate Digital Terrain Models is unquestionable. In hydraulic applications, a number of studies have investigated the use of ALS for DTM generation and approached quantitative estimation of the hydraulic roughness. The aim of this work is the generation of a digital terrain model and the estimation of hydraulic parameters useful for floodplain models from Airborne Laser Scanner data collected in a test area enclosing a portion of the drainage basin of the Mela river (Sicily, Italy). From the Airborne Laser Scanner dataset, a high-resolution Digital Elevation Model was first created; then, after applying filtering and classification processes, a dedicated procedure was implemented to automatically assign a value of the hydraulic roughness coefficient (in Manning's formulation) to each point within the floodplain. The obtained results allowed the generation of maps of equal roughness, dependent on hydraulic level, based on the application of empirical formulas for specific vegetation types at each classified ALS point.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manwaring, John, E-mail: manwaring.jd@pg.com; Rothe, Helga; Obringer, Cindy
Approaches to assess the role of absorption, metabolism, and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as they offer an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound were assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin, the following toxicokinetically relevant parameters were applied: a) Michaelis-Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated mean residence time in the viable epidermis; e) the viable epidermis thickness; and f) the skin permeability coefficient. In a next step, in vitro hepatocyte Km and Vmax values and whole-liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and the fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. Highlights: an entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes; skin penetration and epidermal conversion assessed in human skin explants and HaCaT; systemic metabolism modeled using hepatocyte cultures; toxicokinetically relevant parameters applied to estimate systemic exposure; good agreement between in vitro and in vivo data.
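The clearance-scaling step described above can be sketched with the standard well-stirred liver model, CLh = Q·fu·CLint,scaled/(Q + fu·CLint,scaled). The hepatocellularity, liver mass, blood flow, dose, and volume-of-distribution values below are generic placeholders, not the paper's parameters:

```python
# Scale an in vitro intrinsic clearance to whole-body hepatic clearance
# (well-stirred model), then extrapolate AUC and Cmax from an internal dose.

def hepatic_clearance(cl_int_ul_min_per_1e6, fu_blood,
                      hepatocellularity=120e6,    # cells per g liver (assumption)
                      liver_mass_g=1800.0,        # whole liver (assumption)
                      liver_blood_flow_l_h=90.0): # hepatic blood flow (assumption)
    """Return whole-body hepatic clearance in L/h."""
    # per-cell clearance -> whole liver, converting uL/min to L/h
    cl_int_l_h = (cl_int_ul_min_per_1e6 / 1e6 * hepatocellularity
                  * liver_mass_g) * 60.0 / 1e6
    q = liver_blood_flow_l_h
    return q * fu_blood * cl_int_l_h / (q + fu_blood * cl_int_l_h)

internal_dose_mg = 0.5                   # amount crossing the skin (hypothetical)
cl_h = hepatic_clearance(cl_int_ul_min_per_1e6=15.0, fu_blood=0.4)
auc_mg_h_per_l = internal_dose_mg / cl_h # systemic exposure
c_max = internal_dose_mg / 40.0          # conservative Cmax = dose / Vd (Vd = 40 L, assumption)
print(cl_h, auc_mg_h_per_l, c_max)
```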
Specific heat in KFe2As2 in zero and applied magnetic field
NASA Astrophysics Data System (ADS)
Kim, J. S.; Kim, E. G.; Stewart, G. R.; Chen, X. H.; Wang, X. F.
2011-05-01
The specific heat down to 0.08 K of the iron pnictide superconductor KFe2As2 was measured on a single-crystal sample with a residual resistivity ratio of ~650 and an onset Tc, determined from the specific heat, of 3.7 K. The zero-field normal-state specific heat divided by temperature, C/T, was extrapolated from above Tc to T = 0 by insisting on agreement between the extrapolated normal-state entropy at Tc, Sn,extrap(Tc), and the measured superconducting-state entropy at Tc, Ss,meas(Tc), since for a second-order phase transition the two entropies must be equal. This extrapolation would indicate that this rather clean sample of KFe2As2 exhibits non-Fermi-liquid behavior, i.e., C/T increases at low temperatures, in agreement with the reported non-Fermi-liquid behavior in the resistivity. However, the specific heat as a function of magnetic field shows that the shoulder feature around 0.7 K, which is commonly seen in KFe2As2 samples, is not evidence for a second superconducting gap as has been previously proposed, but instead is due to an unknown magnetic impurity phase, which can affect the entropy balance and the extrapolation of the normal-state specific heat. This peak (somewhat larger in magnitude), with similar field dependence, is also found in a less pure sample of KFe2As2 with a residual resistivity ratio of only 90 and an onset Tc of 3.1 K. These data, combined with the normal-state specific heat measured in a field sufficient to suppress superconductivity, allow the conclusion that an increase in the normal-state specific heat as T→0 is in fact not seen in KFe2As2; i.e., Fermi-liquid behavior is observed.
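The entropy-balance constraint used for the normal-state extrapolation can be made concrete: integrate the measured superconducting C/T up to Tc, then choose the normal-state Sommerfeld coefficient so that Sn(Tc) = Ss(Tc). A sketch on synthetic data, assuming a simple gamma + beta*T^2 normal state (not the paper's fit):

```python
# Fix the Sommerfeld coefficient gamma by entropy balance at Tc.
import numpy as np

tc = 3.7                                      # K
t_s = np.linspace(0.05, tc, 400)              # temperatures below Tc
c_over_t_sc = 25.0 * (t_s / tc) ** 2          # superconducting C/T, mJ/(mol K^2), synthetic
# measured S_s(Tc) = integral of (C/T) dT, trapezoidal rule
s_sc_tc = np.sum(0.5 * (c_over_t_sc[1:] + c_over_t_sc[:-1]) * np.diff(t_s))

beta = 0.5                                    # phonon coefficient, mJ/(mol K^4), assumption
# require S_n(Tc) = gamma*Tc + beta*Tc**3/3 to equal S_s(Tc)
gamma = (s_sc_tc - beta * tc**3 / 3.0) / tc
print(f"entropy-balanced gamma ≈ {gamma:.1f} mJ/(mol K^2)")
```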
Special Features of Galactic Dynamics
NASA Astrophysics Data System (ADS)
Efthymiopoulos, Christos; Voglis, Nikos; Kalapotharakos, Constantinos
This is an introductory article to some basic notions and currently open problems of galactic dynamics. The focus is on topics mostly relevant to the so-called `new methods' of celestial mechanics or Hamiltonian dynamics, as applied to the ellipsoidal components of galaxies, i.e., to the elliptical galaxies and to the dark halos and bulges of disk galaxies. Traditional topics such as Jeans theorem, the role of a `third integral' of motion, Nekhoroshev theory, violent relaxation, and the statistical mechanics of collisionless stellar systems are first discussed. The emphasis is on modern extrapolations of these old topics. Recent results from orbital and global dynamical studies of galaxies are then shortly reviewed. The role of various families of orbits in supporting self-consistency, as well as the role of chaos in galaxies, are stressed. A description is then given of the main numerical techniques of integration of the N-body problem in the framework of stellar dynamics and of the results obtained via N-Body experiments. A final topic is the secular evolution and self-organization of galactic systems.
A new PIC noise reduction technique
NASA Astrophysics Data System (ADS)
Barnes, D. C.
2014-10-01
Numerical solution of the Vlasov equation is considered in a general situation in which there is an underlying static solution (equilibrium). There are no further assumptions about dimensionality, smallness of orbits, or disparate time scales. The semi-characteristic (SC) method for Vlasov solution is described. The usual characteristics of the equation, which are the single-particle orbits, are modified in such a way that the equilibrium phase-space flow is removed. In this way, the shot noise introduced by the usual discrete-particle representation of the equilibrium is static in time and can be removed completely by subtraction. An almost exact algorithm for this is based on the observation that an (infinitesimal or) discrete time step of any equilibrium Monte Carlo realization is again a realization of the equilibrium, building up strings of associated simulation particles. In this way, the only added discretization error arises from the need to extrapolate the chain end points backward in time by one step dt using a canonical transformation. Previously developed energy-conserving, time-implicit methods are applied without modification. One-dimensional electrostatic examples of Landau damping and velocity-space instability are given to illustrate the method.
The R-matrix investigation of 8Li(α, n)11B reaction below 6 MeV
NASA Astrophysics Data System (ADS)
Kilic, Ali Ihsan; Muecher, Dennis; Garret, Paul; Svensson, Carl
2017-09-01
The investigation of cross sections for the 8Li(α, n)11B reaction has an important impact both for primordial nucleosynthesis in inhomogeneous models and for constraining the physical conditions characterizing the r-process. However, large discrepancies exist between inclusive and exclusive measurements of the cross section below 3 MeV. The R-matrix technique is a powerful tool for analyzing nuclear data for the purpose of extracting level information on the compound nucleus 12B and extrapolating the astrophysical S-factor to Gamow energies. We have applied R-matrix calculations to the 8Li(α, n)11B reaction and will present results for both the reaction rates and the partial S-factor. Combining the direct reaction contribution with the results from our R-matrix calculations, we can describe the experimental data from the inclusive measurements well. However, new experiments are needed in order to understand the role of neutron detection close to the threshold, for which we describe our experimental plans at ISAC, TRIUMF, using the newly developed DESCANT array.
Are Polar Field Magnetic Flux Concentrations Responsible for Missing Interplanetary Flux?
NASA Astrophysics Data System (ADS)
Linker, Jon A.; Downs, C.; Mikic, Z.; Riley, P.; Henney, C. J.; Arge, C. N.
2012-05-01
Magnetohydrodynamic (MHD) simulations are now routinely used to produce models of the solar corona and inner heliosphere for specific time periods. These models typically use magnetic maps of the photospheric magnetic field built up over a solar rotation, available from a number of ground-based and space-based solar observatories. The line-of-sight field at the Sun's poles is poorly observed, and the polar fields in these maps are filled with a variety of interpolation/extrapolation techniques. These models have been found to frequently underestimate the interplanetary magnetic flux near the minimum part of the cycle (Riley et al., 2012, in press; Stevens et al., 2012, in press) unless mitigating correction factors are applied. Hinode SOT observations indicate that strong concentrations of magnetic flux may be present at the poles (Tsuneta et al. 2008). The ADAPT flux evolution model (Arge et al. 2010) also predicts the appearance of such concentrations. In this paper, we explore the possibility that these flux concentrations may account for a significant amount of magnetic flux and alleviate discrepancies in interplanetary magnetic flux predictions. Research supported by AFOSR, NASA, and NSF.
van Erp, Y H; Koopmans, M J; Heirbaut, P R; van der Hoeven, J C; Weterings, P J
1992-06-01
A new method is described to investigate unscheduled DNA synthesis (UDS) in human tissue after exposure in vitro: the human hair follicle. A histological technique was applied to assess cytotoxicity and UDS in the same hair follicle cells. UDS induction was examined for 11 chemicals and the results were compared with literature findings for UDS in rat hepatocytes. Most chemicals inducing UDS in rat hepatocytes raised DNA repair at comparable concentrations in the hair follicle. However, 1 of 9 chemicals that gave a positive response in the rat hepatocyte UDS test, 2-acetylaminofluorene, failed to induce DNA repair in the hair follicle. Metabolizing potential of hair follicle cells was shown in experiments with indirectly acting compounds, i.e., benzo[a]pyrene, 7,12-dimethylbenz[a]anthracene and dimethylnitrosamine. The results support the conclusion that the test in its present state is valuable as a screening assay for the detection of unscheduled DNA synthesis. Moreover, the use of human tissues may result in a better extrapolation to man.
New Micro-Method for Prediction of Vapor Pressure of Energetic Materials
2014-07-01
...temperature is recorded as the extrapolated onset temperature (11-12). ... Gas chromatography (GC) headspace analysis requires the establishment of a... ...J. L.; Shinde, K.; Moran, J. Determination of the Vapor Density of Triacetone Triperoxide (TATP) Using a Gas Chromatography Headspace Technique. Propellants Explos. Pyrotech. 2005, 30 (2), 127-130. ... Chickos, J. S. Sublimation Vapor Pressures as Evaluated by Correlation-Gas Chromatography. J...
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is the rainfall rate) from disdrometer data has long been a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited, since radar represents a volume measurement while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy, given the limitations of these kinds of research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D DSD, and therefore a 3D Z-R measurement, using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop-size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop-size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit, since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has also been modified to read a standardized disdrometer data format (Joss-Waldvogel format). Other modifications to the software account for vertical ambient wind motion, as well as evaporation of the raindrop during its flight time.
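The vertical extrapolation step can be illustrated compactly: drops of diameter D observed at the ground at time t left height h roughly h/v(D) earlier, so the DSD aloft can be read from a time-shifted ground record. The sketch below uses the Atlas et al. (1973) fall-speed fit and a synthetic DSD series; it illustrates the idea, not the UCF code:

```python
# Estimate the drop-size distribution aloft from a time-shifted ground record.
import numpy as np

def terminal_velocity(d_mm):
    """Raindrop fall speed in m/s (Atlas et al. 1973 empirical fit)."""
    return 9.65 - 10.3 * np.exp(-0.6 * d_mm)

def dsd_aloft(ground_dsd, times_s, d_mm, height_m, t_now_s):
    """N(D) at height_m and time t_now_s from a ground time series.

    Drops of diameter d_mm now at height_m reach the disdrometer at
    t_now_s + height_m / v(D), so sample the ground record there
    (retrospective analysis; requires data after t_now_s).
    """
    arrival = t_now_s + height_m / terminal_velocity(d_mm)
    return np.interp(arrival, times_s, ground_dsd)

times = np.arange(0.0, 600.0, 10.0)             # 10-min record, 10-s samples
n_ground = 100.0 + 40.0 * np.sin(times / 60.0)  # synthetic N(D), one size bin, m^-3 mm^-1
print(dsd_aloft(n_ground, times, d_mm=1.5, height_m=500.0, t_now_s=120.0))
```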
The forecast for RAC extrapolation: mostly cloudy.
Goldman, Elizabeth; Jacobs, Robert; Scott, Ellen; Scott, Bonnie
2011-09-01
The current statutory and regulatory guidance for recovery audit contractor (RAC) extrapolation leaves providers with minimal protection against the process and a limited ability to challenge overpayment demands. Providers not only should understand the statutory and regulatory basis for extrapolation, but also should be able to assess their extrapolation risk and their recourse through regulatory safeguards against contractor error. Providers also should aggressively appeal all incorrect RAC denials to minimize the potential impact of extrapolation.
Flight test derived heating math models for critical locations on the orbiter during reentry
NASA Technical Reports Server (NTRS)
Hertzler, E. K.; Phillips, P. W.
1983-01-01
An analysis technique was developed for expanding the aerothermodynamic envelope of the Space Shuttle without subjecting the vehicle to sustained flight at more stressing heating conditions. A transient analysis program was developed to take advantage of the transient maneuvers that were flown as part of this analysis technique. Heat rates were derived from flight test data for various locations on the orbiter. The flight derived heat rates were used to update heating models based on predicted data. Future missions were then analyzed based on these flight adjusted models. A technique for comparing flight and predicted heating rate data and the extrapolation of the data to predict the aerothermodynamic environment of future missions is presented.
A Sub-filter Scale Noise Equation for Hybrid LES Simulations
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid LES/subscale modeling approaches have an important advantage over current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration-dependent larger-scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the subscale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between current noise prediction codes and previously proposed hybrid techniques.
[Progress in transgenic fish techniques and application].
Ye, Xing; Tian, Yuan-Yuan; Gao, Feng-Ying
2011-05-01
Transgenic techniques provide a new way for fish breeding. Stable lines of growth hormone (GH) gene-transfer carp, salmon, and tilapia, as well as fluorescent protein gene-transfer zebrafish and white cloud mountain minnow, have been produced. The fast-growth characteristic of GH transgenic fish will be of great importance for promoting aquaculture production and economic efficiency. This paper summarizes progress in transgenic fish research and ecological assessments. Microinjection is still the most commonly used method, but it often results in multi-site and multi-copy integration. Co-injection of transposon or meganuclease will greatly improve the efficiency of gene transfer and integration. "All-fish" genes or "auto genes" should be considered for producing transgenic fish, in order to eliminate misgivings about food safety and to benefit expression of the transferred gene. Environmental risk is the biggest obstacle to commercial application of transgenic fish. Data indicate that transgenic fish have inferior fitness compared with traditional domestic fish. However, because of genotype-by-environment effects, it is difficult to extrapolate from the ecological consequences of transgenic fish determined in the laboratory to the complex ecological interactions that occur in nature. It is critical to establish highly naturalized environments for acquiring reliable data that can be used to evaluate the environmental risk. Efficacious physical and biological containment strategies remain crucial approaches to ensure the safe application of transgenic fish technology.
Approach for extrapolating in vitro metabolism data to refine bioconcentration factor estimates.
Cowan-Ellsberry, Christina E; Dyer, Scott D; Erhardt, Susan; Bernhard, Mary Jo; Roe, Amy L; Dowty, Martin E; Weisbrod, Annie V
2008-02-01
National and international chemical management programs are assessing thousands of chemicals for their persistent, bioaccumulative, and toxic properties; however, data for evaluating the bioaccumulation potential in fish are limited. Computer-based models that account for the uptake and elimination processes that contribute to bioaccumulation may help to meet the need for reliable estimates. One critical elimination process is metabolic transformation. It has been suggested that in vitro metabolic transformation tests using fish liver hepatocytes or S9 fractions can provide rapid and cost-effective measurements of fish metabolic potential, which could be used to refine bioconcentration factor (BCF) computer model estimates. Therefore, recent activity has focused on developing in vitro methods to measure metabolic transformation in cellular and subcellular fish liver fractions. A method to extrapolate in vitro test data to whole-body metabolic transformation rates is presented that could be used to refine BCF computer model estimates. This extrapolation approach is based on concepts used to determine the fate and distribution of drugs within the human body, which have successfully supported the development of new pharmaceuticals for years. In addition, this approach has already been applied in physiologically based toxicokinetic models for fish. The validity of the in vitro to in vivo extrapolation is illustrated using the rate of loss of parent chemical measured in two independent in vitro test systems: (1) a subcellular enzymatic test using the trout liver S9 fraction, and (2) primary hepatocytes isolated from the common carp. The test chemicals evaluated have high-quality in vivo BCF values and a range of log K(ow) from 3.5 to 6.7. The results show very good agreement between measured and estimated BCF values when the extrapolated whole-body metabolism rates are included, suggesting that in vitro biotransformation data could effectively be used to reduce in vivo BCF testing and refine BCF model estimates. However, additional fish physiological data for parameterization and validation over a wider range of chemicals are needed.
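As a rough illustration of the extrapolation logic described above (in vitro clearance scaled to a whole-body biotransformation rate constant, then inserted into a one-compartment BCF model), here is a minimal sketch; every scaling factor and rate constant in it is an assumed placeholder, not a value from the paper, and the real approach treats additional corrections (binding, temperature, physiology) rigorously.

```python
# Minimal sketch of in vitro -> in vivo clearance extrapolation for a fish
# BCF model. All values below are illustrative placeholders.

clint_s9 = 0.5        # in vitro intrinsic clearance (mL / h / mg S9 protein)
s9_yield = 25.0       # mg S9 protein per g liver (assumed)
liver_frac = 0.015    # liver mass as fraction of body mass (assumed)
fu = 0.1              # free fraction correcting for binding (assumed)

# Scale to a whole-body biotransformation rate constant k_met (1/day):
# mL/h/mg -> L/day/kg fish, then divide by an apparent volume of distribution.
cl_whole = clint_s9 * s9_yield * liver_frac * 1000 * 24 / 1000  # L / day / kg
vd = 5.0              # apparent volume of distribution (L/kg, assumed)
k_met = fu * cl_whole / vd

# One-compartment bioconcentration model (gill uptake k1, elimination k2, 1/d):
k1, k2 = 500.0, 0.05  # illustrative rate constants
bcf_no_met = k1 / k2
bcf_refined = k1 / (k2 + k_met)
print(f"BCF without metabolism: {bcf_no_met:.0f}, refined: {bcf_refined:.0f}")
```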
Development of a primary standard for absorbed dose from unsealed radionuclide solutions
NASA Astrophysics Data System (ADS)
Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.
2016-12-01
Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve the required traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of the internal radiation dose assessment methods. The absorbed dose from an yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK’s National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying a MC calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of an 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of an 90YCl solution with the extrapolation chamber was found to be 1.6% (k = 1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution. Internal radiation dose assessment methods based on MIRD and RADAR data for 90Y have been validated with experimental absorbed dose determination and they agree within the stated expanded uncertainty (k = 2).
Detecting, anticipating, and predicting critical transitions in spatially extended systems.
Kwasniok, Frank
2018-03-01
A data-driven linear framework for detecting, anticipating, and predicting incipient bifurcations in spatially extended systems based on principal oscillation pattern (POP) analysis is discussed. The dynamics are assumed to be governed by a system of linear stochastic differential equations which is estimated from the data. The principal modes of the system together with corresponding decay or growth rates and oscillation frequencies are extracted as the eigenvectors and eigenvalues of the system matrix. The method can be applied to stationary datasets to identify the least stable modes and assess the proximity to instability; it can also be applied to nonstationary datasets using a sliding window approach to track the changing eigenvalues and eigenvectors of the system. As a further step, a genuinely nonstationary POP analysis is introduced. Here, the system matrix of the linear stochastic model is time-dependent, allowing for extrapolation and prediction of instabilities beyond the learning data window. The methods are demonstrated and explored using the one-dimensional Swift-Hohenberg equation as an example, focusing on the dynamics of stochastic fluctuations around the homogeneous stable state prior to the first bifurcation. The POP-based techniques are able to extract and track the least stable eigenvalues and eigenvectors of the system; the nonstationary POP analysis successfully predicts the timing of the first instability and the unstable mode well beyond the learning data window.
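As an illustration of the stationary variant of the analysis, the sketch below estimates the one-step propagator of a linear stochastic system from a (here synthetic) multivariate time series via lag-covariance matrices and reads decay rates and oscillation frequencies off its eigenvalues; this is a generic POP estimator, not the authors' exact implementation.

```python
import numpy as np

# Generate synthetic data from a known linear stochastic system, then
# recover its modes with a standard POP estimate.
rng = np.random.default_rng(0)
n, T, dt = 4, 5000, 0.1
A_true = np.array([[-0.2,  1.0, 0.0,  0.0],
                   [-1.0, -0.2, 0.0,  0.0],
                   [ 0.0,  0.0, -0.05, 0.0],
                   [ 0.0,  0.0, 0.0, -0.5]])
X = np.zeros((T, n))
for t in range(T - 1):                       # Euler-Maruyama integration
    X[t + 1] = X[t] + dt * A_true @ X[t] + np.sqrt(dt) * rng.normal(size=n)

# Propagator estimate from lag-1 and lag-0 covariances: G = C1 @ inv(C0)
C0 = X[:-1].T @ X[:-1] / (T - 1)
C1 = X[1:].T @ X[:-1] / (T - 1)
G = C1 @ np.linalg.inv(C0)

eigvals, pops = np.linalg.eig(G)             # POPs = eigenvectors of G
growth = np.log(np.abs(eigvals)) / dt        # decay (<0) or growth (>0) rates
freq = np.angle(eigvals) / dt                # oscillation frequencies
print(np.round(growth, 3), np.round(freq, 3))
```

The least stable mode is the one whose growth rate is closest to zero; tracking it in a sliding window is the basis of the anticipation scheme described above.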
NASA Astrophysics Data System (ADS)
Bollmann, J.; Brabec, B.
2001-12-01
Analyses of the abundance and assemblage composition of microplankton, together with their chemical and stable isotopic composition, have been among the most successful methods in paleoceanography. One of the most frequently applied techniques for reconstructing paleo-temperature is a transfer function using the relative abundance of planktic foraminifera in sediment samples. Here we present evidence suggesting that absolute sea surface temperature for a given location can also be calculated from the relative abundance of Gephyrocapsa morphotypes in sediment samples, with an accuracy comparable to foraminifera transfer functions. By extrapolating this finding, paleo-environmental interpretations can be obtained for the Late Pleistocene, and discrepancies between the different currently used methods (e.g., foraminifer, alkenone, and Mg/Ca derived temperature estimates) might be resolved. Eighty-one Holocene sediment samples were selected from the Pacific, Indian, and Atlantic Oceans, covering a temperature gradient from 13.4 °C to 29.4 °C, a salinity gradient from 32.21 to 37.34, and a productivity gradient from 0.045 to 0.492 µg chlorophyll/L. Standard multiple linear regression analyses were applied to this data set, linking the relative abundance of Gephyrocapsa morphotypes to mean sea surface temperature. The best model revealed an r² of 0.8 with a standard residual error of 1.8 °C for calculation of the mean sea surface temperature.
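A minimal sketch of the regression step follows, using synthetic placeholder abundances and temperatures in place of the 81 core-top samples; the actual calibration links measured Gephyrocapsa morphotype abundances to observed mean SST.

```python
import numpy as np

# Synthetic stand-in data: relative abundances of four morphotypes per sample
# (rows sum to 1) and a mean SST per site with some scatter.
rng = np.random.default_rng(1)
n_samples, n_morphotypes = 81, 4
abundance = rng.dirichlet(np.ones(n_morphotypes), size=n_samples)
sst = 13.4 + 16.0 * abundance[:, 0] + rng.normal(0, 1.5, n_samples)

# Ordinary multiple linear regression: SST ~ intercept + morphotype abundances
Xd = np.column_stack([np.ones(n_samples), abundance])
coef, *_ = np.linalg.lstsq(Xd, sst, rcond=None)
pred = Xd @ coef
resid = sst - pred
r2 = 1 - resid.var() / sst.var()
rmse = np.sqrt(np.mean(resid**2))
print(f"r^2 = {r2:.2f}, residual error = {rmse:.1f} deg C")
```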
Progress in extrapolating divertor heat fluxes towards large fusion devices
NASA Astrophysics Data System (ADS)
Sieglin, B.; Faitsch, M.; Eich, T.; Herrmann, A.; Suttrop, W.; Collaborators, JET; the MST1 Team; the ASDEX Upgrade Team
2017-12-01
Heat load to the plasma facing components is one of the major challenges for the development and design of large fusion devices such as ITER. Present-day fusion experiments can operate with heat load mitigation techniques, e.g. sweeping and impurity seeding, but do not generally require them. For large fusion devices, however, heat load mitigation will be essential. This paper presents the current progress of the extrapolation of steady state and transient heat loads towards large fusion devices. Among transient heat loads, so-called edge localized modes (ELMs) are considered a serious issue for the lifetime of divertor components. In this paper, ITER operation at half field (2.65 T) and half current (7.5 MA) is discussed considering the current material limit for the divertor peak energy fluence of 0.5 MJ/m². Recent studies were successful in describing the observed energy fluence in JET, MAST and ASDEX Upgrade using the pedestal pressure prior to the ELM crash. Extrapolating this towards ITER results in a more benign heat load compared to previous scalings. In the presence of magnetic perturbations, the axisymmetry is broken and a 2D heat flux pattern is induced on the divertor target, leading to a local increase of the heat flux, which is a concern for ITER. It is shown that for a moderate divertor broadening S/λ_q > 0.5 the toroidal peaking of the heat flux disappears.
Application of CCD drift-scan photoelectric technique on monitoring GEO satellites
NASA Astrophysics Data System (ADS)
Yu, Yong; Zhao, Xiao-Fen; Luo, Hao; Mao, Yin-Dun; Tang, Zheng-Hong
2018-05-01
Geosynchronous Earth Orbit (GEO) satellites are widely used because of their unique characteristics of high orbit and remaining permanently over the same area of the sky. Precise monitoring of GEO satellites can provide a key reference for judging satellite operation status, capturing and identifying targets, and analyzing collision warnings. Observation with ground-based optical telescopes plays an important role in monitoring GEO targets. Unlike distant celestial bodies, a GEO target moves relative to the background reference stars, which limits the conventional observation method for long focal length telescopes. The CCD drift-scan photoelectric technique is applied to monitoring GEO targets. With the telescope parked, good round images of the background reference stars and of the GEO target in the same sky region can be obtained by alternating between CCD drift-scan mode and CCD stare mode, thereby improving the precision of celestial positioning for the GEO target. Observation experiments on GEO targets were carried out with the 1.56-m telescope of Shanghai Astronomical Observatory. The results show that the CCD drift-scan photoelectric technique brings the precision of observing the GEO target to the level of 0.2″, fully exploiting the advantage of the telescope's long focal length. The effect of orbit improvement based on multiple passes of observations is obvious, and the precision of predictions extrapolated to 72 h is on the order of several arcseconds in azimuth and elevation.
Polarizabilities of highly ionized atoms
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Wolf, M. L.
1979-01-01
An extrapolation method based on a screening approximation, applied to available initial values of polarizability for low stages of ionization, is used to obtain dipole and quadrupole polarizabilities for more highly ionized members of many isoelectronic sequences. It is suggested that the derived screening constants x_L and limiting ratios F_L may have significant physical meaning, especially the latter, which may have an interpretation in terms of hydrogenic polarizabilities.
Potential SPOT-1 R/B-Cosmos 1680 R/B collision
NASA Technical Reports Server (NTRS)
Henize, Karl G.; Rast, Richard H.
1989-01-01
Detailed NORAD data have revealed updated orbital elements for the Ariane third-stage rocket body that underwent breakup on November 13, 1986, as well as for the Cosmos 1680 rocket body. Applying the maximum expected error due to the extrapolation of orbital elements to the date of the possible collision between the two bodies shows the smallest possible distance between bodies to have been 380 km, thereby precluding collision.
Coelioscopic and Endoscope-Assisted Sterilization of Chelonians.
Proença, Laila M; Divers, Stephen J
2015-09-01
Elective sterilization is a safe and well-established surgical procedure performed in dogs and cats worldwide. Conversely, chelonian sterilization has been mostly performed therapeutically, because of the intricate anatomy and difficult access to the reproductive organs, and consequently, reproductive problems and diseases remain common. With the advance of veterinary endoscopy, novel techniques of soft tissue prefemoral coelioscopic and endoscope-assisted sterilization have been published, and preventative chelonian sterilization is now a reality. Nevertheless, extrapolations between species should be carefully considered, and further studies are warranted. This article summarizes and describes the current coelioscopic and coelioscope-assisted sterilization techniques for chelonia.
Rectenna array measurement results. [Satellite power transmission and reception]
NASA Technical Reports Server (NTRS)
Dickinson, R. M.
1980-01-01
The measured performance characteristics of a rectenna array are reviewed and compared to the performance of a single element. It is shown that the performance may be extrapolated from the individual element to that of the collection of elements. Techniques for current and voltage combining are demonstrated. The array performance as a function of various operating parameters is characterized and techniques for overvoltage protection and automatic fault clearing in the array are demonstrated. A method for detecting failed elements also exists. Instrumentation for deriving performance effectiveness is described. Measured harmonic radiation patterns and fundamental frequency scattered patterns for a low level illumination rectenna array are presented.
Zhong, Sheng-hua; Ma, Zheng; Wilson, Colin; Liu, Yan; Flombaum, Jonathan I
2014-01-01
Intuitively, extrapolating object trajectories should make visual tracking more accurate. This has proven to be true in many contexts that involve tracking a single item. But surprisingly, when tracking multiple identical items in what is known as "multiple object tracking," observers often appear to ignore direction of motion, relying instead on basic spatial memory. We investigated potential reasons for this behavior through probabilistic models that were endowed with perceptual limitations in the range of typical human observers, including noisy spatial perception. When we compared a model that weights its extrapolations relative to other sources of information about object position with one that does not extrapolate at all, we found no reliable difference in performance, belying the intuition that extrapolation always benefits tracking. In follow-up experiments we found this to be true for a variety of models that weight observations and predictions in different ways; in some cases we even observed worse performance for models that use extrapolations than for a model that does not extrapolate at all. Ultimately, the best performing models either did not extrapolate, or extrapolated very conservatively, relying heavily on observations. These results illustrate the difficulty and attendant hazards of using noisy inputs to extrapolate the trajectories of multiple objects simultaneously in situations with targets and featurally confusable nontargets. PMID:25311300
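The modelling question can be made concrete with a small simulation: blend a noisy motion extrapolation with a noisy position observation and vary the extrapolation weight w. All noise levels below are illustrative, not the paper's fitted perceptual parameters; with sufficiently noisy velocity estimates, larger w stops helping, which is the qualitative effect reported.

```python
import numpy as np

# Toy next-frame tracking: estimate = w * extrapolation + (1 - w) * observation.
rng = np.random.default_rng(2)
n_objects, n_frames = 8, 200
pos = rng.uniform(0, 100, n_objects)
vel = rng.normal(0, 1.0, n_objects)
sigma_obs, sigma_vel = 2.0, 1.5      # spatial and velocity perception noise

for w in (0.0, 0.3, 0.7, 1.0):       # w = 0: memory-only, no extrapolation
    err, est, p = [], pos.copy(), pos.copy()
    for _ in range(n_frames):
        p = p + vel                                          # true motion
        obs = p + rng.normal(0, sigma_obs, n_objects)        # noisy observation
        vel_hat = vel + rng.normal(0, sigma_vel, n_objects)  # noisy velocity
        pred = est + vel_hat                                 # extrapolated estimate
        est = w * pred + (1 - w) * obs                       # blended estimate
        err.append(np.sqrt(np.mean((est - p) ** 2)))
    print(f"w = {w:.1f}: RMS error = {np.mean(err):.2f}")
```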
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis-set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules, including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
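The two-point formula quoted above has a closed-form solution for the basis-set limit; the sketch below applies it to a DZ/TZ pair (L = 2, 3). The energies and exponents are invented for illustration.

```python
# Two-point extrapolation E(L) = E_CBS + B / L**alpha, solved for E_CBS from
# correlation energies at two cardinal numbers L.

def cbs_two_point(e_lo, e_hi, alpha, l_lo=2, l_hi=3):
    """Extrapolate the complete-basis-set limit from two correlation energies."""
    w_lo, w_hi = l_lo**alpha, l_hi**alpha
    return (e_hi * w_hi - e_lo * w_lo) / (w_hi - w_lo)

e_dz, e_tz = -0.2201, -0.2586   # correlation energies in hartree (made up)
print(cbs_two_point(e_dz, e_tz, alpha=2.4))  # a global exponent
print(cbs_two_point(e_dz, e_tz, alpha=3.0))  # a system-dependent exponent,
                                             # e.g. fitted from cheap MP2 runs
```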
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
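One way to read the method is as similarity-based selection followed by area scaling; the sketch below illustrates that reading with invented data and a plain Euclidean similarity measure, which is an assumption rather than the authors' actual selection criterion.

```python
import numpy as np

# Select sub-basins whose standardized climate/landscape descriptors resemble
# the large basin, then scale their summed discharge by drainage area.
rng = np.random.default_rng(5)
n_sub = 200
features = rng.normal(0, 1, (n_sub, 3))      # standardized descriptors
area = rng.uniform(50, 500, n_sub)           # sub-basin areas (km^2)
q = rng.uniform(200, 600, n_sub) * area      # annual discharge (arbitrary units)

big_features = np.array([0.2, -0.1, 0.3])    # large basin's descriptors
big_area = 1.7e6                             # large basin area (km^2)

dist = np.linalg.norm(features - big_features, axis=1)
pick = np.argsort(dist)[:8]                  # the most similar sub-basins
specific_q = q[pick].sum() / area[pick].sum()   # discharge per unit area
print(f"estimated annual discharge: {specific_q * big_area:.3g}")
```

Repeating the selection with different, equally similar sub-basin sets yields the ensemble of predictions that brackets the uncertainty.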
Slaughter, Andrew R; Palmer, Carolyn G; Muller, Wilhelmine J
2007-04-01
In aquatic ecotoxicology, acute-to-chronic ratios (ACRs) are often used to predict chronic responses from available acute data to derive water quality guidelines, despite many problems associated with this method. This paper explores the comparative protectiveness and accuracy of predicted guideline values derived from the ACR, linear regression analysis (LRA), and multifactor probit analysis (MPA) extrapolation methods applied to acute toxicity data for aquatic macroinvertebrates. Although the authors of the LRA and MPA methods advocate the use of extrapolated lethal effects in the 0.01% to 10% lethal concentration (LC0.01-LC10) range to predict safe chronic exposure levels to toxicants, the use of an extrapolated LC50 value divided by a safety factor of 5 was additionally explored here because of the higher statistical confidence surrounding the LC50 value. The LRA LC50/5 method was found to compare most favorably with available experimental chronic toxicity data and was therefore most likely to be sufficiently protective, although further validation using additional species is needed. Values derived by the ACR method were the least protective. It is suggested that there is an argument for replacing ACRs in developing water quality guidelines with the LRA LC50/5 method.
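As a simplified illustration of the LC50/5 idea (not the exact LRA or MPA procedures), the sketch below regresses probit-transformed mortality on log concentration, reads off the LC50, and applies the safety factor of 5; the acute test data are invented.

```python
import numpy as np
from scipy import stats

# Invented acute test data: exposure concentrations (mg/L) and mortalities.
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
mortality = np.array([0.05, 0.15, 0.45, 0.80, 0.97])

x = np.log10(conc)
y = stats.norm.ppf(mortality)                 # probit transform
slope, intercept, *_ = stats.linregress(x, y)

# 50% mortality corresponds to probit value 0; invert the regression line.
lc50 = 10 ** ((stats.norm.ppf(0.5) - intercept) / slope)
guideline = lc50 / 5
print(f"LC50 = {lc50:.2f} mg/L, guideline = {guideline:.2f} mg/L")
```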
Diego A. Riveros-Iregui; Brian L. McGlynn; Howard E. Epstein; Daniel L. Welsch
2008-01-01
Soil CO2 efflux is a large respiratory flux from terrestrial ecosystems and a critical component of the global carbon (C) cycle. Lack of process understanding of the spatiotemporal controls on soil CO2 efflux limits our ability to extrapolate from fluxes measured at point scales to scales useful for corroboration with other ecosystem level measures of C exchange....
Empirical single sample quantification of bias and variance in Q-ball imaging.
Hainline, Allison E; Nath, Vishwesh; Parvathaneni, Prasanna; Blaber, Justin A; Schilling, Kurt G; Anderson, Adam W; Kang, Hakmook; Landman, Bennett A
2018-02-06
The bias and variance of high angular resolution diffusion imaging methods have not been thoroughly explored in the literature; the simulation extrapolation (SIMEX) and bootstrap techniques offer a way to estimate the bias and variance of high angular resolution diffusion imaging metrics. The SIMEX approach is well established in the statistics literature and uses simulation of increasingly noisy data to extrapolate back to a hypothetical case with no noise. The bias of calculated metrics can then be computed by subtracting the SIMEX estimate from the original pointwise measurement. The SIMEX technique has been studied in the context of diffusion imaging to accurately capture the bias in fractional anisotropy measurements in DTI. Herein, we extend the application of SIMEX and bootstrap approaches to characterize bias and variance in metrics obtained from a Q-ball imaging reconstruction of high angular resolution diffusion imaging data. The results demonstrate that the SIMEX and bootstrap approaches provide consistent estimates of the bias and variance of generalized fractional anisotropy, respectively. The RMSE for the generalized fractional anisotropy estimates shows a 7% decrease in white matter and an 8% decrease in gray matter when compared with the observed generalized fractional anisotropy estimates. On average, the bootstrap technique results in SD estimates that are approximately 97% of the true variation in white matter, and 86% in gray matter. Both SIMEX and bootstrap methods are flexible, estimate population characteristics based on single scans, and may be extended for bias and variance estimation on a variety of high angular resolution diffusion imaging metrics.
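The SIMEX recipe itself is compact; the sketch below applies it to a placeholder scalar metric rather than an actual Q-ball reconstruction: re-simulate with inflated noise at levels λ, fit a trend, and extrapolate to λ = -1. The noise level and metric are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(1.0, 0.05, size=3000)   # measured, already-noisy data

def metric(x):
    """Placeholder noise-inflated metric (stands in for, e.g., GFA)."""
    return x.std() / x.mean()

sigma = 0.05                                # assumed measurement noise level
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])  # extra-noise levels lambda
vals = [np.mean([metric(signal + rng.normal(0, np.sqrt(l) * sigma, signal.size))
                 for _ in range(50)]) for l in lams]

coeffs = np.polyfit(lams, vals, 2)          # quadratic extrapolant in lambda
simex_estimate = np.polyval(coeffs, -1.0)   # extrapolate to lambda = -1
bias = vals[0] - simex_estimate
print(f"observed = {vals[0]:.4f}, SIMEX = {simex_estimate:.4f}, bias = {bias:.4f}")
```

Adding noise of variance λσ² to data that already carry variance σ² gives total variance (1 + λ)σ², so λ = -1 corresponds to the hypothetical noise-free case.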
Exploration of solar photospheric magnetic field data sets using the UCSD tomography
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H.-S.; Buffington, A.; Hick, P. P.; Nishimura, N.; Nozaki, N.; Tokumaru, M.; Fujiki, K.; Hayashi, K.
2016-12-01
This article investigates the use of two different types of National Solar Observatory magnetograms and two different coronal field modeling techniques over 10 years. Both the "open-field" Current Sheet Source Surface (CSSS) and a "closed-field" technique using CSSS modeling are compared. The University of California, San Diego, tomographic modeling, using interplanetary scintillation data from Japan, provides the global velocities to extrapolate these fields outward, which are then compared with fields measured in situ near Earth. Although the open-field technique generally gives a better result for radial and tangential fields, we find that a portion of the closed extrapolated fields measured in situ near Earth comes from the direct outward mapping of these fields in the low solar corona. All three closed-field components are nonzero at 1 AU and are compared with the appropriate magnetometer values. A significant positive correlation exists between these closed-field components and the in situ measurements over the last 10 years. We determine that a small fraction of the static low-coronal component flux, which includes the Bn (north-south) component, regularly escapes from closed-field regions. The closed-field flux fraction varies by about a factor of 3 from a mean value during this period, relative to the magnitude of the field components measured in situ near Earth, and maximizes in 2014. This implies that a relatively more efficient process for closed-flux escape occurs near solar maximum. We also compare and find that the popular Potential Field Source Surface and CSSS model closed fields are nearly identical in sign and strength.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, T
Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of ¹²⁵I brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range x_0 of secondary electrons originating from the electrode. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit to the so-called 'extrapolation curve', the measured ionization charges Q vs. plate separations x; or (2) to differentiate Q(x) and derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, a precondition is that the measured data be corrected for all known influencing effects before the evaluation method is applied. However, the discrepancy between their results is larger than the uncertainty given for the determination of the RAKR with either method. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x_1, x_2) is applied to the difference of charges measured at the two plate separations x_1 and x_2. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma K_a(0) for plate separation zero, and the mean air kermas at the plate separations x_1 and x_2, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method shall be established in the future.
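Method (1) amounts to a straight-line fit of corrected charge (or current) against plate separation; a minimal sketch follows, with invented readings and the standard ionometric conversion via W/e, air density, and electrode area, omitting the many correction factors a primary standard requires.

```python
import numpy as np

# Invented, already-corrected extrapolation-curve data.
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5]) * 1e-3        # plate separations (m)
I = np.array([0.62, 1.21, 1.83, 2.42, 3.05]) * 1e-9   # ionization current (A)

slope, _ = np.polyfit(x, I, 1)        # dI/dx in A/m, method (1)

rho_air = 1.204    # air density at reference conditions (kg/m^3)
area = 1.0e-4      # collecting electrode area (m^2, assumed)
w_over_e = 33.97   # mean energy per ion pair in air (J/C)

# Dose rate to air ~ (W/e) * (dI/dx) / (rho * A), corrections omitted.
dose_rate = w_over_e * slope / (rho_air * area)
print(f"dI/dx = {slope:.3e} A/m -> dose rate = {dose_rate:.3e} Gy/s")
```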
NASA Technical Reports Server (NTRS)
Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.
1974-01-01
The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 × 10⁶ based on mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat-plate skin friction and component shape factors. An alternative method of extrapolation, based on computed profile drag from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
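The first extrapolation method named above builds minimum profile drag from flat-plate skin friction and component shape factors; the sketch below shows that kind of build-up using the Prandtl-Schlichting flat-plate correlation, with form factors and wetted-area ratios that are illustrative assumptions, not C-141A values.

```python
import numpy as np

def cf_turbulent(re):
    """Prandtl-Schlichting turbulent flat-plate skin friction coefficient."""
    return 0.455 / np.log10(re) ** 2.58

# (Reynolds number, form factor, S_wet / S_ref) per component -- all assumed.
components = {
    "wing":     (2.5e7, 1.25, 1.95),
    "fuselage": (1.6e8, 1.10, 3.10),
    "tail":     (1.8e7, 1.20, 0.65),
}
cd0 = sum(cf_turbulent(re) * ff * swet
          for re, ff, swet in components.values())
print(f"estimated minimum profile drag coefficient: {cd0:.4f}")
```

Evaluating the same build-up at model-scale and flight-scale Reynolds numbers is what carries the wind tunnel result to flight conditions.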
First Direct Measurement of ¹²C(¹²C,n)²³Mg at Stellar Energies
Bucher, B.; Tang, X. D.; Fang, X.; ...
2015-06-25
Neutrons produced by the carbon fusion reaction ¹²C(¹²C,n)²³Mg play an important role in stellar nucleosynthesis. However, past studies have shown large discrepancies between experimental data and theory, leading to an uncertain cross-section extrapolation at astrophysical energies. Here we present the first direct measurement that extends deep into the astrophysical energy range, along with a new and improved extrapolation technique based on experimental data from the mirror reaction ¹²C(¹²C,p)²³Na. The new reaction rate has been determined with a well-defined uncertainty that exceeds the precision required by astrophysics models. Using our constrained rate, we find that ¹²C(¹²C,n)²³Mg is crucial to the production of Na and Al in Pop III pair-instability supernovae. It also plays a non-negligible role in the production of weak s-process elements, as well as in the production of the important galactic γ-ray emitter ⁶⁰Fe.
Terada, Takatoshi; Ohtsubo, Toshiro; Iwao, Yasunori; Noguchi, Shuji; Itai, Shigeru
2017-01-01
The purpose of this study was to develop a deeper understanding of the key physicochemical parameters involved in the release profiles of microsphere-encapsulated agrochemicals at different temperatures. Microspheres consisting of different polyurethanes (PUs) were prepared using our previously reported solventless microencapsulation technique. Notably, these microspheres exhibited considerable differences in their thermodynamic characteristics, including their glass transition temperature (T_g), extrapolated onset temperature (T_o) and extrapolated end temperature (T_e). At test temperatures below the T_o of the PU, only 5-10% of the agrochemical was rapidly released from the microspheres within 1 d, and none was released thereafter. However, at test temperatures above the T_o of the PU, the rate of agrochemical release gradually increased with increasing temperatures, and the rate of release from the microspheres was dependent on the composition of the PU. Taken together, these results show that the release profiles of the microspheres were dependent on their thermodynamic characteristics and changes in their PU composition.
NASA Astrophysics Data System (ADS)
Rolley, Matthew H.; Sweet, Tracy K. N.; Min, Gao
2017-09-01
This work demonstrates a new technique that capitalizes on the inherent flexibility of the thermoelectric module to provide a multifunctional platform, and exhibits a unique advantage only available within CPV-TE hybrid architectures. This system is the first to use the thermoelectric itself for hot-side temperature feedback to a PID control system, requiring no additional thermocouple or thermistor to be attached to the cell, thereby eliminating shading and complex mechanical mounting designs. Temperature measurement accuracy and thermoelectric active cooling functionality are preserved. Dynamic per-cell condition monitoring and protection is feasible using this technique, with direct cell-specific temperature measurement accurate to 1 °C demonstrated over the entire experimental range. The extrapolation accuracy potential of the technique was also evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn; CAEP Software Center for High Performance Numerical Simulation, Beijing
2016-06-28
Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn-Sham density functional theory. Contrary to the intuition that a higher order of extrapolation possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or more strict SCF convergence criteria. Through example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when the MD time step or the SCF convergence criterion is varied. Therefore, we suggest that BOMD simulation packages should open the user interface and provide more choices for the extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, so that implementing either of them does not lead to an essential difference in extrapolation accuracy.
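The order dependence can be reproduced qualitatively with a toy signal standing in for wavefunction coefficients: polynomial extrapolation of order k predicts the next MD step from the previous k+1 snapshots, and beyond some order the prediction degrades once noise dominates. The trajectory below is synthetic, not BOMD output.

```python
import numpy as np
from math import comb

def extrapolate(history, k):
    """Order-k polynomial prediction of the next snapshot from the last k+1.

    Coefficients are the standard forward-extrapolation weights
    (-1)**(j+1) * C(k+1, j), applied most-recent-first.
    """
    coeffs = [(-1) ** (j + 1) * comb(k + 1, j) for j in range(1, k + 2)]
    return sum(c * h for c, h in zip(coeffs, history[::-1]))

rng = np.random.default_rng(4)
t = np.arange(12) * 0.5
traj = np.sin(0.3 * t) + 0.002 * rng.normal(size=t.size)  # noisy smooth signal

truth = np.sin(0.3 * (t[-1] + 0.5))
for k in (1, 2, 3, 4, 5):
    pred = extrapolate(list(traj), k)
    print(f"order {k}: |error| = {abs(pred - truth):.2e}")
```

With the small noise term included, the error typically reaches a minimum at an intermediate order, mirroring the optimal-order behavior the paper analyzes.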
Selishchev, S V
2004-01-01
This paper describes the results of integrating the fundamental and applied biomedical engineering research conducted at the Chair of Biomedical Systems, Moscow State Institute of Electronic Engineering (Technical University, MSIEE). The chair is guided in its research activity by the traditions of higher education in Russia in the field of biomedical electronics and biomedical engineering. Its activities are based on the extrapolation of methods from electronics, computer technology, physics, biology and medicine, with due respect paid to the requirements of practical medicine and to topical issues of research and design.
Fechter, Dominik; Storch, Ilse
2014-01-01
Due to legislative protection, many species, including large carnivores, are currently recolonizing Europe. To address the impending human-wildlife conflicts in advance, predictive habitat models can be used to determine potentially suitable habitat and areas likely to be recolonized. As field data are often limited, quantitative rule-based models or the extrapolation of results from other studies are often the techniques of choice. Using the wolf (Canis lupus) in Germany as a model for habitat generalists, we developed a habitat model based on the location and extent of twelve existing wolf home ranges in Eastern Germany, current knowledge on wolf biology, different habitat modeling techniques, and various input data to analyze ten different input parameter sets and address the following questions: (1) How do a priori assumptions and different input data or habitat modeling techniques affect the abundance and distribution of potentially suitable wolf habitat and the number of wolf packs in Germany? (2) In a synthesis across input parameter sets, what areas are predicted to be most suitable? (3) Are existing wolf pack home ranges in Eastern Germany consistent with current knowledge on wolf biology and habitat relationships? Our results indicate that, depending on which assumptions about habitat relationships are applied in the model and which modeling techniques are chosen, the estimated amount of potentially suitable habitat varies greatly. Depending on a priori assumptions, Germany could accommodate between 154 and 1769 wolf packs. The locations of the existing wolf pack home ranges in Eastern Germany indicate that wolves are able to adapt to areas densely populated by humans, but are limited to areas with low road densities. Our analysis suggests that predictive habitat maps in general should be interpreted with caution, and illustrates the risk for habitat modelers of concentrating on only one selection of habitat factors or modeling technique. PMID:25029506
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of liquid draining inside a tank is readily studied using numerical simulation. However, numerical simulation is expensive when the draining involves multiple phases. Since an accurate numerical simulation can be obtained only if errors are properly estimated, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox that is well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%. The Richardson-extrapolated value lies within the range implied by the obtained GCI.
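For reference, the GCI computation used in such studies reduces to a few lines (Roache's formulation); the three grid solutions below are illustrative numbers, not results from this paper.

```python
import numpy as np

# Solutions on three systematically refined grids with constant refinement
# ratio r: f1 = fine, f2 = medium, f3 = coarse (illustrative values).
f1, f2, f3 = 10.12, 10.18, 10.40
r, Fs = 2.0, 1.25                 # refinement ratio and safety factor

p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)   # observed order of accuracy
f_exact = f1 + (f1 - f2) / (r**p - 1)           # Richardson extrapolation
gci_fine = Fs * abs((f1 - f2) / f1) / (r**p - 1) * 100   # fine-grid GCI, %

print(f"p = {p:.2f}, extrapolated value = {f_exact:.3f}, GCI = {gci_fine:.2f}%")
```

Monotonic convergence corresponds to the three solutions approaching the extrapolated value from one side, as in the ordering above.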
Sargent, Daniel J.; Buyse, Marc; Burzykowski, Tomasz
2011-01-01
Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated by successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of a true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. PMID:21838732
Development of MCAERO wing design panel method with interactive graphics module
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
NASA Astrophysics Data System (ADS)
Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe
2018-01-01
In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to strike a balance between computational load and the reliability of the estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the better the prediction, sometimes a "slight" worsening of the estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make conjectures about how uncertainty increases as the extrapolation time of the estimation extends. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, southern Italy.
Shuman, Nicholas S; Miller, Thomas M; Viggiano, Albert A; Troe, Jürgen
2013-05-28
Thermal rate constants and product branching fractions for electron attachment to CF₃Br and the CF₃ radical have been measured over the temperature range 300-890 K, the upper limit being restricted by thermal decomposition of CF₃Br. Both measurements were made in Flowing Afterglow Langmuir Probe apparatuses; the CF₃Br measurement was made using standard techniques, and the CF₃ measurement using the Variable Electron and Neutral Density Attachment Mass Spectrometry technique. Attachment to CF₃Br proceeds exclusively by the dissociative channel yielding Br⁻, with a rate constant increasing from 1.1 × 10⁻⁸ cm³ s⁻¹ at 300 K to 5.3 × 10⁻⁸ cm³ s⁻¹ at 890 K, somewhat lower than previous data at temperatures up to 777 K. CF₃ attachment proceeds through competition between associative attachment yielding CF₃⁻ and dissociative attachment yielding F⁻. Prior data up to 600 K showed the rate constant monotonically increasing, with the partial rate constant of the dissociative channel following Arrhenius behavior; however, extrapolation of the data using a recently proposed kinetic modeling approach predicted the rate constant to turn over at higher temperatures, despite being only ~5% of the collision rate. The current data agree well with the previous kinetic modeling extrapolation, providing a demonstration of the predictive capabilities of the approach.
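A naive Arrhenius treatment of such data is a two-parameter fit of ln k against 1/T; the sketch below shows it using the two measured endpoints plus assumed intermediate values. Note the abstract's point: the kinetic-modeling extrapolation predicted a turnover that a plain Arrhenius extrapolation like this one would miss.

```python
import numpy as np

R = 8.314462          # gas constant, J mol^-1 K^-1
T = np.array([300.0, 500.0, 700.0, 890.0])            # temperatures (K)
k = np.array([1.1e-8, 2.4e-8, 4.0e-8, 5.3e-8])        # cm^3 s^-1; the two
                                                      # middle values are assumed

# Fit ln k = ln A - Ea / (R T) by linear regression in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R                                       # activation energy, J/mol
A = np.exp(intercept)

print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.2e} cm^3 s^-1")
print(f"naive k(1000 K) = {A * np.exp(-Ea / (R * 1000)):.2e} cm^3 s^-1")
```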
A citizen science based survey method for estimating the density of urban carnivores.
Scott, Dawn M; Baker, Rowenna; Charman, Naomi; Karlsson, Heidi; Yarnell, Richard W; Mill, Aileen C; Smith, Graham C; Tolhurst, Bryony A
2018-01-01
Globally there are many examples of synanthropic carnivores exploiting growth in urbanisation. As carnivores can come into conflict with humans and are potential vectors of zoonotic disease, assessing densities in suburban areas and identifying factors that influence them are necessary to aid management and mitigation. However, fragmented, privately owned land restricts the use of conventional carnivore surveying techniques in these areas, requiring development of novel methods. We present a method that combines questionnaire distribution to residents with field surveys and GIS, to determine relative density of two urban carnivores in England, Great Britain. We determined the density of: red fox (Vulpes vulpes) social groups in 14, approximately 1 km² suburban areas in 8 different towns and cities; and Eurasian badger (Meles meles) social groups in three suburban areas of one city. Average relative fox group density (FGD) was 3.72 km⁻², which was double the estimates for cities with resident foxes in the 1980s. Density was comparable to an alternative estimate derived from trapping and GPS-tracking, indicating the validity of the method. However, FGD did not correlate with a national dataset based on fox sightings, indicating unreliability of the national data to determine actual densities or to extrapolate a national population estimate. Using species-specific clustering units that reflect social organisation, the method was additionally applied to suburban badgers to derive relative badger group density (BGD) for one city (Brighton, 2.41 km⁻²). We demonstrate that citizen science approaches can effectively obtain data to assess suburban carnivore density, however publicly derived national data sets need to be locally validated before extrapolations can be undertaken. The method we present for assessing densities of foxes and badgers in British towns and cities is also adaptable to other urban carnivores elsewhere. However this transferability is contingent on species traits meeting particular criteria, and on resident responsiveness.
Lang, M; Vain, A; Bunce, R G H; Jongman, R H G; Raet, J; Sepp, K; Kuusemets, V; Kikas, T; Liba, N
2015-03-01
Habitat surveillance and subsequent monitoring at a national level is usually carried out by recording data from in situ sample sites located according to predefined strata. This paper describes the application of remote sensing to the extension of such field data recorded in 1-km squares to adjacent squares, in order to increase sample number without further field visits. Habitats were mapped in eight central squares in northeast Estonia in 2010 using a standardized recording procedure. Around one of the squares, a special study site was established which consisted of the central square and eight surrounding squares. A Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was used for correlation with in situ data. An airborne light detection and ranging (lidar) vegetation height map was also included in the classification. A series of tests were carried out by including the lidar data and contrasting analytical techniques, which are described in detail in the paper. Training accuracy in the central square varied from 75 to 100%. In the extrapolation procedure to the surrounding squares, accuracy varied from 53.1 to 63.1%, which improved by 10% with the inclusion of lidar data. The reasons for this relatively low classification accuracy were mainly inherent variability in the spectral signatures of habitats but also differences between the dates of imagery acquisition and field sampling. Improvements could therefore be made by better synchronization of the field survey and image acquisition as well as by dividing general habitat categories (GHCs) into units which are more likely to have similar spectral signatures. However, the increase in the number of sample kilometre squares compensates for the loss of accuracy in the measurements of individual squares. The methodology can be applied in other studies as the procedures used are readily available.
Interspecies extrapolation encompasses two related but distinct topic areas that are germane to quantitative extrapolation and hence computational toxicology-dose scaling and parameter scaling. Dose scaling is the process of converting a dose determined in an experimental animal ...
Superresolution SAR Imaging Algorithm Based on MVM and Weighted Norm Extrapolation
NASA Astrophysics Data System (ADS)
Zhang, P.; Chen, Q.; Li, Z.; Tang, Z.; Liu, J.; Zhao, L.
2013-08-01
In this paper, we present an extrapolation approach for improving synthetic aperture radar (SAR) resolution that uses a minimum weighted norm constraint and minimum variance spectrum estimation. The minimum variance method (MVM) is a robust, high-resolution spectrum estimator. Based on the theory of SAR imaging, analysis of the SAR imagery signal model shows that data extrapolation methods are feasible for improving the resolution of SAR images. The method is used to extrapolate the effective bandwidth in the phase-history domain, and better results are obtained compared with the adaptive weighted norm extrapolation (AWNE) method and the traditional imaging method, using both simulated data and actual measured data.
NASA Astrophysics Data System (ADS)
Krieger, Ulrich K.; Siegrist, Franziska; Marcolli, Claudia; Emanuelsson, Eva U.; Gøbel, Freya M.; Bilde, Merete; Marsh, Aleksandra; Reid, Jonathan P.; Huisman, Andrew J.; Riipinen, Ilona; Hyttinen, Noora; Myllys, Nanna; Kurtén, Theo; Bannan, Thomas; Percival, Carl J.; Topping, David
2018-01-01
To predict atmospheric partitioning of organic compounds between gas and aerosol particle phase based on explicit models for gas phase chemistry, saturation vapor pressures of the compounds need to be estimated. Estimation methods based on functional group contributions require training sets of compounds with well-established saturation vapor pressures. However, vapor pressures of semivolatile and low-volatility organic molecules at atmospheric temperatures reported in the literature often differ by several orders of magnitude between measurement techniques. These discrepancies exceed the stated uncertainty of each technique, which is generally reported to be smaller than a factor of 2. At present, there is no general reference technique for measuring saturation vapor pressures of atmospherically relevant compounds with low vapor pressures at atmospheric temperatures. To address this problem, we measured vapor pressures with different techniques over a wide temperature range for intercomparison and to establish a reliable training set. We determined saturation vapor pressures for the homologous series of polyethylene glycols (H-(O-CH2-CH2)n-OH) for n = 3 to n = 8, ranging in vapor pressure at 298 K from 10^-7 to 5×10^-2 Pa, and compare them with quantum chemistry calculations. Such a homologous series provides a reference set that covers several orders of magnitude in saturation vapor pressure, allowing a critical assessment of the lower limits of detection of vapor pressures for the different techniques as well as permitting the identification of potential sources of systematic error. Also, internal consistency within the series allows outlying data to be rejected more easily. Most of the measured vapor pressures agreed within the stated uncertainty range. Deviations mostly occurred for vapor pressure values approaching the lower detection limit of a technique. The good agreement between the measurement techniques (some of which are sensitive to the mass accommodation coefficient and some not) suggests that the mass accommodation coefficients of the studied compounds are close to unity. The quantum chemistry calculations were about 1 order of magnitude higher than the measurements. We find that extrapolation of vapor pressures from elevated to atmospheric temperatures is permissible over a range of about 100 K for these compounds, suggesting that measurements are best performed at temperatures yielding the highest-accuracy data, allowing subsequent extrapolation to atmospheric temperatures.
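The roughly 100 K extrapolation range noted above rests on the near-linearity of ln p versus 1/T (the integrated Clausius-Clapeyron relation). A minimal sketch of that kind of extrapolation, assuming a constant vaporization enthalpy over the fitted range; the temperatures and pressures below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical elevated-temperature measurements for one compound.
T = np.array([330.0, 340.0, 350.0, 360.0])        # K (assumed values)
p = np.array([2.1e-3, 5.6e-3, 1.4e-2, 3.2e-2])    # Pa (assumed values)

# Integrated Clausius-Clapeyron form: ln(p) = A - B/T, with B = dH_vap / R.
# A linear least-squares fit in 1/T gives the slope (-B) and intercept (A).
slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)

def p_sat(T_kelvin):
    """Saturation vapor pressure (Pa) extrapolated to lower temperatures."""
    return np.exp(intercept + slope / T_kelvin)

print(p_sat(298.15))  # extrapolated to an atmospheric temperature
```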
Mining nonterrestrial resources: Information needs and research topics
NASA Technical Reports Server (NTRS)
Daemen, Jaak J. K.
1992-01-01
An outline of topics we need to understand better in order to apply mining technology to a nonterrestrial environment is presented. The proposed list is not intended to be complete. It aims to identify representative topics that suggest productive research. Such research will reduce the uncertainties associated with extrapolating from conventional earthbound practice to nonterrestrial applications. One objective is to propose projects that should put future discussions of nonterrestrial mining on a firmer, less speculative basis.
Inverting dedevelopment: geometric singularity theory in embryology
NASA Astrophysics Data System (ADS)
Bookstein, Fred L.; Smith, Bradley R.
2000-10-01
The diffeomorphism model so useful in the biomathematics of normal morphological variability and disease is inappropriate for applications in embryogenesis, where whole coordinate patches are created out of single points. For this application we need a suitable algebra for the creation of something from nothing in a carefully organized geometry: a formalism for parameterizing discrete nondifferentiabilities of invertible functions on R^k, k > 1. One easy way to begin is via the inverse of the development map - call it the dedevelopment map, the deformation backwards in time. Extrapolated, this map will inevitably have singularities at which its derivative is zero. When the dedevelopment map is inverted to face forward in time, the singularities become appropriately isolated infinities of derivative. We have recently introduced growth visualizations via extrapolations to the isolated singularities at which only one directional derivative is zero. Maps inverse to these create new coordinate patches directionally rather than radially. The most generic singularity that suits this purpose is the crease f(x, y) = (x, x^2 y + y^3), which has already been applied in morphometrics for the description of focal morphogenetic phenomena. We apply it to embryogenesis in the form of its analytic inverse, and demonstrate its power using a priceless new data set of mouse embryos imaged in 3D by micro-MR with voxels smaller than 100 μm^3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, Samuel M., E-mail: samuel.greene@chem.ox.ac.uk; Shan, Xiao, E-mail: xiao.shan@chem.ox.ac.uk; Clary, David C., E-mail: david.clary@chem.ox.ac.uk
Semiclassical Transition State Theory (SCTST), a method for calculating rate constants of chemical reactions, offers gains in computational efficiency relative to more accurate quantum scattering methods. In full-dimensional (FD) SCTST, reaction probabilities are calculated from third and fourth potential derivatives along all vibrational degrees of freedom. However, the computational cost of FD SCTST scales unfavorably with system size, which prohibits its application to larger systems. In this study, the accuracy and efficiency of 1-D SCTST, in which only third and fourth derivatives along the reaction mode are used, are investigated in comparison to those of FD SCTST. Potential derivatives are obtained from numerical ab initio Hessian matrix calculations at the MP2/cc-pVTZ level of theory, and Richardson extrapolation is applied to improve the accuracy of these derivatives. Reaction barriers are calculated at the CCSD(T)/cc-pVTZ level. Results from FD SCTST agree with results from previous theoretical and experimental studies when Richardson extrapolation is applied. Results from our implementation of 1-D SCTST, which uses only 4 single-point MP2/cc-pVTZ energy calculations in addition to those for conventional TST, agree with FD results to within a factor of 5 at 250 K. This degree of agreement and the efficiency of the 1-D method suggest its potential as a means of approximating rate constants for systems too large for existing quantum scattering methods.
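Richardson extrapolation, as applied above to numerical derivatives, combines finite-difference estimates at successively halved step sizes to cancel leading error terms. A minimal generic sketch (not the authors' ab initio Hessian workflow; the function, step size, and table depth are arbitrary choices):

```python
import numpy as np

def central_diff(f, x, h):
    """Second-order central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson_derivative(f, x, h=0.1, levels=4):
    """Richardson table: column j cancels the O(h^(2j)) error term."""
    R = np.zeros((levels, levels))
    for i in range(levels):
        R[i, 0] = central_diff(f, x, h / 2**i)
    for j in range(1, levels):
        for i in range(j, levels):
            R[i, j] = R[i, j-1] + (R[i, j-1] - R[i-1, j-1]) / (4**j - 1)
    return R[-1, -1]

print(richardson_derivative(np.sin, 1.0), np.cos(1.0))  # close agreement
```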
The Extrapolation of Elementary Sequences
NASA Technical Reports Server (NTRS)
Laird, Philip; Saul, Ronald
1992-01-01
We study sequence extrapolation as a stream-learning problem. Input examples are a stream of data elements of the same type (integers, strings, etc.), and the problem is to construct a hypothesis that both explains the observed sequence of examples and extrapolates the rest of the stream. A primary objective -- and one that distinguishes this work from previous extrapolation algorithms -- is that the same algorithm be able to extrapolate sequences over a variety of different types, including integers, strings, and trees. We define a generous family of constructive data types, and define as our learning bias a stream language called elementary stream descriptions. We then give an algorithm that extrapolates elementary descriptions over constructive datatypes and prove that it learns correctly. For freely-generated types, we prove a polynomial time bound on descriptions of bounded complexity. An especially interesting feature of this work is the ability to provide quantitative measures of confidence in competing hypotheses, using a Bayesian model of prediction.
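The elementary stream descriptions above handle integers, strings, and trees; as a vastly simplified illustration of sequence extrapolation, the sketch below extends an integer sequence under the (much stronger) assumption that it is generated by a polynomial, using repeated finite differences:

```python
def extrapolate_polynomial(seq, n_more=3):
    """Extend a sequence assumed to be polynomial in its index by
    building a difference table and summing it back up."""
    diffs = [list(seq)]
    while len(diffs[-1]) > 1 and any(diffs[-1]):
        last = diffs[-1]
        diffs.append([b - a for a, b in zip(last, last[1:])])
    out = list(seq)
    for _ in range(n_more):
        for level in range(len(diffs) - 2, -1, -1):
            diffs[level].append(diffs[level][-1] + diffs[level + 1][-1])
        out.append(diffs[0][-1])
    return out

print(extrapolate_polynomial([1, 4, 9, 16, 25]))  # -> ... 36, 49, 64
```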
Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?
NASA Astrophysics Data System (ADS)
Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.
2016-02-01
It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.
Pedotransfer functions in Earth system science: challenges and perspectives
NASA Astrophysics Data System (ADS)
Van Looy, K.; Minasny, B.; Nemes, A.; Verhoef, A.; Weihermueller, L.; Vereecken, H.
2017-12-01
We make a case for a new generation of pedotransfer functions (PTFs) that is currently being developed in the different disciplines of Earth system science, offering strong perspectives for the improvement of integrated process-based models, from local to global scale applications. PTFs are simple to complex knowledge rules that relate available soil information to the soil properties and variables needed to parameterize soil processes. To meet the methodological challenges for a successful application in Earth system modeling, we highlight how PTF development needs to go hand in hand with suitable extrapolation and upscaling techniques such that the PTFs correctly capture the spatial heterogeneity of soils. The most actively pursued recent developments are related to parameterizations of solute transport, heat exchange, soil respiration and organic carbon content, root density and vegetation water uptake. We present an outlook and stepwise approach to the development of a comprehensive set of PTFs that can be applied throughout a wide range of disciplines of Earth system science, with emphasis on land surface models. Novel sensing techniques and soil information availability provide a true breakthrough for this, yet further improvements are necessary in three domains: 1) determining unknown relationships and dealing with uncertainty in Earth system modeling; 2) spatially deploying this knowledge with PTF validation at regional to global scales; and 3) integrating and linking the complex model parameterizations (coupled parameterization). We will show that integration is an achievable goal.
Regionalisation of parameters of a large-scale water quality model in Lithuania using PAIC-SWAT
NASA Astrophysics Data System (ADS)
Zarrineh, Nina; van Griensven, Ann; Sennikovs, Juris; Bekere, Liga; Plunge, Svajunas
2015-04-01
To comply with the EU Water Framework Directive, all water bodies need to achieve good ecological status. To reach these goals, the Environmental Protection Agency (AAA) has to elaborate river basin district management plans and programmes of measures for all catchments in Lithuania. For this purpose, a Soil and Water Assessment Tool (SWAT) model was set up for all Lithuanian catchments using the most recent version of SWAT2012 rev627, implemented and embedded in a Python workflow by the Center of Processes Analysis and Research (PAIC). The model was calibrated and evaluated using all monitoring data of river discharge and nitrogen and phosphorus concentrations and loads. A regionalisation strategy was set up by identifying 13 hydrological regions according to runoff formation and hydrological conditions. In each region, a representative catchment was selected and calibrated using a combination of manual and automated calibration techniques. After final parameterization and fulfilment of the calibration and validation evaluation criteria, the same parameter sets were extrapolated to other catchments within the same hydrological region. A multi-variable calibration/validation (cal/val) strategy was implemented for the following variables: river flow and in-stream NO3, Total Nitrogen, PO4 and Total Phosphorus concentrations. The criteria used for calibration, validation and extrapolation are Nash-Sutcliffe Efficiency (NSE) for flow, R-squared for water quality variables, and PBIAS (percentage bias) for all variables. For the hydrological calibration, NSE values greater than 0.5 should be achieved, while for validation and extrapolation the thresholds are 0.4 and 0.3, respectively. PBIAS errors have to be less than 20% for calibration, and less than 25% and 30% for validation and extrapolation, respectively. In water quality calibration, R-squared should reach 0.5 for calibration, and 0.4 and 0.3 for validation and extrapolation, respectively, for nitrogen variables. In addition, the PBIAS error should be less than 40% for calibration, and less than 70% for validation and extrapolation, for all mentioned water quality variables. For the flow calibration, daily discharge data for 62 stations were provided for the period 1997-2012. For more than 500 stations, water quality data were provided, and 135 data-rich stations were pre-processed into a database containing all observations from 1997-2012. Finally, by implementing this regionalisation strategy, the model could satisfactorily predict the selected variables: in the hydrological part more than 90% of stations fulfilled the criteria, and in the water quality part more than 95% of stations fulfilled the criteria. Keywords: Water Quality Modelling, Regionalisation, Parameterization, Nitrogen and Phosphorus Prediction, Calibration, PAIC-SWAT.
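The thresholds quoted above use three standard goodness-of-fit statistics. A minimal sketch of how a station's simulated series might be screened against them (data values are hypothetical; note that PBIAS sign conventions vary between studies):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; below 0 is worse than
    simply predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percentage bias of the simulation relative to observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def r_squared(obs, sim):
    """Coefficient of determination from the Pearson correlation."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Hypothetical daily flows: observed vs simulated.
obs = np.array([1.2, 3.4, 2.2, 5.0, 4.1, 2.8])
sim = np.array([1.0, 3.0, 2.5, 4.6, 4.4, 2.6])
print(nse(obs, sim) > 0.5, abs(pbias(obs, sim)) < 20.0,
      r_squared(obs, sim) > 0.5)  # calibration thresholds from the text
```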
Molecular environmental geochemistry
NASA Astrophysics Data System (ADS)
O'Day, Peggy A.
1999-05-01
The chemistry, mobility, and bioavailability of contaminant species in the natural environment are controlled by reactions that occur in and among solid, aqueous, and gas phases. These reactions are varied and complex, involving changes in chemical form and mass transfer among inorganic, organic, and biochemical species. The field of molecular environmental geochemistry seeks to apply spectroscopic and microscopic probes to the mechanistic understanding of environmentally relevant chemical processes, particularly those involving contaminants and Earth materials. In general, empirical geochemical models have been shown to lack uniqueness and adequate predictive capability, even in relatively simple systems. Molecular geochemical tools, when coupled with macroscopic measurements, can provide the level of chemical detail required for the credible extrapolation of contaminant reactivity and bioavailability over ranges of temperature, pressure, and composition. This review focuses on recent advances in the understanding of molecular chemistry and reaction mechanisms at mineral surfaces and mineral-fluid interfaces spurred by the application of new spectroscopies and microscopies. These methods, such as synchrotron X-ray absorption and scattering techniques, vibrational and resonance spectroscopies, and scanning probe microscopies, provide direct chemical information that can elucidate molecular mechanisms, including element speciation, ligand coordination and oxidation state, structural arrangement and crystallinity on different scales, and physical morphology and topography of surfaces. Nonvacuum techniques that allow examination of reactions in situ (i.e., with water or fluids present) and in real time provide direct links between molecular structure and reactivity and measurements of kinetic rates or thermodynamic properties. Applications of these diverse probes to laboratory model systems have provided fundamental insight into inorganic and organic reactions at mineral surfaces and mineral-water interfaces. A review of recent studies employing molecular characterizations of soils, sediments, and biological samples from contaminated sites exemplifies the utility and benefits, as well as the challenge, of applying molecular probes to complicated natural materials. New techniques, technological advances, and the crossover of methods from other disciplines such as biochemistry and materials science promise better examination of environmental chemical processes in real time and at higher resolution, and will further the integration of molecular information into field-scale chemical and hydrologic models.
Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.
2010-01-01
In this paper we show how the techniques of image deconvolution can increase the ability of image sensors, for example CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and track dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor. PMID:22294896
Life-Cycle Cost/Benefit Assessment of Expedite Departure Path (EDP)
NASA Technical Reports Server (NTRS)
Wang, Jianzhong Jay; Chang, Paul; Datta, Koushik
2005-01-01
This report presents a life-cycle cost/benefit assessment (LCCBA) of Expedite Departure Path (EDP), an air traffic control Decision Support Tool (DST) currently under development at NASA. This assessment is an update of a previous study performed by bd Systems, Inc. (bd) during FY01, with the following revisions: the life-cycle cost assessment methodology developed by bd for the previous study was refined and calibrated using Free Flight Phase 1 (FFP1) cost information for Traffic Management Advisor (TMA, or TMA-SC in the FAA's terminology). Adjustments were also made to the site selection and deployment scheduling methodology to include airspace complexity as a factor. This technique was also applied to the benefit extrapolation methodology to better estimate potential benefits for other years and at other sites. This study employed a new benefit estimating methodology because bd's previous single-year potential benefit assessment of EDP used unrealistic assumptions that resulted in optimistic estimates. This methodology uses an air traffic simulation approach to reasonably predict the impacts from the implementation of EDP. The results of the cost and benefit analyses were then integrated into a life-cycle cost/benefit assessment.
Reliability enhancement of Navier-Stokes codes through convergence enhancement
NASA Technical Reports Server (NTRS)
Choi, K.-Y.; Dulikravich, G. S.
1993-01-01
Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of the optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent depending on the test case and grid used. Recently, we have developed and tested a new method, termed Sensitivity Based DMR or SBMR method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
Estimation of shelf life of natural rubber latex exam-gloves based on creep behavior.
Das, Srilekha Sarkar; Schroeder, Leroy W
2008-05-01
Samples of full-length glove-fingers cut from chlorinated and nonchlorinated latex medical examination gloves were aged for various times at several fixed temperatures and 25% relative humidity. Creep testing was performed using an applied stress of 50 kPa on rectangular specimens (10 mm x 8 mm) of aged and unaged glove fingers as an assessment of glove loosening during usage. Variations in the creep curves obtained were compared to determine the threshold aging time at which the amount of creep became larger than the initial value. These times were then used in various models to estimate shelf lives at lower temperatures. Several different methods of extrapolation were used for shelf-life estimation and comparison. Neither Q-factor nor Arrhenius activation energies, as calculated from 10 degrees C interval shift factors, were constant over the temperature range; in fact, both decreased at lower temperatures. Values of the Q-factor and activation energies predicted up to 5 years of shelf life. Predictions are more sensitive to the value of the activation energy as the storage temperature departs from the experimental aging data. Averaging techniques that predict an average activation energy gave the longest shelf-life estimates because the curvature is reduced. Copyright 2007 Wiley Periodicals, Inc.
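A minimal sketch of the Arrhenius-type shelf-life extrapolation described above, assuming a single constant activation energy (an assumption the paper shows breaks down at lower temperatures); all aging times and temperatures are invented:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical threshold aging times (days) at accelerated temperatures (K).
T = np.array([323.15, 333.15, 343.15])
t = np.array([180.0, 60.0, 22.0])

# Arrhenius model: ln(t) = ln(t0) + Ea/(R*T); the slope of ln(t) vs 1/T
# gives Ea, here assumed constant across the whole temperature range.
slope, intercept = np.polyfit(1.0 / T, np.log(t), 1)
Ea = slope * R  # J/mol

# Extrapolate the threshold time to a 25 C storage temperature.
t_shelf = np.exp(intercept + slope / 298.15)
print(f"Ea = {Ea / 1000:.0f} kJ/mol, predicted shelf life = {t_shelf:.0f} days")
```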
Vulnerability of CMOS image sensors in Megajoule Class Laser harsh environment.
Goiffon, V; Girard, S; Chabane, A; Paillet, P; Magnan, P; Cervantes, P; Martin-Gonthier, P; Baggio, J; Estribeau, M; Bourgade, J-L; Darbon, S; Rousseau, A; Glebov, V Yu; Pien, G; Sangster, T C
2012-08-27
CMOS image sensors (CIS) are promising candidates as part of optical imagers for the plasma diagnostics devoted to the study of fusion by inertial confinement. However, the harsh radiative environment of Megajoule Class Lasers threatens the performance of these optical sensors. In this paper, the vulnerability of CIS to the transient and mixed pulsed radiation environment associated with such facilities is investigated during an experiment at the OMEGA facility at the Laboratory for Laser Energetics (LLE), Rochester, NY, USA. The transient and permanent effects of the 14 MeV neutron pulse on CIS are presented. The behavior of the tested CIS shows that active pixel sensors (APS) exhibit better hardness in this harsh environment than a CCD. A first-order extrapolation of the reported results to the higher level of radiation expected for Megajoule Class Laser facilities (Laser Megajoule in France or National Ignition Facility in the USA) shows that temporarily saturated pixels due to transient neutron-induced single event effects will be the major issue for the development of radiation-tolerant plasma diagnostic instruments, whereas the permanent degradation of the CIS related to displacement damage or total ionizing dose effects could be reduced by applying well-known mitigation techniques.
Compressive sensing scalp EEG signals: implementations and practical performance.
Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther
2012-11-01
Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI; and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.
SIG-VISA: Signal-based Vertically Integrated Seismic Monitoring
NASA Astrophysics Data System (ADS)
Moore, D.; Mayeda, K. M.; Myers, S. C.; Russell, S.
2013-12-01
Traditional seismic monitoring systems rely on discrete detections produced by station processing software; however, while such detections may constitute a useful summary of station activity, they discard large amounts of information present in the original recorded signal. We present SIG-VISA (Signal-based Vertically Integrated Seismic Analysis), a system for seismic monitoring through Bayesian inference on seismic signals. By directly modeling the recorded signal, our approach incorporates additional information unavailable to detection-based methods, enabling higher sensitivity and more accurate localization using techniques such as waveform matching. SIG-VISA's Bayesian forward model of seismic signal envelopes includes physically derived models of travel times and source characteristics as well as Gaussian process (kriging) statistical models of signal properties that combine interpolation of historical data with extrapolation of learned physical trends. Applying Bayesian inference, we evaluate the model on earthquakes as well as the 2009 DPRK test event, demonstrating a waveform matching effect as part of the probabilistic inference, along with results on event localization and sensitivity. In particular, we demonstrate increased sensitivity from signal-based modeling, in which the SIG-VISA signal model finds statistical evidence for arrivals even at stations for which the IMS station processing failed to register any detection.
NASA Astrophysics Data System (ADS)
Bergese, P.; Bontempi, E.; Depero, L. E.
2006-10-01
X-ray reflectivity (XRR) is a non-destructive, accurate and fast technique for evaluating film density. However, sample-goniometer alignment is a critical experimental factor and the overriding error source in XRR density determination. With commercial single-wavelength X-ray reflectometers, alignment is difficult to control and strongly depends on the operator. In the present work, the contribution of misalignment to the density evaluation error is discussed, and a novel procedure (named the XRR-density evaluation or XRR-DE method) to minimize the problem is presented. The method makes it possible to bypass the alignment step through the extrapolation of the correct density value from appropriate non-specular XRR data sets. This procedure is operator independent and suitable for commercial single-wavelength X-ray reflectometers. To test the XRR-DE method, single crystals of TiO2 and SrTiO3 were used. In both cases the determined densities differed from the nominal ones by less than 5.5%. Thus, the XRR-DE method can be successfully applied to evaluate the density of thin films for which only optical reflectivity is currently used. The advantage is that this method can be considered thickness independent.
On long-term periodicities in the sunspot record
NASA Technical Reports Server (NTRS)
Wilson, R. M.
1984-01-01
Sunspot records have been systematically maintained since about 1850, with the knowledge that an average 11-year period exists. Thus, the sunspot record of highest quality, and considered to be the most reliable, is that of cycle eight through the present. On the basis of cycles 8 through 20, various combinations of sine curves were used to approximate the observed R_MAX values (where R_MAX is the smoothed sunspot number at cycle maximum). It is found that a three-component sinusoidal function, having an 11-cycle and a 2-cycle variation on a 90-cycle periodicity, yields computed R_MAX values which fit, reasonably well, the observed R_MAX values for the modern sunspot cycles. Extrapolation of the empirical function forward in time allows for the projection of values of R_MAX for cycles 21 and 22. For cycle 21, the function projects a value of 157.3, very close to the actually observed value of 164.5. For cycle 22, the function projects a value of about 107. Linear regressions applied to cycle 22 indicate a long-period cycle (cycle duration 132 months). An extensive bibliography on techniques used to estimate the time-dependent behavior of sunspot cycles is provided.
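With the three periodicities fixed (11, 2, and 90 cycles), fitting the amplitudes and phases of the sinusoids reduces to linear least squares on sine/cosine pairs. A minimal sketch of such a fit and forward extrapolation; the R_MAX values below are placeholders, not the observed record:

```python
import numpy as np

PERIODS = (11.0, 2.0, 90.0)  # in units of cycle number, per the report

def design_row(n):
    """Constant term plus a sine/cosine pair for each fixed period."""
    row = [1.0]
    for P in PERIODS:
        row += [np.sin(2 * np.pi * n / P), np.cos(2 * np.pi * n / P)]
    return row

cycles = np.arange(8, 21)                       # cycles 8 through 20
r_max = np.array([147.0, 132, 98, 141, 64, 85, 78,
                  64, 105, 78, 119, 152, 111])  # placeholder values

X = np.array([design_row(n) for n in cycles])
coef, *_ = np.linalg.lstsq(X, r_max, rcond=None)

for n in (21, 22):  # extrapolate the fitted function forward
    print(n, np.array(design_row(n)) @ coef)
```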
A finite volume model simulation for the Broughton Archipelago, Canada
NASA Astrophysics Data System (ADS)
Foreman, M. G. G.; Czajko, P.; Stucchi, D. J.; Guo, M.
A finite volume circulation model is applied to the Broughton Archipelago region of British Columbia, Canada and used to simulate the three-dimensional velocity, temperature, and salinity fields that are required by a companion model for sea lice behaviour, development, and transport. The absence of a high resolution atmospheric model necessitated the installation of nine weather stations throughout the region and the development of a simple data assimilation technique that accounts for topographic steering in interpolating/extrapolating the measured winds to the entire model domain. The circulation model is run for the period of March 13-April 3, 2008 and correlation coefficients between observed and model currents, comparisons between model and observed tidal harmonics, and root mean square differences between observed and model temperatures and salinities all showed generally good agreement. The importance of wind forcing in the near-surface circulation, differences between this simulation and one computed with another model, the effects of bathymetric smoothing on channel velocities, further improvements necessary for this model to accurately simulate conditions in May and June, and the implication of near-surface current patterns at a critical location in the 'migration corridor' of wild juvenile salmon, are also discussed.
Achieving Translationally Invariant Trapped Ion Rings
NASA Astrophysics Data System (ADS)
Urban, Erik; Li, Hao-Kun; Noel, Crystal; Hemmerling, Boerge; Zhang, Xiang; Haeffner, Hartmut
2017-04-01
We present the design and implementation of a novel surface ion trap in a ring configuration. By eliminating the need for wire bonds through the use of electrical vias and using a rotationally invariant electrode configuration, we have realized a trap that is able to hold up to 20 ions in a ring geometry 45 μm in diameter, 400 μm above the trap surface. This large trapping height to ring diameter ratio allows for global addressing of the ring with both lasers and electric fields in the chamber, thereby increasing our ability to control the ring as a whole. Applying compensating electric fields, we measure very low tangential trap frequencies (less than 20 kHz) corresponding to rotational barriers down to 4 mK. This measurement is currently limited by the temperature of the ions, but extrapolation indicates the barrier can be reduced much further with more advanced cooling techniques. Finally, we show that we are able to reduce this energy barrier sufficiently that the ions overcome it, either through thermal or rotational motion, and delocalize over the full extent of the ring. This work was funded by the Keck Foundation and the NSF.
Narrowband signal detection in the SETI field test
NASA Technical Reports Server (NTRS)
Cullers, D. Kent; Deans, Stanley R.
1986-01-01
Various methods for detecting narrow-band signals are evaluated. The characteristics of synchronized and unsynchronized pulses are examined. Synchronous, square law, regular pulse, and the general form detections are discussed. The CW, single pulse, synchronous, and four pulse detections are analyzed in terms of false alarm rate and threshold relative to average noise power. Techniques for saving memory and retaining sensitivity are described. Consideration is given to nondrifting CW detection, asynchronous pulse detection, interpolative and extrapolative pulse detectors, and finite and infinite pulses.
A neural network for the prediction of performance parameters of transformer cores
NASA Astrophysics Data System (ADS)
Nussbaum, C.; Booth, T.; Ilo, A.; Pfützner, H.
1996-07-01
The paper shows that Artificial Neural Networks (ANNs) may offer new possibilities for the prediction of transformer core performance parameters, i.e. no-load power losses and excitation. Basically, this technique enables simulations with respect to different construction parameters, most notably the characteristics of corner designs, i.e. the overlap length, the air gap length, and the number of steps. However, without additional physical knowledge incorporated into the ANN, extrapolation beyond the limits of the training data restricts the predictive performance.
Interpretation of ERTS-MSS images of a Savanna area in eastern Colombia
NASA Technical Reports Server (NTRS)
Elberson, G. W. W.
1973-01-01
The application of ERTS-1 imagery for extrapolating existing soil maps into unmapped areas of the Llanos Orientales of Colombia, South America is discussed. Interpretations of ERTS-1 data were made according to conventional photointerpretation techniques. Most units delineated in the existing reconnaissance soil map at a scale of 1:250,000 could be recognized and delineated in the ERTS image. The methods of interpretation are described and the results obtained for specific areas are analyzed.
NASA Astrophysics Data System (ADS)
Khoshkholgh, Mehri Javan; Marsusi, Farah; Abolhassani, Mohammad Reza
2015-02-01
PTB polymers with thieno[3,4-b]thiophene [TT] and benzodithiophene [BDT] units have particular properties that establish them as one of the best groups of donor materials for organic solar cells. In the present work, density functional theory (DFT) is applied to investigate the optimized structure, the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), band gap and dihedral angle of PTB7 at B3LYP/6-31G(d). Two different approaches are applied to carry out these investigations: the oligomer extrapolation technique and the periodic boundary condition (PBC) method. The results obtained from the PBC-DFT method are in fair agreement with experiments. Based on these reliable outcomes, the investigation was extended to several derivatives of PTB7. In this study, sulfur is substituted by nitrogen, oxygen, silicon, phosphorus or selenium atoms in pristine PTB7. Due to the shift of the HOMO and LUMO levels, smaller band gaps are predicted to appear in some derivatives in comparison with PTB7. Maximum theoretical efficiencies, η, of the mentioned derivatives, as well as the local difference of dipole moments between the ground and excited states (Δμge), are computed. The results indicate that substitution of sulfur by nitrogen or oxygen in the BDT unit, and by silicon or phosphorus in the TT unit of pristine PTB7, leads to a higher η as well as Δμge.
Radiative Transitions in Charmonium from Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jozef Dudek; Robert Edwards; David Richards
2006-01-17
Radiative transitions between charmonium states offer an insight into the internal structure of heavy-quark bound states within QCD. We compute, for the first time within lattice QCD, the transition form-factors of various multipolarities between the lightest few charmonium states. In addition, we compute the experimentally unobservable, but physically interesting, vector form-factors of the η_c, J/ψ and χ_c0. To this end we apply an ambitious combination of lattice techniques, computing three-point functions with heavy domain wall fermions on an anisotropic lattice within the quenched approximation. With an anisotropy ξ = 3 at a_s ≈ 0.1 fm we find a reasonable gross spectrum and a hyperfine splitting of ≈90 MeV, which compares favorably with other improved actions. In general, after extrapolation of lattice data at non-zero Q^2 to the photopoint, our results agree within errors with all well-measured experimental values. Furthermore, results are compared with the expectations of simple quark models, where we find that many features are in agreement; beyond this we propose the possibility of constraining such models using our extracted values of physically unobservable quantities such as the J/ψ quadrupole moment. We conclude that our methods are successful and propose to apply them to the problem of radiative transitions involving hybrid mesons, with the eventual goal of predicting hybrid meson photoproduction rates at the GlueX experiment.
NASA Astrophysics Data System (ADS)
Rana, Narender; Zhang, Yunlin; Wall, Donald; Dirahoui, Bachir; Bailey, Todd C.
2015-03-01
Integrated circuit (IC) technology is going through multiple changes in terms of patterning techniques (multiple patterning, EUV and DSA), device architectures (FinFET, nanowire, graphene) and patterning scale (a few nanometers). These changes require tight controls on processes and measurements to achieve the required device performance, and challenge metrology and process control in terms of capability and quality. Multivariate data with complex nonlinear trends and correlations generally cannot be described well by mathematical or parametric models, but can be relatively easily learned by computing machines and used to predict or extrapolate. This paper introduces the predictive metrology approach, which has been applied to three different applications. Machine learning and predictive analytics have been used to accurately predict dimensions of EUV resist patterns down to 18 nm half-pitch by leveraging resist shrinkage patterns. These patterns could not be directly and accurately measured due to metrology tool limitations. Machine learning has also been applied to predict electrical performance early in the process pipeline for deep trench capacitance and metal line resistance. As the wafer goes through various processes, its associated cost multiplies, and it may take days to weeks to get the electrical performance readout. Predicting the electrical performance early on can be very valuable in enabling timely actionable decisions such as rework, scrap, or feeding predicted information (or information derived from it) forward or backward to improve or monitor processes. This paper provides a general overview of machine learning and advanced analytics applications in advanced semiconductor development and manufacturing.
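As a rough illustration of the predictive approach, a regression model trained on early inline measurements can stand in for the late electrical readout; the features, target relationship, and model choice below are all hypothetical:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical early-pipeline features (e.g. trench depth, CD, overlay,
# film thickness) and the late electrical readout they help predict.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2]
     + rng.normal(scale=0.1, size=500))

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())  # a high R^2 would justify acting on the prediction
```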
Alternative approaches to predicting methane emissions from dairy cows.
Mills, J A N; Kebreab, E; Yates, C M; Crompton, L A; Cammell, S B; Dhanoa, M S; Agnew, R E; France, J
2003-12-01
Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives that were all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake level. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
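A minimal sketch of fitting a monomolecular (Mitscherlich) curve of the general form used in the paper; the asymptote, rate parameter, and data points below are invented, not the published United Kingdom calibration:

```python
import numpy as np
from scipy.optimize import curve_fit

def mitscherlich(mei, a, b, c):
    """Monomolecular form: methane output rises toward an asymptote a
    as metabolizable energy intake (MEI) increases."""
    return a - (a + b) * np.exp(-c * mei)

# Hypothetical calibration data: MEI (MJ/d) vs methane energy (MJ/d).
mei = np.array([100.0, 150, 200, 250, 300])
ch4 = np.array([12.5, 17.0, 20.5, 23.0, 24.5])

popt, _ = curve_fit(mitscherlich, mei, ch4, p0=[30.0, 0.0, 0.01])
print(mitscherlich(275.0, *popt))  # prediction at an intermediate intake
```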
AXES OF EXTRAPOLATION IN RISK ASSESSMENTS
Extrapolation in risk assessment involves the use of data and information to estimate or predict something that has not been measured or observed. Reasons for extrapolation include that the number of combinations of environmental stressors and possible receptors is too large to c...
CROSS-SPECIES DOSE EXTRAPOLATION FOR DIESEL EMISSIONS
Models for cross-species (rat to human) dose extrapolation of diesel emission were evaluated for purposes of establishing guidelines for human exposure to diesel emissions (DE) based on DE toxicological data obtained in rats. Ideally, a model for this extrapolation would provide...
Extrapolation procedures in Mott electron polarimetry
NASA Technical Reports Server (NTRS)
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
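A minimal sketch of a foil-thickness extrapolation of the measured asymmetry to zero thickness, showing how the choice of fitting function shifts the extrapolated single-atom value (asymmetries and thicknesses are invented):

```python
import numpy as np

# Hypothetical measured asymmetries A for gold foil thicknesses t (nm).
t = np.array([50.0, 100.0, 150.0, 200.0])
A = np.array([0.245, 0.228, 0.213, 0.199])

# Two candidate fitting functions; the paper analyzes how this choice
# introduces systematic error into the t -> 0 extrapolation.
A0_linear = np.polyval(np.polyfit(t, A, 1), 0.0)
A0_quadratic = np.polyval(np.polyfit(t, A, 2), 0.0)

print(A0_linear, A0_quadratic)  # the spread hints at systematic uncertainty
```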
NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that when possible one should use the DLT with a control object, sufficiently large as to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
Cross-species extrapolation of chemical effects: Challenges and new insights
One of the greatest uncertainties in chemical risk assessment is extrapolation of effects from tested to untested species. While this undoubtedly is a challenge in the human health arena, species extrapolation is a particularly daunting task in ecological assessments, where it is...
Latychevskaia, T; Chushkin, Y; Fink, H-W
2016-10-01
In coherent diffractive imaging, the resolution of the reconstructed object is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by postextrapolation of coherent diffraction images, such as diffraction patterns or holograms. We demonstrate that a diffraction pattern can unambiguously be extrapolated from only a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. Although there could be in principle other methods to achieve extrapolation, we devote our discussion to employing iterative phase retrieval methods and demonstrate their limits. We present two numerical studies; namely, the extrapolation of diffraction patterns of nonbinary and that of phase objects together with a discussion of the optimal extrapolation procedure. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
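A bare-bones error-reduction sketch of the extrapolation idea: enforce the measured Fourier amplitudes only where data exist, and let the real-space support constraint drive the unmeasured region, which thereby becomes extrapolated. This is an illustration under simplifying assumptions, not the authors' optimized procedure:

```python
import numpy as np

def extrapolate_pattern(meas_amp, known, support, n_iter=500, seed=0):
    """meas_amp: measured Fourier amplitudes (zero where unmeasured);
    known: boolean mask of measured pixels; support: object support."""
    rng = np.random.default_rng(seed)
    F = meas_amp * np.exp(1j * rng.uniform(0, 2 * np.pi, meas_amp.shape))
    for _ in range(n_iter):
        obj = np.fft.ifft2(F) * support     # real-space support constraint
        F = np.fft.fft2(obj)
        # Fourier constraint only on measured pixels; the rest is free
        # to evolve, i.e. the pattern is extrapolated there.
        F[known] = meas_amp[known] * np.exp(1j * np.angle(F[known]))
    return np.abs(F), np.fft.ifft2(F)

# Toy case: oversampled pattern of a small object, central pixels missing.
obj_true = np.zeros((64, 64))
obj_true[28:36, 28:36] = 1.0
amp = np.abs(np.fft.fft2(obj_true))
freq = np.abs(np.fft.fftfreq(64))
known = (freq[:, None] + freq[None, :]) >= 0.05
ext_amp, rec = extrapolate_pattern(np.where(known, amp, 0.0), known,
                                   obj_true > 0)
```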
Equation of state for 1,2-dichloroethane based on a hybrid data set
NASA Astrophysics Data System (ADS)
Thol, Monika; Rutkai, Gábor; Köster, Andreas; Miroshnichenko, Svetlana; Wagner, Wolfgang; Vrabec, Jadran; Span, Roland
2017-06-01
A fundamental equation of state in terms of the Helmholtz energy is presented for 1,2-dichloroethane. Due to a narrow experimental database, not only laboratory measurements but also molecular simulation data are applied to the fitting procedure. The present equation of state is valid from the triple point up to 560 K for pressures of up to 100 MPa. The accuracy of the equation is assessed in detail. Furthermore, a reasonable extrapolation behaviour is verified.
NASA Technical Reports Server (NTRS)
Cousineau, R. D.; Crook, R., Jr.; Leeds, D. J.
1985-01-01
This report discusses a geological and seismological investigation of the NASA Ames-Dryden Flight Research Facility site at Edwards, California. Results are presented as seismic design criteria, with design values of the pertinent ground motion parameters, probability of recurrence, and recommended analogous time-history accelerograms with their corresponding spectra. The recommendations apply specifically to the Dryden site and should not be extrapolated to other sites with varying foundation and geologic conditions or different seismic environments.
Microeconomics of 300-mm process module control
NASA Astrophysics Data System (ADS)
Monahan, Kevin M.; Chatterjee, Arun K.; Falessi, Georges; Levy, Ady; Stoller, Meryl D.
2001-08-01
Simple microeconomic models that directly link metrology, yield, and profitability are rare or non-existent. In this work, we validate and apply such a model. Using a small number of input parameters, we explain current yield management practices in 200 mm factories. The model is then used to extrapolate requirements for 300 mm factories, including the impact of simultaneous technology transitions to 130nm lithography and integrated metrology. To support our conclusions, we use examples relevant to factory-wide photo module control.
Verrest, Luka; Dorlo, Thomas P C
2017-06-01
Neglected tropical diseases (NTDs) affect more than one billion people, mainly living in developing countries. For most of these NTDs, treatment is suboptimal. To optimize treatment regimens, clinical pharmacokinetic studies are required where they have not previously been conducted, enabling the application of pharmacometric modeling and simulation techniques, which can provide substantial advantages. Our aim was to provide a systematic overview and summary of all clinical pharmacokinetic studies in NTDs and to assess the use of pharmacometrics in these studies, as well as to identify which of the NTDs or which treatments have not been sufficiently studied. PubMed was systematically searched for all clinical trials and case reports until the end of 2015 that described the pharmacokinetics of a drug in the context of treating any of the NTDs in patients or healthy volunteers. Eighty-two pharmacokinetic studies were identified. Most studies included small patient numbers (only five studies included >50 subjects) and only nine (11%) studies included pediatric patients. A large part of the studies was not very recent; 56% of studies were published before 2000. Most studies applied non-compartmental analysis methods for pharmacokinetic analysis (62%). Twelve studies used population-based compartmental analysis (15%) and eight (10%) additionally performed simulations or extrapolation. For ten out of the 17 NTDs, none or only very few pharmacokinetic studies could be identified. For most NTDs, adequate pharmacokinetic studies are lacking and population-based modeling and simulation techniques have not generally been applied. Pharmacokinetic clinical trials that enable population pharmacokinetic modeling are needed to make better use of the available data. Simulation-based studies should be employed to enable the design of improved dosing regimens and to use the limited resources more effectively to provide therapy in this neglected area.
CUDA GPU based full-Stokes finite difference modelling of glaciers
NASA Astrophysics Data System (ADS)
Brædstrup, C. F.; Egholm, D. L.
2012-04-01
Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver for the full Stokes equations. This technique has proven very effective when applied to the Stokes equation in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers have much faster convergence. To aid convergence, the solver uses a multigrid approach in which values are interpolated and extrapolated between different grid resolutions to minimize the short wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run-time is further reduced by using GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported speedups of 2-11 times compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conduits for large ice sheets. It is therefore crucial to understand this streaming behavior and its impact up-ice.
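The red-black ordering mentioned above colours the grid like a checkerboard: every point of one colour depends only on points of the other colour, so each half-sweep can update all of its points simultaneously, which is what maps well onto GPU threads. A minimal CPU sketch for a 2-D Poisson problem (the glacier model applies the same idea to the much larger Stokes system):

```python
import numpy as np

def red_black_gauss_seidel(u, f, h, sweeps=100):
    """Red-Black Gauss-Seidel smoother for -laplace(u) = f on a uniform
    grid with Dirichlet boundaries held in the edge values of u."""
    i, j = np.indices(u.shape)
    interior = np.zeros(u.shape, bool)
    interior[1:-1, 1:-1] = True
    for _ in range(sweeps):
        for color in (0, 1):                 # red half-sweep, then black
            m = interior & ((i + j) % 2 == color)
            nbr = np.zeros_like(u)
            nbr[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                               + u[1:-1, :-2] + u[1:-1, 2:])
            u[m] = 0.25 * (nbr[m] + h * h * f[m])
    return u

n = 33
h = 1.0 / (n - 1)
u = red_black_gauss_seidel(np.zeros((n, n)), np.ones((n, n)), h)
```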
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-01
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we utilize extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson–Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ in a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size that is accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speed-ups that increase as the system gets larger. Moreover, we find that AAJ significantly outperforms the Generalized Minimal Residual (GMRES) method in the range of problems considered here, with the relative performance again improving with size of the system. As a result, the proposed method represents a simple yet efficient technique that is particularly attractive for large-scale parallel solutions of linear systems of equations.
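A minimal dense-matrix sketch of the alternating scheme: weighted Jacobi sweeps, with an Anderson least-squares extrapolation over the recent residual history every p-th step. The relaxation weight, history depth, and period below are illustrative choices, not the paper's tuned values:

```python
import numpy as np

def alternating_anderson_jacobi(A, b, m=5, p=4, omega=0.8,
                                tol=1e-8, max_iter=10000):
    D_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b, dtype=float)
    X, F = [], []                       # iterate and residual histories
    for k in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        X.append(x.copy())
        F.append(D_inv * r)             # Jacobi-preconditioned residual
        X, F = X[-m:], F[-m:]
        if (k + 1) % p == 0 and len(F) > 1:
            # Anderson step: gamma minimizes ||F[-1] - dF @ gamma||.
            dF = np.diff(np.array(F), axis=0).T
            dX = np.diff(np.array(X), axis=0).T
            gamma, *_ = np.linalg.lstsq(dF, F[-1], rcond=None)
            x = X[-1] + F[-1] - (dX + dF) @ gamma
        else:
            x = x + omega * (D_inv * r)  # plain weighted Jacobi sweep
    return x, max_iter

# Toy diagonally dominant tridiagonal system.
n = 200
A = (np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
x, iters = alternating_anderson_jacobi(A, np.ones(n))
print(iters, np.linalg.norm(A @ x - 1.0))
```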
Simulating Electron Cyclotron Maser Emission for Low Mass Stars
NASA Astrophysics Data System (ADS)
Llama, Joe; Jardine, Moira
2018-01-01
Zeeman-Doppler Imaging (ZDI) is a powerful technique that enables us to map the large-scale magnetic fields of stars spanning the pre- and main-sequence. Coupling these magnetic maps with field extrapolation methods allows us to investigate the topology of the closed, X-ray bright corona and the cooler, open stellar wind. Using ZDI maps of young M dwarfs with simultaneous radio light curves obtained from the VLA, we present the results of modeling the Electron-Cyclotron Maser (ECM) emission from these systems. We determine the X-ray luminosity and ECM emission that is produced using the ZDI maps and our field extrapolation model. We compare these findings with the observed radio light curves of these stars. This allows us to predict the relative phasing and amplitude of the stellar X-ray and radio light curves. This benchmarking of our model using these systems allows us to predict the ECM emission for all stars that have a ZDI map and an observed X-ray luminosity. Our model allows us to understand the origin of transient radio emission observations and is crucial for disentangling stellar and exoplanetary radio signals.
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...
NASA Astrophysics Data System (ADS)
Wang, Gaili; Yang, Ji; Wang, Dan; Liu, Liping
2016-11-01
Extrapolation techniques and storm-scale Numerical Weather Prediction (NWP) models are two primary approaches for short-term precipitation forecasts. The primary objective of this study is to verify precipitation forecasts and compare the performances of two nowcasting schemes: a Beijing Auto-Nowcast system (BJ-ANC) based on extrapolation techniques and a storm-scale NWP model called the Advanced Regional Prediction System (ARPS). The verification and comparison take into account six heavy precipitation events that occurred in the summers of 2014 and 2015 in Jiangsu, China. The forecast performances of the two schemes were evaluated for the next 6 h at 1-h intervals using gridpoint-based measures of critical success index, bias, index of agreement, and root mean square error, and using an object-based verification method called the Structure-Amplitude-Location (SAL) score. Regarding gridpoint-based measures, BJ-ANC outperforms ARPS at first, but its forecast accuracy decreases rapidly with lead time and falls below that of ARPS after 4-5 h of the initial forecast. Regarding the object-based verification method, most forecasts produced by BJ-ANC focus on the center of the diagram at the 1-h lead time and indicate high-quality forecasts. As the lead time increases, BJ-ANC overestimates precipitation amount and produces widespread precipitation, especially at a 6-h lead time. The ARPS model overestimates precipitation at all lead times, particularly at first.
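For reference, the gridpoint-based measures named above reduce to simple contingency-table arithmetic; a sketch (the rain/no-rain threshold and array shapes are assumed):

```python
import numpy as np

def gridpoint_scores(forecast, observed, threshold=0.1):
    # Contingency counts on a precipitation grid (mm): hits, misses, false alarms.
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    csi = hits / (hits + misses + false_alarms)      # critical success index
    bias = (hits + false_alarms) / (hits + misses)   # frequency bias
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return csi, bias, rmse
```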
NASA Technical Reports Server (NTRS)
Mack, Robert J.; Kuhn, Neil S.
2006-01-01
A study was performed to determine a limiting separation distance for the extrapolation of pressure signatures from cruise altitude to the ground. The study was performed at two wind-tunnel facilities with two research low-boom wind-tunnel models designed to generate ground pressure signatures with "flattop" shapes. Data acquired at the first wind-tunnel facility showed that pressure signatures had not achieved the desired low-boom features for extrapolation purposes at separation distances of 2 to 5 span lengths. However, data acquired at the second wind-tunnel facility at separation distances of 5 to 20 span lengths indicated the "limiting extrapolation distance" had been achieved so pressure signatures could be extrapolated with existing codes to obtain credible predictions of ground overpressures.
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by the high-order constrained interpolation profile (CIP) method for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
Link-prediction to tackle the boundary specification problem in social network surveys
De Wilde, Philippe; Buarque de Lima-Neto, Fernando
2017-01-01
Diffusion processes in social networks often cause the emergence of global phenomena from individual behavior within a society. The study of those global phenomena and the simulation of those diffusion processes frequently require a good model of the global network. However, survey data and data from online sources are often restricted to single social groups or features, such as age groups, single schools, companies, or interest groups. Hence, a modeling approach is required that extrapolates the locally restricted data to a global network model. We tackle this Missing Data Problem using Link-Prediction techniques from social network research, network generation techniques from the area of Social Simulation, as well as a combination of both. We found that techniques employing less information may be more adequate to solve this problem, especially when data granularity is an issue. We validated the network models created with our techniques on a number of real-world networks, investigating degree distributions as well as the likelihood of links given the geographical distance between two nodes. PMID:28426826
NASA Astrophysics Data System (ADS)
Setar, Katherine Marie
1997-08-01
This dissertation analytically and critically examines composer Pauline Oliveros's philosophy of 'listening' as it applies to selected works created between 1961 and 1984. The dissertation is organized through the application of two criteria: three perspectives of listening (empirical, phenomenal, and, to a lesser extent, personal), and categories derived, in part, from her writings and interviews (improvisational, traditional, theatrical, electronic, meditational, and interactive). In general, Oliveros's works may be categorized by one of two listening perspectives. The 'empirical' listening perspective, which generally includes pure acoustic phenomenon, independent of human interpretation, is exemplified in the analyses of Sound Patterns (1961), OH HA AH (1968), and, to a lesser extent, I of IV (1966). The 'phenomenal' listening perspective, which involves the human interaction with the pure acoustic phenomenon, includes a critical examination of her post-1971 'meditation' pieces and an analytical and critical examination of her tonal 'interactive' improvisations in highly resonant space, such as Watertank Software (1984). The most pervasive element of Oliveros's stylistic evolution is her gradual change from the hierarchical aesthetic of the traditional composer to one in which creative control is more equally shared by all participants. Other significant contributions by Oliveros include the probable invention of the 'meditation' genre, an emphasis on the subjective perceptions of musical participants as a means to greater musical awareness, her musical exploration of highly resonant space, and her pioneering work in American electronic music. Both analytical and critical commentary were applied to selected representative works from Oliveros's six compositional categories. The analytical methods applied to Oliveros's works include Wayne Slawson's vowel/formant theory as described in his book, Sound Color, an original method of categorizing consonants as noise sources based upon the principles of the International Phonetic Association, traditional morphological analyses, linear-extrapolation analyses which are derived from Schenker's theory, and discussions of acoustic phenomena as they apply to such practices as 1960s electronic studio techniques and the dynamics of room acoustics.
Introduction of risk size in the determination of uncertainty factor UFL in risk assessment
NASA Astrophysics Data System (ADS)
Xue, Jinling; Lu, Yun; Velasquez, Natalia; Yu, Ruozhen; Hu, Hongying; Liu, Zhengtao; Meng, Wei
2012-09-01
The methodology for using uncertainty factors in health risk assessment has been developed over several decades. A default value is usually applied for the uncertainty factor UFL, which is used to extrapolate from the LOAEL (lowest observed adverse effect level) to the NAEL (no adverse effect level). Here, we have developed a new method that establishes a linear relationship between UFL and the additional risk level at the LOAEL based on dose-response information, which represents a very important factor that should be carefully considered. This linear formula makes it possible to select UFL properly in the additional risk range from 5.3% to 16.2%. The results also indicate that the default value of 10 may not be conservative enough when the additional risk level at the LOAEL exceeds 16.2%. Furthermore, this novel method not only provides a flexible UFL instead of the traditional default value, but also ensures a conservative estimation of the UFL with fewer errors, and avoids the benchmark response selection involved in the benchmark dose method. These advantages can improve the estimation of the extrapolation starting point in risk assessment.
A Rat Body Phantom for Radiation Analysis
NASA Technical Reports Server (NTRS)
Qualls, Garry D.; Clowdsley, Martha S.; Slaba, Tony C.; Walker, Steven A.
2010-01-01
To reduce the uncertainties associated with estimating the biological effects of ionizing radiation in tissue, researchers rely on laboratory experiments in which mono-energetic, single-species beams are applied to cell cultures, insects, and small animals. To estimate the radiation effects on astronauts in deep space or low Earth orbit, who are exposed to mixed-field broad spectrum radiation, these experimental results are extrapolated and combined with other data to produce radiation quality factors, radiation weighting factors, and other risk-related quantities for humans. One way to reduce the uncertainty associated with such extrapolations is to utilize analysis tools that are applicable to both laboratory and space environments. The use of physical and computational body phantoms to predict radiation exposure and its effects is well established, and a wide range of human and non-human phantoms are in use today. In this paper, a computational rat phantom is presented, as well as a description of the process through which that phantom has been coupled to existing radiation analysis tools. Sample results are presented for two space radiation environments.
Calculating LOAEL/NOAEL uncertainty factors for wildlife species in ecological risk assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suedel, B.C.; Clifford, P.A.; Ludwig, D.F.
1995-12-31
Terrestrial ecological risk assessments frequently require derivation of NOAELs or toxicity reference values (TRVs) against which to compare exposure estimates. However, much of the available information from the literature is LOAELs, not NOAELs. Lacking specific guidance, arbitrary factors of ten are sometimes employed for extrapolating NOAELs from LOAELs. In this study, the scientific literature was searched to obtain chronic and subchronic studies reporting NOAEL and LOAEL data for wildlife and laboratory species. Results to date indicate a mean conversion factor of 4.0 (± 2.61 S.D.), with a minimum of 1.6 and a maximum of 10 for 106 studies across several classes of compounds (i.e., metals, pesticides, volatiles, etc.). These data suggest that an arbitrary conversion factor of 10 is unnecessarily restrictive for extrapolating NOAELs from LOAELs and that a factor of 4-5 would be more realistic for deriving toxicity reference values for wildlife species. Applying less arbitrary and more realistic conversion factors in ecological risk assessments will allow for a more accurate estimate of NOAEL values for assessing risk to wildlife populations.
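The arithmetic behind the suggested change is simple; with a hypothetical LOAEL of 20 mg/kg-day:

```python
loael = 20.0                   # hypothetical LOAEL, mg/kg-day
noael_factor10 = loael / 10.0  # 2.0 mg/kg-day with the arbitrary default factor
noael_factor4 = loael / 4.0    # 5.0 mg/kg-day with the mean factor reported here
```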
A standardized nomenclature for craniofacial and facial anthropometry.
Caple, Jodi; Stephan, Carl N
2016-05-01
Standardized terms and methods have long been recognized as crucial to reduce measurement error and increase reliability in anthropometry. The successful prior use of craniometric landmarks makes extrapolation of these landmarks to the soft tissue context, as analogs, intuitive for forensic craniofacial analyses and facial photogrammetry. However, this extrapolation has not, so far, been systematic. Instead, varied nomenclature and definitions exist for facial landmarks, and photographic analyses are complicated by the generalization of 3D craniometric landmarks to the 2D face space, where analogy is subsequently often lost, complicating anatomical assessments. For example, landmarks requiring palpation of the skull or examination of the 3D surface topology are impossible to legitimately position; the same applies to median landmarks not visible in lateral photographs. To redress these issues without disposing of the craniometric framework that underpins many facial landmarks, we provide an updated and transparent nomenclature for facial description. This nomenclature maintains the original craniometric intent (and base abbreviations) but provides clear distinction of ill-defined (quasi) landmarks in photographic contexts, as produced when anatomical points are subjectively inferred from shape-from-shading information alone.
NASA Astrophysics Data System (ADS)
Moultos, Othonas A.; Zhang, Yong; Tsimpanogiannis, Ioannis N.; Economou, Ioannis G.; Maginn, Edward J.
2016-08-01
Molecular dynamics simulations were carried out to study the self-diffusion coefficients of CO2, methane, propane, n-hexane, n-hexadecane, and various poly(ethylene glycol) dimethyl ethers (glymes in short, CH3O-(CH2CH2O)n-CH3 with n = 1, 2, 3, and 4, labeled as G1, G2, G3, and G4, respectively) at different conditions. Various system sizes were examined. The widely used Yeh and Hummer [J. Phys. Chem. B 108, 15873 (2004)] correction for the prediction of the diffusion coefficient at the thermodynamic limit was applied and shown to be accurate in all cases compared to extrapolated values at infinite system size. The magnitude of the correction is significant in all cases examined, with the smallest systems giving, in some cases, a self-diffusion coefficient approximately 15% lower than the infinite-system-size extrapolated value. The results suggest that finite-size corrections to computed self-diffusivities must be used in order to obtain accurate results.
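The Yeh-Hummer correction cited above has a closed form; a sketch in SI units (D_pbc is the self-diffusivity computed in a cubic periodic box of edge L, eta the shear viscosity):

```python
import math

def yeh_hummer(D_pbc, T, eta, L):
    # D_inf = D_PBC + xi * kB * T / (6 * pi * eta * L), with xi ~= 2.837297
    # for a cubic box with periodic boundary conditions.
    kB = 1.380649e-23   # Boltzmann constant, J/K
    xi = 2.837297
    return D_pbc + xi * kB * T / (6.0 * math.pi * eta * L)
```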
NASA Astrophysics Data System (ADS)
Li, Dong; Cheng, Tao; Zhou, Kai; Zheng, Hengbiao; Yao, Xia; Tian, Yongchao; Zhu, Yan; Cao, Weixing
2017-07-01
Red edge position (REP), defined as the wavelength of the inflexion point in the red edge region (680-760 nm) of the reflectance spectrum, has been widely used to estimate foliar chlorophyll content from reflectance spectra. A number of techniques have been developed for REP extraction in the past three decades, but most of them require data-specific parameterization, and the consistency of their performance from leaf to canopy levels remains poorly understood. In this study, we propose a new technique (WREP) to extract REPs based on the application of continuous wavelet transform to reflectance spectra. The REP is determined by the zero-crossing wavelength in the red edge region of a wavelet-transformed spectrum for a number of scales of wavelet decomposition. The new technique is simple to implement and requires no parameterization from the user as long as continuous wavelet transforms are applied to reflectance spectra. Its performance was evaluated for estimating leaf chlorophyll content (LCC) and canopy chlorophyll content (CCC) of cereal crops (i.e., rice and wheat) and compared with traditional techniques including linear interpolation, linear extrapolation, polynomial fitting, and inverted Gaussian. Our results demonstrated that WREP obtained the best estimation accuracy for both LCC and CCC as compared to traditional techniques. High scales of wavelet decomposition were favorable for the estimation of CCC and low scales for the estimation of LCC. The difference in optimal scale reveals the underlying mechanism of signature transfer from leaf to canopy levels. In addition, crop-specific models were required for the estimation of CCC over the full range. However, a common model could be built with the REPs extracted with Scale 5 of the WREP technique for wheat and rice crops when CCC was less than 2 g/m2 (R2 = 0.73, RMSE = 0.26 g/m2). This insensitivity of WREP to crop type indicates the potential for aerial mapping of chlorophyll content across growth seasons of cereal crops. The new REP extraction technique provides new insight into the spectral changes in the red edge region in response to chlorophyll variation from leaf to canopy levels.
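A minimal sketch of the zero-crossing idea (assumed implementation, not the authors' code: a Mexican-hat convolution stands in for a one-scale continuous wavelet transform, bands are assumed to be sampled at roughly 1 nm, and the first crossing in the window is taken):

```python
import numpy as np

def mexican_hat(n, a):
    # Ricker/Mexican-hat wavelet sampled at n points, scale a.
    x = np.arange(n) - (n - 1) / 2.0
    return (2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
            * (1.0 - (x / a) ** 2) * np.exp(-x ** 2 / (2.0 * a ** 2)))

def wrep(wavelengths, reflectance, scale=5):
    # The CWT of a sigmoidal red edge changes sign at the inflection point,
    # so the zero crossing of the coefficients gives the REP.
    w = mexican_hat(min(10 * scale, len(reflectance)), scale)
    coeffs = np.convolve(reflectance, w, mode='same')
    idx = np.where((wavelengths >= 680) & (wavelengths <= 760))[0]
    for i in idx[:-1]:
        if coeffs[i] * coeffs[i + 1] < 0:   # sign change brackets the REP
            return wavelengths[i] + (wavelengths[i + 1] - wavelengths[i]) \
                   * (-coeffs[i]) / (coeffs[i + 1] - coeffs[i])
    return None
```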
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burgess-Herbert, Sarah L., E-mail: sarah.burgess@alum.mit.edu; Euling, Susan Y.
A critical challenge for environmental chemical risk assessment is the characterization and reduction of uncertainties introduced when extrapolating inferences from one species to another. The purpose of this article is to explore the challenges, opportunities, and research needs surrounding the issue of how genomics data and computational and systems-level approaches can be applied to inform differences in response to environmental chemical exposure across species. We propose that the data, tools, and evolutionary framework of comparative genomics be adapted to inform interspecies differences in chemical mechanisms of action. We compare and contrast existing approaches, from disciplines as varied as evolutionary biology, systems biology, mathematics, and computer science, that can be used, modified, and combined in new ways to discover and characterize interspecies differences in chemical mechanism of action which, in turn, can be explored for application to risk assessment. We consider how genetic, protein, pathway, and network information can be interrogated from an evolutionary biology perspective to effectively characterize variations in biological processes of toxicological relevance among organisms. We conclude that comparative genomics approaches show promise for characterizing interspecies differences in mechanisms of action, and further, for improving our understanding of the uncertainties inherent in extrapolating inferences across species in both ecological and human health risk assessment. To achieve long-term relevance and consistent use in environmental chemical risk assessment, improved bioinformatics tools, computational methods robust to data gaps, and quantitative approaches for conducting extrapolations across species are critically needed. Specific areas ripe for research to address these needs are recommended.
Lithospheric Strength and Stress State: Persistent Challenges and New Directions in Geodynamics
NASA Astrophysics Data System (ADS)
Hirth, G.
2017-12-01
The strength of the lithosphere controls a broad array of geodynamic processes, ranging from earthquakes to the formation and evolution of plate boundaries and the thermal evolution of the planet. A combination of laboratory, geologic, and geophysical observations provides several independent constraints on the rheological properties of the lithosphere. However, several persistent challenges remain in the interpretation of these data. Problems related to extrapolation in both scale and time (rate) need to be addressed to apply laboratory data. Nonetheless, good agreement between extrapolation of flow laws and the interpretation of microstructures in viscously deformed lithospheric mantle rocks demonstrates a strong foundation on which to explore the role of scale. Furthermore, agreement between the depth distribution of earthquakes and predictions based on extrapolation of high-temperature friction relationships provides a basis to understand links between brittle deformation and stress state. In contrast, problems remain in rationalizing larger-scale geodynamic processes with these same rheological constraints. For example, at face value the lab-derived values for the activation energy for creep are too large to explain convective instabilities at the base of the lithosphere, but too low to explain the persistence of dangling slabs in the upper mantle. In this presentation, I will outline these problems (and successes) and provide thoughts on where new progress can be made to resolve remaining inconsistencies, including discussion of the role of the distribution of volatiles and alteration on the strength of the lithosphere, new data on the influence of pressure on friction and fracture strength, and links between the location of earthquakes, thermal structure, and stress state.
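The extrapolation problem referred to here can be made concrete with a generic power-law creep relation; the parameter values below are illustrative olivine-like numbers, not data from this presentation:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def creep_strain_rate(stress_MPa, T_K, A=1.1e5, n=3.5, E=530e3):
    # Power-law (dislocation) creep: strain_rate = A * sigma^n * exp(-E / (R T)).
    # Extrapolating lab rates (~1e-5 /s) down to geologic rates (~1e-14 /s)
    # leans heavily on the assumed activation energy E.
    return A * stress_MPa ** n * math.exp(-E / (R * T_K))
```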
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...
NASA Astrophysics Data System (ADS)
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
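A sketch of the power-law step described here (assumed implementation; extrap's actual statistics and selection criteria are more involved). The profile u = a·z^b is fit in log space, and the integral of z^b over the water column gives the unmeasured fractions; 1/6 is the conventional default exponent:

```python
import numpy as np

def fit_power_exponent(z_norm, u):
    # Fit u = a * z_norm**b in log space; z_norm is height above bed / depth.
    b, log_a = np.polyfit(np.log(z_norm), np.log(u), 1)
    return np.exp(log_a), b

def unmeasured_fractions(z_bottom, z_top, b=1.0 / 6.0):
    # Integral of z**b on [0, 1] is 1/(b+1); split it into the parts below
    # and above the measured portion of the water column.
    total = 1.0 / (b + 1.0)
    below = z_bottom ** (b + 1.0) / (b + 1.0)
    above = (1.0 - z_top ** (b + 1.0)) / (b + 1.0)
    return below / total, above / total
```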
Greene, Samuel M; Shan, Xiao; Clary, David C
2016-06-28
Semiclassical Transition State Theory (SCTST), a method for calculating rate constants of chemical reactions, offers gains in computational efficiency relative to more accurate quantum scattering methods. In full-dimensional (FD) SCTST, reaction probabilities are calculated from third and fourth potential derivatives along all vibrational degrees of freedom. However, the computational cost of FD SCTST scales unfavorably with system size, which prohibits its application to larger systems. In this study, the accuracy and efficiency of 1-D SCTST, in which only third and fourth derivatives along the reaction mode are used, are investigated in comparison to those of FD SCTST. Potential derivatives are obtained from numerical ab initio Hessian matrix calculations at the MP2/cc-pVTZ level of theory, and Richardson extrapolation is applied to improve the accuracy of these derivatives. Reaction barriers are calculated at the CCSD(T)/cc-pVTZ level. Results from FD SCTST agree with results from previous theoretical and experimental studies when Richardson extrapolation is applied. Results from our implementation of 1-D SCTST, which uses only 4 single-point MP2/cc-pVTZ energy calculations in addition to those for conventional TST, agree with FD results to within a factor of 5 at 250 K. This degree of agreement and the efficiency of the 1-D method suggest its potential as a means of approximating rate constants for systems too large for existing quantum scattering methods.
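Richardson extrapolation, used above to sharpen the numerically computed potential derivatives, combines two step sizes so that the leading error term cancels; a generic sketch for a first derivative:

```python
def richardson_derivative(f, x, h):
    # Central differences at steps h and h/2 both have O(h^2) error; the
    # combination (4*d2 - d1) / 3 cancels it, leaving O(h^4).
    d1 = (f(x + h) - f(x - h)) / (2.0 * h)
    d2 = (f(x + h / 2.0) - f(x - h / 2.0)) / h
    return (4.0 * d2 - d1) / 3.0
```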
Cosmogony as an extrapolation of magnetospheric research
NASA Technical Reports Server (NTRS)
Alfven, H.
1984-01-01
A theory of the origin and evolution of the Solar System which considered electromagnetic forces and plasma effects is revised in light of information supplied by space research. In situ measurements in the magnetospheres and solar wind can be extrapolated outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of cloud properties essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation facilitates analysis of the cosmogonic processes by extrapolation of magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it is possible to reconstruct events 4 to 5 billion years ago with an accuracy of a few percent.
Measurement accuracies in band-limited extrapolation
NASA Technical Reports Server (NTRS)
Kritikos, H. N.
1982-01-01
The problem of numerical instability associated with extrapolation algorithms is addressed. An attempt is made to estimate the bounds for the acceptable errors and to place a ceiling on the measurement accuracy and computational accuracy needed for the extrapolation. It is shown that in band-limited (or visible-angle-limited) extrapolation, the larger effective aperture L' that can be realized from a finite aperture L by oversampling is a function of the accuracy of measurements. It is shown that for sampling in the interval $L/b \le |x| \le L$, $b > 1$, the signal must be known within an error $\epsilon_N$ given by $\epsilon_N^2 \approx \frac{1}{4}(2kL')^3 \left(\frac{e}{8b}\,\frac{L}{L'}\right)^{2kL'}$, where L is the physical aperture, L' is the extrapolated aperture, and $k = 2\pi/\lambda$.
Pion mass dependence of the HVP contribution to muon g - 2
NASA Astrophysics Data System (ADS)
Golterman, Maarten; Maltman, Kim; Peris, Santiago
2018-03-01
One of the systematic errors in some of the current lattice computations of the HVP contribution to the muon anomalous magnetic moment g - 2 is that associated with the extrapolation to the physical pion mass. We investigate this extrapolation assuming lattice pion masses in the range of 220 to 440 MeV with the help of two-loop chiral perturbation theory, and find that such an extrapolation is unlikely to lead to control of this systematic error at the 1% level. This remains true even if various proposed tricks to improve the chiral extrapolation are taken into account.
An extrapolation scheme for solid-state NMR chemical shift calculations
NASA Astrophysics Data System (ADS)
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physics approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values show only a small dependence on the low-level density functional theory calculation used in the extrapolation scheme. Thus, our approach is efficient because only a rough low-level calculation is needed in the extrapolation scheme.
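The abstract does not spell out the combination rule; one common additive form such a scheme could take (an assumption on our part, not necessarily the authors' formula) corrects a low-level periodic calculation with a high-level vs. low-level cluster difference:

```python
def extrapolated_shielding(sigma_periodic_low, sigma_cluster_low, sigma_cluster_high):
    # Hypothetical additive extrapolation: keep the periodic (solid-state)
    # environment from the cheap method and add the cluster-model correction
    # obtained from the accurate method.
    return sigma_periodic_low + (sigma_cluster_high - sigma_cluster_low)
```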
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields.
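A sketch of the extrapolation step (assumed form: the commonly used worst-case rule scales the measured per-subcarrier reference-signal field to all subcarriers transmitting at that level; the subcarrier count below is an illustrative 10 MHz value):

```python
import math

def lte_worst_case_field(e_rs, n_subcarriers=600):
    # Powers add across subcarriers, so the worst-case electric field is the
    # measured reference-signal field times sqrt(N).
    return e_rs * math.sqrt(n_subcarriers)
```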
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm2. The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
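A sketch of how such an extrapolated reference can be synthesized voxel-wise (assumed monoexponential form; the paper's scheme may differ in detail):

```python
import numpy as np

def extrapolated_reference(signals_low_b, bvals_low, b_target):
    # Fit ln S = ln S0 - b * D on the low-b acquisitions for one voxel, then
    # extrapolate to the target b-value to create a registration reference
    # with contrast matching the high-b volume.
    slope, intercept = np.polyfit(bvals_low, np.log(signals_low_b), 1)
    return np.exp(intercept + slope * b_target)
```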
Thermodynamics of iron-aluminum alloys at 1573 K
NASA Technical Reports Server (NTRS)
Jacobson, Nathan S.; Mehrotra, Gopal M.
1993-01-01
The activities of iron and aluminum were measured in Fe-Al alloys at 1573 K, using the ion-current-ratio technique in a high-temperature Knudsen cell mass spectrometer. The Fe-Al solutions exhibited negative deviations from ideality over the entire composition range. The activity coefficients gamma(Fe) and gamma(Al) are given by six equations as functions of the mole fractions X(Fe) and X(Al). The results show good agreement with those obtained from previous investigations at other temperatures by extrapolation of the activity data to 1573 K.
A study of the Tyrone-Mount Union lineament by remote sensing techniques and field methods
NASA Technical Reports Server (NTRS)
Gold, D. P. (Principal Investigator)
1977-01-01
The author has identified the following significant results. This study has shown that subtle variations in fold axes, fold form, and stratigraphic thickness can be delineated. Many of the conclusions were based on extrapolation in similitude to different scales. A conceptual model was derived for the Tyrone-Mount Union lineament. In this model, the lineament is the morphological expression of a zone of fracture concentrations which penetrated basement rocks and may have acted as a curtain to regional stresses or as a domain boundary between uncoupled adjacent crustal blocks.
Containment of composite fan blades
NASA Technical Reports Server (NTRS)
Stotler, C. L.; Coppa, A. P.
1979-01-01
A lightweight containment was developed for turbofan engine fan blades. Subscale ballistic-type tests were first run on a number of concepts. The most promising configuration was selected and further evaluated by larger scale tests in a rotating test rig. Weight savings made possible by the use of this new containment system were determined and extrapolated to a CF6-size engine. An analytical technique was also developed to predict the motion of released blades during the blade/casing interaction process. Initial checkout of this procedure was accomplished using several of the tests run during the program.
Application of a pulsed laser for measurements of bathymetry and algal fluorescence.
NASA Technical Reports Server (NTRS)
Hickman, G. D.; Hogg, J. E.; Friedman, E. J.; Ghovanlou, A. H.
1973-01-01
The technique of measuring water depths with an airborne pulsed dye laser is studied, with emphasis on the degrading effect of some environmental and operational parameters on the transmitted and reflected laser signals. Extrapolation of measurements of laser-stimulated fluorescence, performed as a function of both the algal cell concentration and the distance between the algae and the laser/receiver, indicates that a laser system operating from a height of 500 m should be capable of detecting chlorophyll concentrations as low as 1.0 mg/cu m.
NASA Technical Reports Server (NTRS)
Allen, C. S.; Jaeger, S. M.
1999-01-01
The goal of our efforts is to extrapolate nearfield jet noise measurements to the geometric far field, where the jet noise sources appear to radiate from a single point. To accomplish this, information about the location of noise sources in the jet plume, the radiation patterns of the noise sources, and the sound pressure level distribution of the radiated field must be obtained. Since source locations and radiation patterns cannot be found with simple single-microphone measurements, a more complicated method must be used.
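Once an effective source location is known, the final step of such an extrapolation is typically spherical spreading; a minimal sketch (assumed, since the abstract does not give the procedure):

```python
import math

def far_field_spl(spl_near_dB, r_near, r_far):
    # Point-source spherical spreading: the level drops 6 dB per doubling of
    # distance measured from the effective source location.
    return spl_near_dB - 20.0 * math.log10(r_far / r_near)
```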
A Neural Network Aero Design System for Advanced Turbo-Engines
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1999-01-01
An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. The neural network technique works well not only as an interpolating device but also as an extrapolating device to achieve blade designs from a given database. Two validating test cases are discussed.
NASA Technical Reports Server (NTRS)
Potter, Christopher; Malhi, Yadvinder
2004-01-01
Ever more detailed representations of above-ground biomass and soil carbon pools have been developed during the LBA project. Environmental controls such as regional climate, land cover history, secondary forest regrowth, and soil fertility are now being taken into account in regional inventory studies. This paper will review the evolution of measurement-extrapolation approaches, remote sensing, and simulation modeling techniques for biomass and soil carbon pools, which together help constrain regional carbon budgets and enhance our understanding of uncertainty at the regional level.
Speil, Sidney
1974-01-01
The problems of quantitating chrysotile in water by fiber count techniques are reviewed briefly and the use of mass quantitation is suggested as a preferable measure. Chrysotile fiber has been found in almost every sample of natural water examined, but generally transmission electron microscopy (TEM) is required because of the small diameters involved. The extreme extrapolation required in mathematically converting a few fibers or fiber fragments under the TEM to the fiber content of a liquid sample casts considerable doubt on the validity of numbers used to compare chrysotile contents of different liquids. PMID:4470930
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
... is calculated from tumor data of the cancer bioassays using a statistical extrapolation procedure... carcinogenic concern currently set forth in Sec. 500.84 utilizes a statistical extrapolation procedure that... procedures did not rely on a statistical extrapolation of the data to a 1 in 1 million risk of cancer to test...
Cross, S E; Jiang, R; Benson, H A; Roberts, M S
2001-07-01
The effect of adding thickening agents on the penetration of the sunscreen benzophenone-3 through epidermal and high-density polyethylene membranes was studied using both very thick (infinite dose) and thin (in-use) applications. Contradictory results were obtained. Thickening agents retard skin penetration, in a manner consistent with a diffusional resistance in the formulation, when applied as an infinite dose. In contrast, when applied as thin (in-use) doses, thickening agents promote penetration, most likely through greater stratum corneum diffusivity arising from enhanced hydration by the thicker formulations. The two key implications from this work are (i) a recognition of the danger in extrapolating infinite-dose results to in-use situations, and (ii) a recognition that thicker formulations may sometimes enhance the penetration of other topical agents when applied in use.
Walczyk, Wiktoria; Schönherr, Holger
2013-01-15
To date, TM AFM (tapping mode or intermittent contact mode atomic force microscopy) is the most frequently applied direct imaging technique to visualize surface nanobubbles at the solid-aqueous interface. On one hand, AFM is the only profilometric technique that provides estimates of the bubbles' nanoscopic dimensions. On the other hand, the nanoscopic contact angles of surface nanobubbles estimated from their apparent dimensions, as deduced from AFM "height" images of nanobubbles, differ markedly from the macroscopic water contact angles on the identical substrates. Here we show in detail how the apparent bubble height and width of surface nanobubbles on highly oriented pyrolytic graphite (HOPG) depend on the free amplitude of the cantilever oscillations and the amplitude setpoint ratio. (The role of these two AFM imaging parameters and their interdependence has not been studied so far for nanobubbles in a systematic way.) In all experiments, even with optimal scanning parameters, nanobubbles at the HOPG-water interface appeared to be smaller in the AFM images than their true size, which was estimated using a method presented herein. It was also observed that the severity of the underestimate increased with increasing bubble height and radius of curvature. The nanoscopic contact angle of >130° for nanobubbles on HOPG extrapolated to zero interaction force was only slightly overestimated and hence significantly higher than the macroscopic contact angle of water on HOPG (63 ± 2°). Thus, the widely reported contact angle discrepancy cannot be solely attributed to inappropriate AFM imaging conditions.
Using GNSS-R techniques to investigate the near sub-surface of Mars with the Deep Space Network
NASA Astrophysics Data System (ADS)
Elliott, H. M.; Bell, D. J.; Jin, C.; Decrossas, E.; Asmar, S.; Lazio, J.; Preston, R. A.; Ruf, C. S.; Renno, N. O.
2017-12-01
Global Navigation Satellite Systems Reflectometry (GNSS-R) has shown that passive measurements using separate active sources can infer the soil moisture, snow pack depth, and other quantities of scientific interest. Here, we expand upon this method and propose that a passive measurement of the sub-surface dielectric profile of Mars can be made by using multipath interference between reflections off the surface and subsurface dielectric discontinuities. This measurement has the ability to reveal changes in the soil water content, the depth of a layer of sand, the thickness of a layer of ice, and even identify centimeter-scale layering which may indicate the presence of a sedimentary bed. We have created a numerical ray tracing model to understand the potential of using multipath interference techniques to investigate the sub-surface dielectric properties and structure of Mars. We have further verified this model using layered beds of sand and concrete in laboratory experiments and then used the model to extrapolate how this technique may be applied to future Mars missions. We will present new results demonstrating how to characterize multipath interference patterns as a function of frequency and/or incidence angle to measure the thickness of a dielectric layer of sand or ice. Our results demonstrate that dielectric discontinuities in the subsurface can be measured using this passive sensing technique, and it could be used to effectively measure the thickness of a dielectric layer in the proximity of a landed spacecraft. In the case of an orbiter, we believe this technique would be effective at measuring the seasonal thickness of CO2 ice in the Polar Regions. This is exciting because our method can produce similar results to traditional ground penetrating radars without the need to have an active radar transmitter in situ. Therefore, it is possible that future telecommunications systems can serve as both a radio and a scientific instrument when used in conjunction with the Deep Space Network, a huge potential cost-savings for interplanetary missions.
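The two-ray geometry behind this measurement gives layer thickness directly from the observed fringe spacing in frequency; a sketch (the names and the simple two-ray assumption are ours):

```python
import math

def layer_thickness(delta_f_hz, eps_r, theta_inc_rad):
    # Reflections from the top and bottom of a dielectric layer interfere;
    # successive maxima are separated by
    #   delta_f = c / (2 * d * sqrt(eps_r - sin^2(theta))),
    # so the fringe spacing yields the thickness d.
    c = 299792458.0
    return c / (2.0 * delta_f_hz * math.sqrt(eps_r - math.sin(theta_inc_rad) ** 2))
```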
Exposure assessment procedures in presence of wideband digital wireless networks.
Trinchero, D
2009-12-01
The article analyses the applicability of traditional methods, as well as recently proposed techniques, to the exposure assessment of electromagnetic fields generated by wireless transmitters. As is well known, a correct measurement of the electromagnetic field is conditioned by the complexity of the signal, which requires dedicated instruments or specifically developed extrapolation techniques. Nevertheless, it is also influenced by the typology of the deployment of the transmitting and receiving stations, which varies from network to network. These aspects have been intensively analysed in the literature, and several case studies are available for review. The present article collects the most recent analyses and discusses their applicability to different scenarios typical of the main wireless networking applications: broadcasting services, mobile cellular networks, and data access provisioning infrastructures.
Estimation of bias and variance of measurements made from tomography scans
NASA Astrophysics Data System (ADS)
Bradley, Robert S.
2016-09-01
Tomographic imaging modalities are being increasingly used to quantify internal characteristics of objects for a wide range of applications, from medical imaging to materials science research. However, such measurements are typically presented without an assessment being made of their associated variance or confidence interval. In particular, noise in raw scan data places a fundamental lower limit on the variance and bias of measurements made on the reconstructed 3D volumes. In this paper, the simulation-extrapolation technique, which was originally developed for statistical regression, is adapted to estimate the bias and variance for measurements made from a single scan. The application to x-ray tomography is considered in detail and it is demonstrated that the technique can also allow the robustness of automatic segmentation strategies to be compared.
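The simulation-extrapolation (SIMEX) idea adapts naturally to code: re-measure after deliberately adding extra noise at several levels, then extrapolate the trend back to zero noise. A generic sketch (parameter choices illustrative; `estimate` is any measurement function applied to the data):

```python
import numpy as np

def simex(estimate, data, sigma_noise, lambdas=(0.5, 1.0, 1.5, 2.0),
          n_rep=50, seed=None):
    # Add synthetic noise with variance lambda * sigma_noise^2, track how the
    # estimate degrades, fit a quadratic in lambda, and extrapolate to
    # lambda = -1 (zero measurement noise).
    rng = np.random.default_rng(seed)
    lam = np.array([0.0, *lambdas])
    means = []
    for l in lam:
        if l == 0.0:
            means.append(estimate(data))
        else:
            reps = [estimate(data + rng.normal(0.0, np.sqrt(l) * sigma_noise,
                                               data.shape))
                    for _ in range(n_rep)]
            means.append(np.mean(reps))
    coeffs = np.polyfit(lam, means, 2)
    return np.polyval(coeffs, -1.0)   # bias-corrected estimate
```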
Fitting Richards' curve to data of diverse origins
Johnson, D.H.; Sargeant, A.B.; Allen, S.H.
1975-01-01
Published techniques for fitting data to nonlinear growth curves are briefly reviewed; most techniques require knowledge of the shape of the curve. A flexible growth curve developed by Richards (1959) is discussed as an alternative when the shape is unknown. The shape of this curve is governed by a specific parameter which can be estimated from the data. We describe in detail the fitting of a diverse set of longitudinal and cross-sectional data to Richards' growth curve for the purpose of determining the age of red fox (Vulpes vulpes) pups on the basis of right hind foot length. The fitted curve is found suitable for pups less than approximately 80 days old. The curve is extrapolated to pre-natal growth and shown to be appropriate only for about 10 days prior to birth.
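A sketch of fitting Richards' curve with its shape parameter left free (SciPy-based; the data points and starting values are hypothetical, not the fox-pup measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, m):
    # Richards (1959) growth curve; the shape parameter m (> 1 here) is
    # estimated from the data instead of being fixed in advance.
    return A * (1.0 + (m - 1.0) * np.exp(-k * (t - t0))) ** (1.0 / (1.0 - m))

# Hypothetical age (days) and hind-foot length (mm) data:
age = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0, 80.0])
foot = np.array([30.0, 42.0, 62.0, 80.0, 100.0, 112.0, 120.0])
params, _ = curve_fit(richards, age, foot, p0=[130.0, 0.05, 25.0, 1.5],
                      bounds=([50.0, 1e-3, 0.0, 1.01], [300.0, 1.0, 100.0, 5.0]))
A, k, t0, m = params   # asymptote, rate, inflection age, shape
```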
Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements
NASA Technical Reports Server (NTRS)
Shepperd, S. W.; Robertson, W. M.
1973-01-01
The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
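A textbook universal-variables propagator captures the routine's essential structure (a sketch following standard derivations, not the Shuttle code itself; units here are km and seconds, with Earth's gravitational parameter as the default):

```python
import numpy as np

def stumpff_C(z):
    if z > 1e-8:
        return (1.0 - np.cos(np.sqrt(z))) / z
    if z < -1e-8:
        return (np.cosh(np.sqrt(-z)) - 1.0) / (-z)
    return 0.5

def stumpff_S(z):
    if z > 1e-8:
        sz = np.sqrt(z)
        return (sz - np.sin(sz)) / sz ** 3
    if z < -1e-8:
        sz = np.sqrt(-z)
        return (np.sinh(sz) - sz) / sz ** 3
    return 1.0 / 6.0

def kepler_propagate(r0, v0, dt, mu=398600.4418):
    # Two-body (conic) state extrapolation via the universal anomaly chi,
    # valid for elliptic, parabolic, and hyperbolic orbits alike.
    r0 = np.asarray(r0, float); v0 = np.asarray(v0, float)
    r0n = np.linalg.norm(r0)
    vr0 = np.dot(r0, v0) / r0n
    alpha = 2.0 / r0n - np.dot(v0, v0) / mu     # reciprocal semi-major axis
    chi = np.sqrt(mu) * abs(alpha) * dt         # initial guess (elliptic case)
    for _ in range(50):                         # Newton iteration on chi
        z = alpha * chi ** 2
        C, S = stumpff_C(z), stumpff_S(z)
        F = (r0n * vr0 / np.sqrt(mu) * chi ** 2 * C
             + (1.0 - alpha * r0n) * chi ** 3 * S + r0n * chi - np.sqrt(mu) * dt)
        dF = (r0n * vr0 / np.sqrt(mu) * chi * (1.0 - z * S)
              + (1.0 - alpha * r0n) * chi ** 2 * C + r0n)
        dchi = F / dF
        chi -= dchi
        if abs(dchi) < 1e-10:
            break
    z = alpha * chi ** 2
    C, S = stumpff_C(z), stumpff_S(z)
    f = 1.0 - chi ** 2 / r0n * C                # Lagrange f and g functions
    g = dt - chi ** 3 / np.sqrt(mu) * S
    r = f * r0 + g * v0
    rn = np.linalg.norm(r)
    fdot = np.sqrt(mu) / (rn * r0n) * (z * S - 1.0) * chi
    gdot = 1.0 - chi ** 2 / rn * C
    return r, fdot * r0 + gdot * v0
```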
The Extrapolation of High Altitude Solar Cell I(V) Characteristics to AM0
NASA Technical Reports Server (NTRS)
Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Reinke, William; Blankenship, Kurt; Demers, James
2007-01-01
The high altitude aircraft method has been used at NASA GRC since the early 1960's to calibrate solar cell short circuit current, ISC, to Air Mass Zero (AM0). This method extrapolates ISC to AM0 via the Langley plot method, a logarithmic extrapolation to zero air mass, and includes corrections for the varying Earth-Sun distance to 1.0 AU and compensation for the non-uniform ozone distribution in the atmosphere. However, other characteristics of the solar cell I(V) curve do not extrapolate in the same way. Another approach is needed to extrapolate VOC and the maximum power point (PMAX) to AM0 illumination. As part of the high altitude aircraft method, VOC and PMAX can be obtained as ISC changes during the flight. These values can then be extrapolated, or sometimes interpolated, to the ISC(AM0) value. This approach should be valid as long as the shape of the solar spectrum in the stratosphere does not change too much from AM0. As a feasibility check, the results are compared to AM0 I(V) curves obtained using the NASA GRC X25-based multi-source simulator. This paper investigates the approach on both multi-junction solar cells and sub-cells.
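The Langley step is a linear fit in log space; a minimal sketch (array names assumed):

```python
import numpy as np

def langley_am0(air_mass, isc):
    # ln(Isc) is linear in air mass; the zero-air-mass intercept gives the
    # AM0 short-circuit current (before the Earth-Sun distance and ozone
    # corrections mentioned above).
    slope, intercept = np.polyfit(air_mass, np.log(isc), 1)
    return np.exp(intercept)
```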
Space Station transition through Spacelab
NASA Technical Reports Server (NTRS)
Craft, Harry G., Jr.; Wicks, Thomas G.
1990-01-01
It is appropriate that the science management structures and processes of NASA's Office of Space Science and Applications that have proven successful on Spacelab be applied and extrapolated to Space Station utilization, wherever practical. Spacelab has many similarities and complementary aspects to Space Station Freedom. An understanding of the similarities and differences between Spacelab and Space Station is necessary in order to understand how to transition from Spacelab to Space Station. These relationships are discussed herein, as well as issues which must be dealt with and approaches for transition and evolution from Spacelab to Space Station.
Density functional Theory Based Generalized Effective Fragment Potential Method (Postprint)
2014-07-01
is acceptable for other applications) leads to induced dipole moments within 10^-6 to 10^-7 au of the precise values. Thus, the applied field of 10^-4 ... noncovalent interactions. The water-benzene clusters [17] and WATER27 [11] reference values were also obtained at the CCSD(T)/CBS level, except for the clusters ... with n = 20, 42, where MP2/CBS was used. The n-alkane dimers [18] benchmark values were CCSD(T)/CBS for ethane to butane and a linear extrapolation method
Generalized Gilat-Raubenheimer method for density-of-states calculation in photonic crystals
NASA Astrophysics Data System (ADS)
Liu, Boyuan; Johnson, Steven G.; Joannopoulos, John D.; Lu, Ling
2018-04-01
An efficient numerical algorithm is the key to accurate evaluation of the density of states (DOS) in band theory. The Gilat-Raubenheimer (GR) method proposed in 1966 is an efficient linear extrapolation method that was limited to specific lattices. Here, using an affine transformation, we provide a new generalization of the original GR method to any Bravais lattice and show that it is superior to the tetrahedron method and the adaptive Gaussian broadening method. Finally, we apply our generalized GR method to compute the DOS of various gyroid photonic crystals with topological degeneracies.
The role of compressional viscoelasticity in the lubrication of rolling contacts.
NASA Technical Reports Server (NTRS)
Harrison, G.; Trachman, E. G.
1972-01-01
A simple model for the time-dependent volume response of a liquid to an applied pressure step is used to calculate the variation with rolling speed of the traction coefficient in a rolling contact system. Good agreement with experimental results is obtained at rolling speeds above 50 in/sec. At lower rolling speeds a very rapid change in the effective viscosity of the lubricant is predicted. This behavior, in conjunction with shear rate effects, is shown to lead to large errors when experimental data are extrapolated to zero rolling speed.
Low temperature measurement of the vapor pressures of planetary molecules
NASA Technical Reports Server (NTRS)
Kraus, George F.
1989-01-01
Interpretation of planetary observations and proper modeling of planetary atmospheres depend critically upon accurate laboratory data for the chemical and physical properties of the constituents of the atmospheres. It is important that these data are taken over the appropriate range of parameters such as temperature, pressure, and composition. Availability of accurate laboratory data for vapor pressures and equilibrium constants of condensed species at low temperatures is essential for photochemical and cloud models of the atmospheres of the outer planets. In the absence of such data, modelers have no choice but to assume values based on an educated guess. In those cases where higher temperature data are available, a standard procedure is to extrapolate these points to the lower temperatures using the Clausius-Clapeyron equation. Last summer the vapor pressures of acetylene (C2H2), hydrogen cyanide (HCN), and cyanoacetylene (HC3N) were measured using two different methods. At the higher temperatures, 1 torr and 10 torr capacitance manometers were used. To measure very low pressures, a technique was used which is based on the infrared absorption of thin films (TFIR). This summer the vapor pressure of acetylene was measured using the TFIR method. The vapor pressure of hydrogen sulfide (H2S) was measured using capacitance manometers. Results for H2S agree with literature data over the common range of temperature. At the lower temperatures the data lie slightly below the values predicted by extrapolation of the Clausius-Clapeyron equation. Thin film infrared (TFIR) data for acetylene lie significantly below the values predicted by extrapolation. It is hoped to bridge the gap between the low end of the CM data and the upper end of the TFIR data in the future using a new spinning rotor gauge.
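The extrapolation procedure mentioned here is a straight-line fit in (1/T, ln p) coordinates; a sketch (array names assumed):

```python
import numpy as np

def clausius_clapeyron_extrapolate(T_meas, p_meas, T_new):
    # Integrated Clausius-Clapeyron form: ln p = A - B / T. Fit A and B on the
    # measured range, then evaluate at lower temperatures; the TFIR data above
    # suggest this extrapolation overpredicts at the lowest temperatures.
    slope, intercept = np.polyfit(1.0 / np.asarray(T_meas), np.log(p_meas), 1)
    return np.exp(intercept + slope / np.asarray(T_new))
```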
Occupancy schedules learning process through a data mining framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Oca, Simona; Hong, Tianzhen
Building occupancy is a paramount factor in building energy simulations. Specifically, lighting, plug loads, HVAC equipment utilization, fresh air requirements, and internal heat gain or loss depend greatly on the level of occupancy within a building. Appropriate methodologies to describe and reproduce the intricate network responsible for human-building interactions are needed. Extrapolation of patterns from big data streams is a powerful analysis technique which will allow for a better understanding of energy usage in buildings. A three-step data mining framework is applied to discover occupancy patterns in office spaces. First, a data set of 16 offices with 10-minute interval occupancy data over a two-year period is mined through a decision tree model which predicts occupancy presence. Then a rule induction algorithm is used to learn a pruned set of rules on the results from the decision tree model. Finally, a cluster analysis is employed in order to obtain consistent patterns of occupancy schedules. Furthermore, the identified occupancy rules and schedules are representative as four archetypal working profiles that can be used as input to current building energy modeling programs, such as EnergyPlus or IDA-ICE, to investigate the impact of occupant presence on design, operation, and energy use in office buildings.
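A compressed sketch of the framework with scikit-learn (synthetic stand-in data, not the 16-office data set; the rule-induction step is represented here only by the fitted tree):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

# Synthetic stand-in for the 10-minute occupancy records:
# features are [hour_of_day, weekday]; the label is presence (0/1).
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 24, 5000), rng.integers(0, 7, 5000)])
y = ((X[:, 0] >= 8) & (X[:, 0] < 18) & (X[:, 1] < 5)).astype(int)

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)   # step 1: presence model
grid = np.array([[h, d] for d in range(7) for h in range(24)])
profiles = tree.predict(grid).reshape(7, 24)           # one daily profile per weekday
archetypes = KMeans(n_clusters=4, n_init=10).fit(profiles).cluster_centers_
```

The cluster centers play the role of the archetypal working profiles that feed an energy model such as EnergyPlus.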
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bockris, J.O.; Devanathan, M.A.V.
The galvanostatic double charging method was applied to determine the coverage of Ni cathodes with adsorbed atomic H in 2 N NaOH solutions. Anodic current densities were varied from 0.05 to 1.8 amp/sq cm. The plateau indicating absence of readsorption was between 0.6 and 1.8 amp/sq cm, for a constant cathodic c.d. of 10^-4 amp/sq cm. The variation of the adsorbed H over cathodic c.d.'s ranging from 10^-6 to 10^-1 amp/sq cm at a constant anodic c.d. of 1 amp/sq cm was determined and the coverage calculated. The mechanism of the H evolution reaction was elucidated. The rate-determining step is discharge from a water molecule followed by rapid Tafel recombination. The rate constants for these processes, and the rate constant for the ionisation, calculated with the extrapolated value of coverage for the reversible H electrode, were determined. A modification of the Tafel equation which takes into account both coverage and ionisation is in harmony with the results. A new method for the determination of coverage suitable for corrodible metals is described which involves the measurement of the rate of permeation of H by electrochemical techniques, which enhances the sensitivity of the method. (Author)
A new method of presentation the large-scale magnetic field structure on the Sun and solar corona
NASA Technical Reports Server (NTRS)
Ponyavin, D. I.
1995-01-01
The large-scale photospheric magnetic field, measured at Stanford, has been analyzed in terms of surface harmonics. Changes of the photospheric field which occur within a whole solar rotation period can be resolved by this analysis. For this reason we used daily magnetograms of the line-of-sight magnetic field component observed from Earth over the solar disc. We have estimated the period during which day-to-day full disc magnetograms must be collected. An original algorithm was applied to resolve time variations of spherical harmonics that reflect the time evolution of the large-scale magnetic field within a solar rotation period. This method of magnetic field presentation can be useful in the absence of direct magnetograph observations, for example due to bad weather conditions. We have used the calculated surface harmonics to reconstruct the large-scale magnetic field structure on the source surface near the Sun - the origin of the heliospheric current sheet and solar wind streams. The obtained results have been compared with spacecraft in situ observations and geomagnetic activity. We tried to show that the proposed technique can trace short-time variations of the heliospheric current sheet and short-lived solar wind streams. We have also compared our results with those obtained traditionally from the potential field approximation and extrapolation using synoptic charts as initial boundary conditions.
Primary standardization of 57Co.
Koskinas, Marina F; Moreira, Denise S; Yamazaki, Ione M; de Toledo, Fábio; Brancaccio, Franco; Dias, Mauro S
2010-01-01
This work describes the method developed by the Nuclear Metrology Laboratory (LMN) at IPEN, São Paulo, Brazil, for the standardization of a (57)Co radioactive solution. Cobalt-57 is a radionuclide used for calibrating gamma-ray and X-ray spectrometers, as well as a gamma reference source for dose calibrators used in nuclear medicine services. Two 4pibeta-gamma coincidence systems were used to perform the standardization: the first used a 4pi(PC) counter coupled to a pair of 76 mm x 76 mm NaI(Tl) scintillators for detecting gamma-rays; the other used an HPGe spectrometer for gamma detection. The measurements were performed by selecting a gamma-ray window comprising the (122 keV + 136 keV) total absorption energy peaks in the NaI(Tl) and selecting the total absorption peak of 122 keV in the germanium detector. The electronic system used the TAC method developed at the LMN for registering the observed events. The methodology recently developed by the LMN for simulating all detection processes in a 4pibeta-gamma coincidence system by means of the Monte Carlo technique was applied, and the behavior of the extrapolation curve was compared to experimental data. The final activity obtained by the Monte Carlo calculation agrees with the experimental results within the experimental uncertainty. Copyright 2009 Elsevier Ltd. All rights reserved.
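The coincidence-counting extrapolation curve works roughly as follows: the apparent activity N_beta*N_gamma/N_coinc is plotted against the inefficiency parameter (1 - eff)/eff and extrapolated linearly to zero inefficiency. A toy sketch with idealized, invented rates (not the 57Co measurements):

```python
import numpy as np

# Hypothetical counting rates (counts/s) at several discrimination
# settings: beta channel, gamma channel, and coincidence channel.
n_beta = np.array([950.0, 900.0, 840.0, 770.0])
n_gamma = np.array([400.0, 400.0, 400.0, 400.0])
n_coinc = np.array([380.0, 360.0, 336.0, 308.0])

eff = n_coinc / n_gamma        # beta-channel efficiency estimate
x = (1.0 - eff) / eff          # inefficiency parameter
y = n_beta * n_gamma / n_coinc # apparent activity

# Linear extrapolation of the efficiency curve to (1-eff)/eff = 0,
# i.e., 100% beta efficiency; the intercept estimates the activity N0.
slope, intercept = np.polyfit(x, y, 1)
print(f"extrapolated activity N0 = {intercept:.1f} Bq")
```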
Vigliaturo, Ruggero; Capella, Silvana; Rinaudo, Caterina; Belluso, Elena
2016-07-01
The purpose of this work is to define a sample preparation protocol that allows inorganic fibers and particulate matter extracted from different biological samples to be characterized morphologically, crystallographically and chemically by transmission electron microscopy-energy dispersive spectroscopy (TEM-EDS). The method does not damage or create artifacts through chemical attacks of the target material. A fairly rapid specimen preparation is applied with the aim of performing as few steps as possible to transfer the withdrawn inorganic matter onto the TEM grid. The biological sample is previously digested chemically by NaClO. The salt is then removed through a series of centrifugation and rinse cycles in deionized water, thus drastically reducing the digestive power of the NaClO and concentrating the fibers for TEM analysis. The concept of equivalent hydrodynamic diameter is introduced to calculate the settling velocity during the centrifugation cycles. This technique is applicable to lung tissues and can be extended to a wide range of organic materials. The procedure does not appear to cause morphological damage to the fibers or modify their chemistry or degree of crystallinity. The extrapolated data can be used in interdisciplinary studies to understand the pathological effects caused by inorganic materials.
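For the settling-velocity step, a Stokes-law estimate under centrifugal acceleration can be written in a few lines; the densities, viscosity, rotor speed, and radius below are illustrative assumptions, not the protocol's values.

```python
import numpy as np

def settling_velocity(d_eq, rho_p, rho_f=1000.0, mu=1.0e-3, rpm=3000.0, r=0.1):
    """Stokes settling velocity (m/s) of a particle of equivalent
    hydrodynamic diameter d_eq (m) under centrifugal acceleration.

    rho_p/rho_f: particle/fluid densities (kg/m^3); mu: viscosity (Pa s);
    rpm and r: rotor speed and radius -- all illustrative defaults.
    """
    omega = 2.0 * np.pi * rpm / 60.0  # angular velocity (rad/s)
    accel = omega**2 * r              # centrifugal acceleration (m/s^2)
    return (rho_p - rho_f) * d_eq**2 * accel / (18.0 * mu)

# e.g., a fiber with a 1 um equivalent diameter and amphibole-like density:
print(f"{settling_velocity(1e-6, 3300.0):.2e} m/s")
```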
Single crystal EPR determination of the quantum energy level structure for Fe8 molecular clusters
NASA Astrophysics Data System (ADS)
Maccagnano, S.; Hill, S.; Negusse, E.; Lussier, A.; Mola, M. M.; Achey, R.; Dalal, N. S.
2001-05-01
Using a high sensitivity resonance cavity technique,^1 we are able to obtain high field/frequency (up to 9 tesla/210 GHz) EPR spectra for oriented single crystals of [Fe_8O_2(OH)_12(tacn)_6]Br_8.9H_2O (or Fe8 for short). Extrapolating the frequency dependence of transitions to zero-field (for any orientation of the field) allows us to directly, and accurately (to within 0.5 percent), determine the first five zero-field splittings, which are in reasonable agreement with recent inelastic neutron studies.^2 The dependence of these splittings on the applied field strength, and its orientation with respect to the crystal, enables us to identify (to within 1^o) the easy, intermediate and hard magnetic axes. Subsequent analysis of EPR spectra for field parallel to the easy axis yields a value for gz which is appreciably different from the value assumed in a recent high field EPR study by Barra et al.^3 ^1 M.M. Mola, S. Hill, P. Goy, and M. Gross, Rev. Sci. Inst. 71, 186 (2000). ^2 R. Caciuffo, G. Amoretti, R. Sessoli, A. Caneschi, and D. Gatteschi, Phys. Rev. Lett. 81, 4744 (1998). ^3 A. L. Barra, D. Gatteschi, and R. Sessoli, cond-mat/0002386 (Feb, 2000).
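The zero-field extrapolation amounts to a linear fit of transition frequency versus applied field and reading off the intercept; the sketch below uses invented numbers (chosen so gz comes out near 2), not the Fe8 data.

```python
import numpy as np

# Hypothetical EPR transition frequencies (GHz) versus applied field (T);
# illustrative values, not the Fe8 measurements.
B = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([92.0, 120.1, 148.0, 176.2, 204.0])

# For field along the easy axis the resonance shifts ~linearly with B;
# the zero-field intercept gives the zero-field splitting directly.
slope, intercept = np.polyfit(B, f, 1)
print(f"zero-field splitting = {intercept:.1f} GHz")

# The slope is proportional to gz: slope = gz * muB/h per unit of Delta m_s.
muB_over_h = 13.996  # GHz/T
print(f"gz = {slope / muB_over_h:.3f} (assuming a Delta m_s = 1 transition)")
```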
Predictive QSAR modeling workflow, model applicability domains, and virtual screening.
Tropsha, Alexander; Golbraikh, Alexander
2007-01-01
Quantitative Structure Activity Relationship (QSAR) modeling has been traditionally applied as an evaluative approach, i.e., with the focus on developing retrospective and explanatory models of existing data. Model extrapolation was considered, if at all, only in a hypothetical sense, in terms of potential modifications of known biologically active chemicals that could improve compounds' activity. This critical review re-examines the strategy and the output of the modern QSAR modeling approaches. We provide examples and arguments suggesting that current methodologies may afford robust and validated models capable of accurate prediction of compound properties for molecules not included in the training sets. We discuss a data-analytical modeling workflow developed in our laboratory that incorporates modules for combinatorial QSAR model development (i.e., using all possible binary combinations of available descriptor sets and statistical data modeling techniques), rigorous model validation, and virtual screening of available chemical databases to identify novel biologically active compounds. Our approach places particular emphasis on model validation as well as the need to define model applicability domains in the chemistry space. We present examples of studies where the application of rigorously validated QSAR models to virtual screening identified computational hits that were confirmed by subsequent experimental investigations. The emerging focus of QSAR modeling on target property forecasting establishes it as a predictive, as opposed to evaluative, modeling approach.
NASA Astrophysics Data System (ADS)
Wittwer, D.; Abdullin, F. Sh.; Aksenov, N. V.; Albin, Yu. V.; Bozhikov, G. A.; Dmitriev, S. N.; Dressler, R.; Eichler, R.; Gäggeler, H. W.; Henderson, R. A.; Hübener, S.; Kenneally, J. M.; Lebedev, V. Ya.; Lobanov, Yu. V.; Moody, K. J.; Oganessian, Yu. Ts.; Petrushkin, O. V.; Polyakov, A. N.; Piguet, D.; Rasmussen, P.; Sagaidak, R. N.; Serov, A.; Shirokovsky, I. V.; Shaughnessy, D. A.; Shishkin, S. V.; Sukhov, A. M.; Stoyer, M. A.; Stoyer, N. J.; Tereshatov, E. E.; Tsyganov, Yu. S.; Utyonkov, V. K.; Vostokin, G. K.; Wegrzecki, M.; Wilk, P. A.
2010-01-01
Currently, gas phase chemistry experiments with the heaviest elements are usually performed with the gas-jet technique, with the disadvantage that all reaction products are collected in a gas-filled thermalisation chamber adjacent to the target. The incorporation of a physical preseparation device between target and collection chamber opens up the perspective of performing new chemical studies. But this approach requires detailed knowledge of the stopping force (STF) of the heaviest elements in various materials. Measurements of the energy loss of mercury (Hg), radon (Rn), and nobelium (No) in Mylar and argon (Ar) were performed at low kinetic energies of around (40-270) keV per nucleon. The experimentally obtained values were compared with STF calculations of the commonly used program for calculating stopping and ranges of ions in matter (SRIM). Using the obtained data points, an extrapolation of the STF up to element 114, eka-lead, in the same stopping media was carried out. These estimates were applied to design and perform a first chemical experiment with a superheavy element behind a physical preseparator using the nuclear fusion reaction 244Pu(48Ca,3n)289114. One decay chain assigned to an atom of 285112, the α-decay product of 289114, was observed.
NASA Astrophysics Data System (ADS)
Yi, J.; Choi, C.
2014-12-01
Rainfall observation and forecasting using remote sensing such as RADAR (Radio Detection and Ranging) and satellite images are widely used to mitigate the increased damage caused by rapid weather changes like regional storms and flash floods. The flood runoff was calculated using an adaptive neuro-fuzzy inference system (a data-driven model) with MAPLE (McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation) forecasted precipitation data as the input variables. The result of the flood estimation method using the neuro-fuzzy technique and RADAR forecasted precipitation data was evaluated by comparing it with the actual data. The adaptive neuro-fuzzy method was applied to the Chungju Reservoir basin in Korea. Six rainfall events during the flood seasons in 2010 and 2011 were used for the input data. The reservoir inflow estimation results were compared according to the rainfall data used for training, checking, and testing data in the model setup process. The results of the 15 models with different combinations of the input variables were compared and analyzed. Using a relatively large clustering radius and the largest flood on record as training data gave the best flood estimates in this study. The model using the MAPLE forecasted precipitation data showed better results for inflow estimation in the Chungju Reservoir.
A statistical framework for applying RNA profiling to chemical hazard detection.
Kostich, Mitchell S
2017-12-01
Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shuman, Nicholas S.; Miller, Thomas M.; Viggiano, Albert A.
Thermal rate constants and product branching fractions for electron attachment to CF3Br and the CF3 radical have been measured over the temperature range 300-890 K, the upper limit being restricted by thermal decomposition of CF3Br. Both measurements were made in Flowing Afterglow Langmuir Probe apparatuses; the CF3Br measurement was made using standard techniques, and the CF3 measurement using the Variable Electron and Neutral Density Attachment Mass Spectrometry technique. Attachment to CF3Br proceeds exclusively by the dissociative channel yielding Br-, with a rate constant increasing from 1.1 x 10^-8 cm^3 s^-1 at 300 K to 5.3 x 10^-8 cm^3 s^-1 at 890 K, somewhat lower than previous data at temperatures up to 777 K. CF3 attachment proceeds through competition between associative attachment yielding CF3- and dissociative attachment yielding F-. Prior data up to 600 K showed the rate constant monotonically increasing, with the partial rate constant of the dissociative channel following Arrhenius behavior; however, extrapolation of the data using a recently proposed kinetic modeling approach predicted the rate constant to turn over at higher temperatures, despite being only ~5% of the collision rate. The current data agree well with the previous kinetic modeling extrapolation, providing a demonstration of the predictive capabilities of the approach.
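The Arrhenius fit and its naive extrapolation, which the kinetic modeling refines, look like the following sketch; the rate constants are illustrative stand-ins loosely matching the quoted endpoints.

```python
import numpy as np

# Hypothetical dissociative-attachment rate constants (cm^3/s) versus
# temperature (K), loosely spanning the range quoted for CF3Br.
T = np.array([300.0, 400.0, 500.0, 600.0, 700.0, 890.0])
k = np.array([1.1e-8, 1.8e-8, 2.6e-8, 3.4e-8, 4.3e-8, 5.3e-8])

# Arrhenius form: k(T) = A * exp(-Ea / (kB T)). A straight-line fit of
# ln k against 1/T gives -Ea/kB as the slope.
kB = 8.617e-5  # eV/K
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * kB
print(f"Ea = {Ea*1000:.0f} meV, A = {np.exp(intercept):.2e} cm^3/s")

# Naive Arrhenius extrapolation to 1000 K; the abstract's point is that
# kinetic modeling can predict departures from this simple behavior.
print(f"k(1000 K) = {np.exp(intercept + slope / 1000.0):.2e} cm^3/s")
```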
2016-04-01
incorporated with nonlinear elements to produce a continuous, quasi-nonlinear simulation model. Extrapolation methods within the model stitching architecture... Keywords: Simulation Model, Quasi-Nonlinear, Piloted Simulation, Flight-Test Implications, System Identification, Off-Nominal Loading Extrapolation, Stability.
Pole-strength of the earth from Magsat and magnetic determination of the core radius
NASA Technical Reports Server (NTRS)
Voorhies, G. V.; Benton, E. R.
1982-01-01
A model based on two days of Magsat data is used to numerically evaluate the unsigned magnetic flux linking the earth's surface, and a comparison of the calculated value of 16.054 GWb with values from earlier geomagnetic field models reveals a smooth, monotonic, and recently-accelerating decrease in the earth's pole strength at a 50-year average rate of 8.3 MWb/year, or 0.052%/year. Hide's (1978) magnetic technique for determining the radius of the earth's electrically-conducting core is tested by (1) extrapolating main field models for 1960 and 1965 downward through the nearly-insulating mantle and separately comparing them to equivalent, extrapolated models of Magsat data; the two unsigned fluxes are found to equal the Magsat values at a radius which is within 2% of the core radius; and (2) using the 1960 main field, secular variation, and acceleration coefficients to derive models for 1930, 1940 and 1950, from which the same core magnetic radius value, within 2% of the seismic value, is obtained. It is concluded that the mantle is a nearly-perfect insulator, while the core is a perfect conductor, on the decade time scale.
Limits to the Fraction of High-energy Photon Emitting Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Akerlof, Carl W.; Zheng, WeiKang
2013-02-01
After almost four years of operation, the two instruments on board the Fermi Gamma-ray Space Telescope have shown that the number of gamma-ray bursts (GRBs) with high-energy photon emission above 100 MeV cannot exceed roughly 9% of the total number of all such events, at least at the present detection limits. In a recent paper, we found that GRBs with photons detected in the Large Area Telescope have a surprisingly broad distribution with respect to the observed event photon number. Extrapolation of our empirical fit to numbers of photons below our previous detection limit suggests that the overall rate of such low flux events could be estimated by standard image co-adding techniques. In this case, we have taken advantage of the excellent angular resolution of the Swift mission to provide accurate reference points for 79 GRB events which have eluded any previous correlations with high-energy photons. We find a small but significant signal in the co-added field. Guided by the extrapolated power-law fit previously obtained for the number distribution of GRBs with higher fluxes, the data suggest that only a small fraction of GRBs are sources of high-energy photons.
Area, length and thickness conservation: Dogma or reality?
NASA Astrophysics Data System (ADS)
Moretti, Isabelle; Callot, Jean Paul
2012-08-01
The basic assumption of quantitative structural geology is the preservation of material during deformation. However, the hypothesis of volume conservation alone does not help to predict past or future geometries, and so this assumption is usually translated into bed length in 2D (or area in 3D) and thickness conservation. When subsurface data are missing, geologists may extrapolate surface data to depth using the kink-band approach. These extrapolations, preserving both thicknesses and dips, lead to geometries which are restorable but often erroneous, due to both disharmonic deformation and internal deformation of layers. First, the Bolivian Sub-Andean Zone case is presented to highlight the evolution of the concepts on which balancing is based, and the important role played by a decoupling level in enhancing disharmony. Second, analogue models are analyzed to test the validity of the balancing techniques. Chamberlin's excess area approach is shown to be valid on average. However, neither the lengths nor the thicknesses are preserved. We propose that in real cases the length preservation hypothesis during shortening could also be a wrong assumption. If the data are good enough to image the decollement level, the Chamberlin excess area method could be used to compute the bed length changes.
NASA Astrophysics Data System (ADS)
Hill, J. Grant; Peterson, Kirk A.; Knizia, Gerald; Werner, Hans-Joachim
2009-11-01
Accurate extrapolation to the complete basis set (CBS) limit of valence correlation energies calculated with explicitly correlated MP2-F12 and CCSD(T)-F12b methods has been investigated using a Schwenke-style approach for molecules containing both first and second row atoms. Extrapolation coefficients that are optimal for molecular systems containing first row elements differ from those optimized for second row analogs, hence values optimized for a combined set of first and second row systems are also presented. The new coefficients are shown to produce excellent results in both Schwenke-style and equivalent power-law-based two-point CBS extrapolations, with the MP2-F12/cc-pV(D,T)Z-F12 extrapolations producing an average error of just 0.17 mEh with a maximum error of 0.49 mEh for a collection of 23 small molecules. The use of larger basis sets, i.e., cc-pV(T,Q)Z-F12 and aug-cc-pV(Q,5)Z, in extrapolations of the MP2-F12 correlation energy leads to average errors that are smaller than the degree of confidence in the reference data (~0.1 mEh). The latter were obtained through use of very large basis sets in MP2-F12 calculations on small molecules containing both first and second row elements. CBS limits obtained from optimized coefficients for conventional MP2 are only comparable to the accuracy of the MP2-F12/cc-pV(D,T)Z-F12 extrapolation when the aug-cc-pV(5+d)Z and aug-cc-pV(6+d)Z basis sets are used. The CCSD(T)-F12b correlation energy is extrapolated as two distinct parts: CCSD-F12b and (T). While the CCSD-F12b extrapolations with smaller basis sets are statistically less accurate than those of the MP2-F12 correlation energies, this is presumably due to the slower basis set convergence of the CCSD-F12b method compared to MP2-F12. The use of larger basis sets in the CCSD-F12b extrapolations produces correlation energies with accuracies exceeding the confidence in the reference data (also obtained in large basis set F12 calculations). It is demonstrated that the use of the 3C(D) Ansatz is preferred for MP2-F12 CBS extrapolations. Optimal values of the geminal Slater exponent are presented for the diagonal, fixed amplitude Ansatz in MP2-F12 calculations, and these are also recommended for CCSD-F12b calculations.
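A common two-point power-law form of such CBS extrapolations (the simple 1/X^3 variant, not the optimized Schwenke coefficients reported in the paper) can be coded directly; the energies below are hypothetical.

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point 1/X^3 extrapolation of correlation energies to the CBS
    limit (Helgaker-style power law; a stand-in for the Schwenke-type
    coefficients optimized in the paper).

    e_x, e_y: correlation energies (hartree) for cardinal numbers x < y.
    """
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Illustrative MP2-F12-like correlation energies for a (T,Q) pair
# (hypothetical values, hartree):
e_t, e_q = -0.35210, -0.35342
print(f"E_CBS = {cbs_two_point(e_t, e_q, 3, 4):.5f} hartree")
```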
Apparent-Strain Correction for Combined Thermal and Mechanical Testing
NASA Technical Reports Server (NTRS)
Johnson, Theodore F.; O'Neil, Teresa L.
2007-01-01
Combined thermal and mechanical testing requires that the total strain be corrected for the coefficient of thermal expansion mismatch between the strain gage and the specimen, or apparent strain, when the temperature varies while a mechanical load is being applied. Collecting data for an apparent-strain test becomes problematic as the specimen size increases. If the test specimen cannot be placed in a variable-temperature test chamber to generate apparent-strain data with no mechanical loads, coupons can be used to generate the required data. The coupons, however, must have the same strain gage type, coefficient of thermal expansion, and constraints as the specimen to be useful. Obtaining apparent-strain data at temperatures lower than -320 F is challenging due to the difficulty of maintaining steady-state and uniform temperatures on a given specimen. Equations to correct for apparent strain in a real-time fashion and data from apparent-strain tests for composite and metallic specimens over a temperature range from -450 F to +250 F are presented in this paper. Three approaches to extrapolate apparent-strain data from -320 F to -430 F are presented and compared to the measured apparent-strain data. The first two approaches use a subset of the apparent-strain curves between -320 F and 100 F to extrapolate to -430 F, while the third approach extrapolates the apparent-strain curve over the temperature range of -320 F to +250 F to -430 F. The first two approaches are superior to the third, but the use of either is contingent upon the degree of non-linearity of the apparent-strain curve.
Modeling low-dose mortality and disease incubation period of inhalational anthrax in the rabbit.
Gutting, Bradford W; Marchette, David; Sherwood, Robert; Andrews, George A; Director-Myska, Alison; Channel, Stephen R; Wolfe, Daniel; Berger, Alan E; Mackie, Ryan S; Watson, Brent J; Rukhin, Andrey
2013-07-21
There is a need to advance our ability to conduct credible human risk assessments for inhalational anthrax associated with exposure to a low number of bacteria. Combining animal data with computational models of disease will be central to the low-dose and cross-species extrapolations required to achieve this goal. The objective of the current work was to apply and advance the competing risks (CR) computational model of inhalational anthrax, where data were collected from NZW rabbits exposed to aerosols of Ames strain Bacillus anthracis. An initial aim was to parameterize the CR model using high-dose rabbit data and then conduct a low-dose extrapolation. The CR low-dose attack rate was then compared against known low-dose rabbit data as well as the low-dose curve obtained when the entire rabbit dose-response data set was fitted to an exponential dose-response (EDR) model. The CR model predictions demonstrated excellent agreement with actual low-dose rabbit data. We next used a modified CR model (MCR) to examine disease incubation period (the time to reach a fever >40 °C). The MCR model predicted a germination period of 14.5 h following exposure to a low spore dose, which was confirmed by monitoring spore germination in the rabbit lung using PCR, and predicted a low-dose disease incubation period in the rabbit between 14.7 and 16.8 days. Overall, the CR and MCR models appeared to describe rabbit inhalational anthrax well. These results are discussed in the context of conducting laboratory studies in other relevant animal models, combining the CR/MCR model with other computational models of inhalational anthrax, and using the resulting information towards extrapolating a low-dose response prediction for man. Published by Elsevier Ltd.
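The EDR model referred to here has a one-parameter closed form, which makes the low-dose extrapolation explicit; the parameter value below is a placeholder, not the fitted rabbit value.

```python
import numpy as np

def attack_rate(dose, k):
    """Exponential dose-response model: probability of infection after
    exposure to a mean inhaled dose of `dose` spores."""
    return 1.0 - np.exp(-k * dose)

# Hypothetical single-spore infection probability, chosen only to show
# how the model behaves across several orders of magnitude in dose.
k = 1.0e-5
for dose in (1e2, 1e4, 1e6):
    print(f"dose {dose:8.0f} spores -> attack rate {attack_rate(dose, k):.4f}")
```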
NASA Astrophysics Data System (ADS)
Jahn, S.; Haigis, V.; Salanne, M.
2011-12-01
Thermal conductivity is an important physical parameter that controls the heat flow in the Earth's core and mantle. The heat flow from the core to the mantle influences mantle dynamics and the convective regime of the liquid outer core, which drives the geodynamo. Although thermal conductivities of important mantle minerals at ambient pressure are well-known (Hofmeister, 1999), experimentalists encounter major difficulties to measure thermal conductivities at high pressures and temperatures. Extrapolations of experimental data to high pressures have a large uncertainty and hence the heat transport in minerals at conditions of the deep mantle is not well constrained. Recently, the thermal conductivity of MgO at lower mantle conditions was computed from first-principles simulations (e.g. de Koker (2009), Stackhouse et al. (2010)). Here, we used classical molecular dynamics to calculate thermal conductivities of MgO and MgSiO3 in the perovskite and post-perovskite structures at different pressures and temperatures. The interactions between atoms were treated by an advanced ionic interaction model which was shown to describe the behavior of materials reliably within a wide pressure and temperature range (Jahn & Madden, 2007). Two alternative techniques were used and compared. In non-equilibrium MD, an energy flow is imposed on the system, and the thermal conductivity is taken to be inversely proportional to the temperature gradient that builds up in response to this flow. The other technique (which is still too expensive for first principles methods) uses standard equilibrium MD and extracts the thermal conductivity from energy current correlation functions, according to the Green-Kubo formula. As a benchmark for the interaction potential, we calculated the thermal conductivity of fcc MgO at 2000 K and 149 GPa, where data from ab-initio non-equilibrium MD are available (Stackhouse et al., 2010). The results agree within the error bars, which justifies the use of the model for the calculation of thermal conductivities. However, with the non-equilibrium technique, the conductivity depends strongly on the size of the simulation box. Therefore, a scaling to infinite system size has to be applied, which introduces some uncertainty to the final result. The equilibrium MD method, on the other hand, seems to be less sensitive to finite-size effects. We will present computed thermal conductivities of MgO and MgSiO3 in the perovskite and post-perovskite structures at 138 GPa and temperatures of 300 K and 3000 K, the latter corresponding to conditions in the D'' layer. This allows an assessment of the extrapolations to high pressures and temperatures used in the literature. Jahn S & Madden PA (2007) Phys. Earth Planet. Int. 162, 129; de Koker N (2009) Phys. Rev. Lett. 103, 125902; Hofmeister AM (1999) Science 283, 1699; Stackhouse S et al. (2010) Phys. Rev. Lett. 104, 208501
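A minimal sketch of the equilibrium (Green-Kubo) route, assuming a heat-current time series J(t) is available from the MD code; the synthetic AR(1) current below exists only to exercise the routine, and production codes average many long trajectories and treat the noisy correlation tail carefully.

```python
import numpy as np

def green_kubo_kappa(J, dt, volume, temperature, kB=1.380649e-23):
    """Thermal conductivity (W/m/K) from the Green-Kubo relation:
    kappa = 1/(3 V kB T^2) * integral of <J(0).J(t)> dt,
    with J the extensive heat current vector, shape (n_steps, 3), SI units.
    """
    n = len(J)
    nlag = n // 4  # truncate the correlation function
    acf = np.empty(nlag)
    for lag in range(nlag):
        # <J(0).J(t)> averaged over time origins
        acf[lag] = np.mean(np.sum(J[: n - lag] * J[lag:], axis=1))
    return np.trapz(acf, dx=dt) / (3.0 * volume * kB * temperature**2)

# Synthetic, exponentially correlated current as a stand-in for MD output.
rng = np.random.default_rng(1)
noise = rng.normal(size=(8000, 3))
J = np.empty_like(noise)
J[0] = noise[0]
for i in range(1, len(noise)):  # AR(1) process
    J[i] = 0.9 * J[i - 1] + noise[i]
J *= 1.0e-16  # arbitrary magnitude
print(green_kubo_kappa(J, dt=1.0e-15, volume=1.0e-26, temperature=2000.0))
```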
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fry, R.J.M.
The author discusses some examples of how different experimental animal systems have helped to answer questions about the effects of radiation, in particular carcinogenesis, and indicates how the new experimental model systems promise an even more exciting future. Entwined in these themes will be observations about susceptibility and extrapolation across species. The hope of developing acceptable methods of extrapolation of estimates of the risk of radiogenic cancer increases as molecular biology reveals the trail of remarkable similarities in the genetic control of many functions common to many species. A major concern about even attempting to extrapolate estimates of risks of radiation-induced cancer across species has been that the mechanisms of carcinogenesis were so different among different species that it would negate the validity of extrapolation. The more that has become known about the genes involved in cancer, especially those related to the initial events in carcinogenesis, the more have the reasons for considering methods of extrapolation across species increased.
Correlation energy extrapolation by many-body expansion
Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...
2017-01-09
Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly less computational resources.
Polidori, David; Rowley, Clarence
2014-07-22
The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method.
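A minimal sketch of the traditional mono-exponential back-extrapolation that the paper improves upon: fit ln C against t on the quasi-linear decay segment, extrapolate to the injection time, and divide the dose by the back-extrapolated concentration. Concentrations, sampling times, and dose are invented for illustration.

```python
import numpy as np

# Hypothetical ICG concentrations (mg/L) sampled 2-5 min post-injection.
t = np.array([120.0, 150.0, 180.0, 240.0, 300.0])  # s
c = np.array([4.8, 4.4, 4.05, 3.45, 2.95])

# Mono-exponential back-extrapolation: linear fit of ln C vs t,
# extrapolated back to the injection time t = 0.
slope, intercept = np.polyfit(t, np.log(c), 1)
c0 = np.exp(intercept)  # back-extrapolated mixing concentration

dose_mg = 25.0  # injected ICG dose (assumed)
plasma_volume_L = dose_mg / c0
print(f"C0 = {c0:.2f} mg/L -> plasma volume = {plasma_volume_L:.2f} L")
```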
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, J; Culberson, W; DeWerd, L
Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a 90Sr/90Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was -1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experimental results suggest that an entrance window is not needed in order for an extrapolation chamber to provide accurate dose rate measurements for a planar ophthalmic applicator.
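The Bragg-Gray step, dose rate from the limiting slope of current versus air gap, can be sketched as follows; the currents, stopping-power ratio, and electrode area are assumed values, not the measured ones.

```python
import numpy as np

# Hypothetical ionization currents (A) at several air-gap widths (m)
# from a windowless planar extrapolation chamber; illustrative only.
gap = np.array([50e-6, 100e-6, 150e-6, 200e-6])
current = np.array([1.02e-12, 2.01e-12, 3.03e-12, 4.00e-12])

dI_dl, _ = np.polyfit(gap, current, 1)  # limiting slope dI/dl (A/m)

W_over_e = 33.97             # J/C, mean energy per ion pair in air
s_water_air = 1.112          # assumed water/air stopping-power ratio (90Sr/90Y)
rho_air = 1.293              # kg/m^3 at reference conditions
area = np.pi * (2.0e-3)**2   # collecting-electrode area (assumed 2 mm radius)

# Bragg-Gray cavity relation for the surface dose rate to water:
dose_rate = W_over_e * s_water_air * dI_dl / (rho_air * area)
print(f"dose rate = {dose_rate:.3f} Gy/s")
```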
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kassouf, Amine, E-mail: amine.kassouf@agroparistech.fr; INRA, UMR1145 Ingénierie Procédés Aliments, 1 Avenue des Olympiades, 91300 Massy; AgroParisTech, UMR1145 Ingénierie Procédés Aliments, 16 rue Claude Bernard, 75005 Paris
2014-11-15
Highlights: • An innovative technique, MIR-ICA, was applied to plastic packaging separation. • This study was carried out on PE, PP, PS, PET and PLA plastic packaging materials. • ICA was applied to discriminate plastics and 100% separation rates were obtained. • Analyses performed on two spectrometers proved the reproducibility of the method. • MIR-ICA is a simple and fast technique allowing plastic identification/classification. - Abstract: Plastic packaging wastes have increased considerably in recent decades, raising a major and serious public concern on political, economical and environmental levels. This kind of problem is generally dealt with by landfilling and energy recovery. However, these two methods are becoming more and more expensive and hazardous to public health and the environment. Therefore, recycling is gaining worldwide consideration as a solution to decrease the growing volume of plastic packaging wastes and simultaneously reduce the consumption of oil required to produce virgin resin. Nevertheless, a major shortcoming in recycling is the sorting of plastic wastes. In this paper, a feasibility study was performed in order to test the potential of an innovative approach combining mid infrared (MIR) spectroscopy with independent component analysis (ICA), as a simple and fast approach which could achieve high separation rates. This approach (MIR-ICA) gave 100% discrimination rates in the separation of all studied plastics: polyethylene terephthalate (PET), polyethylene (PE), polypropylene (PP), polystyrene (PS) and polylactide (PLA). In addition, some more specific discriminations were obtained, separating plastic materials belonging to the same polymer family, e.g. high density polyethylene (HDPE) from low density polyethylene (LDPE). High discrimination rates were obtained despite the heterogeneity among samples, especially differences in colors, thicknesses and surface textures. The reproducibility of the proposed approach was also tested using two spectrometers with considerable differences in their sensitivities. Discrimination rates were not affected, proving that the developed approach could be extrapolated to different spectrometers. MIR combined with ICA is a promising tool for plastic waste separation that can help improve performance in this field; however, further technological improvements and developments are required before it can be applied at an industrial level, given that all tests presented here were performed under laboratory conditions.
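A minimal sketch of the ICA step on spectra, using scikit-learn's FastICA on synthetic two-component mixtures as stand-ins for baseline-corrected MIR scans; band positions, sample counts, and the downstream use of the scores are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# Synthetic stand-in for MIR spectra: mixtures of two "pure polymer"
# signatures plus noise (real inputs would be PE/PET/... scans).
wavenumbers = np.linspace(600, 4000, 850)
s1 = np.exp(-((wavenumbers - 1715) / 25.0) ** 2)  # carbonyl-like band (PET-ish)
s2 = np.exp(-((wavenumbers - 2915) / 35.0) ** 2)  # C-H stretch band (PE-ish)
mixing = rng.random((40, 2))
spectra = mixing @ np.vstack([s1, s2]) + 0.01 * rng.normal(size=(40, 850))

# ICA unmixes statistically independent source signatures; the per-sample
# scores then serve as features to discriminate the polymer families.
ica = FastICA(n_components=2, random_state=0)
weights = ica.fit_transform(spectra)  # independent-component scores
print(weights.shape)                  # (40, 2) -> input to classification/sorting
```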
NASA Technical Reports Server (NTRS)
Choi, S.; Joiner, J.; Choi, Y.; Duncan, B. N.; Bucsela, E.
2014-01-01
We derive free-tropospheric NO2 volume mixing ratios (VMRs) and stratospheric column amounts of NO2 by applying a cloud slicing technique to data from the Ozone Monitoring Instrument (OMI) on the Aura satellite. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud scene pressure is proportional to the NO2 VMR. In this work, we use a sample of nearby OMI pixel data from a single orbit for the linear fit. The OMI data include cloud scene pressures from the rotational-Raman algorithm and above-cloud NO2 vertical column density (VCD) (defined as the NO2 column from the cloud scene pressure to the top-of-the-atmosphere) from a differential optical absorption spectroscopy (DOAS) algorithm. Estimates of stratospheric column NO2 are obtained by extrapolating the linear fits to the tropopause. We compare OMI-derived NO2 VMRs with in situ aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. The agreement is generally within the estimated uncertainties when appropriate data screening is applied. We then derive a global seasonal climatology of free-tropospheric NO2 VMR in cloudy conditions. Enhanced NO2 in the free troposphere commonly appears near polluted urban locations where NO2 produced in the boundary layer may be transported vertically out of the boundary layer and then horizontally away from the source. Signatures of lightning NO2 are also shown throughout low and middle latitude regions in summer months. A profile analysis of our cloud slicing data indicates signatures of uplifted and transported anthropogenic NO2 in the middle troposphere as well as lightning-generated NO2 in the upper troposphere. Comparison of the climatology with simulations from the Global Modeling Initiative (GMI) for cloudy conditions (cloud optical thicknesses > 10) shows similarities in the spatial patterns of continental pollution outflow. However, there are also some differences in the seasonal variation of free-tropospheric NO2 VMRs near highly populated regions and in areas affected by lightning-generated NOx. Stratospheric column NO2 obtained from cloud slicing agrees well with other independently-generated estimates, providing further confidence in the free-tropospheric results.
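The core of cloud slicing is a linear fit of above-cloud column versus cloud scene pressure, with the slope converted to a mixing ratio; the sketch below uses invented pixel values, not OMI data.

```python
import numpy as np

# Hypothetical above-cloud NO2 columns (molec/cm^2) vs cloud scene
# pressure (hPa) for a set of nearby pixels; illustrative values.
cloud_p = np.array([450.0, 500.0, 560.0, 620.0, 680.0])
vcd = np.array([2.05e15, 2.20e15, 2.38e15, 2.57e15, 2.74e15])

# Cloud slicing: the slope dVCD/dp is proportional to the NO2 volume
# mixing ratio in the layer sampled by the cloud tops.
slope, intercept = np.polyfit(cloud_p, vcd, 1)  # molec cm^-2 hPa^-1

# Air column per unit pressure: dN/dp = 1/(m_air * g), converted to
# molecules per cm^2 per hPa.
g = 9.80665       # m/s^2
m_air = 4.81e-26  # kg, mean mass of an air molecule
air_per_hPa = 100.0 / (m_air * g) / 1.0e4
vmr = slope / air_per_hPa
print(f"free-tropospheric NO2 VMR = {vmr*1e12:.0f} pptv")
```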
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiegelmann, T.; Solanki, S. K.; Barthol, P.
Magneto-static models may overcome some of the issues facing force-free magnetic field extrapolations. So far they have seen limited use and have faced problems when applied to quiet-Sun data. Here we present a first application to an active region. We use solar vector magnetic field measurements gathered by the IMaX polarimeter during the flight of the Sunrise balloon-borne solar observatory in 2013 June as boundary conditions for a magneto-static model of the higher solar atmosphere above an active region. The IMaX data are embedded in active region vector magnetograms observed with SDO/HMI. This work continues our magneto-static extrapolation approach, which was applied earlier to a quiet-Sun region observed with Sunrise I. In an active region the signal-to-noise ratio in the measured Stokes parameters is considerably higher than in the quiet-Sun, and consequently the IMaX measurements of the horizontal photospheric magnetic field allow us to specify the free parameters of the model in a special class of linear magneto-static equilibria. The high spatial resolution of IMaX (110-130 km, pixel size 40 km) enables us to model the non-force-free layer between the photosphere and the mid-chromosphere vertically with about 50 grid points. In our approach we can incorporate some aspects of the mixed-beta layer of photosphere and chromosphere, e.g., taking a finite Lorentz force into account, which was not possible with lower-resolution photospheric measurements in the past. The linear model does not, however, permit us to model intrinsic nonlinear structures like strongly localized electric currents.
NASA Astrophysics Data System (ADS)
Luke, Adam; Vrugt, Jasper A.; AghaKouchak, Amir; Matthew, Richard; Sanders, Brett F.
2017-07-01
Nonstationary extreme value analysis (NEVA) can improve the statistical representation of observed flood peak distributions compared to stationary (ST) analysis, but management of flood risk relies on predictions of out-of-sample distributions for which NEVA has not been comprehensively evaluated. In this study, we apply split-sample testing to 1250 annual maximum discharge records in the United States and compare the predictive capabilities of NEVA relative to ST extreme value analysis using a log-Pearson Type III (LPIII) distribution. The parameters of the LPIII distribution in the ST and nonstationary (NS) models are estimated from the first half of each record using Bayesian inference. The second half of each record is reserved to evaluate the predictions under the ST and NS models. The NS model is applied for prediction by (1) extrapolating the trend of the NS model parameters throughout the evaluation period and (2) using the NS model parameter values at the end of the fitting period to predict with an updated ST model (uST). Our analysis shows that the ST predictions are preferred, overall. NS model parameter extrapolation is rarely preferred. However, if fitting period discharges are influenced by physical changes in the watershed, for example from anthropogenic activity, the uST model is strongly preferred relative to ST and NS predictions. The uST model is therefore recommended for evaluation of current flood risk in watersheds that have undergone physical changes. Supporting information includes a MATLAB® program that estimates the (ST/NS/uST) LPIII parameters from annual peak discharge data through Bayesian inference.
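A stripped-down version of the split-sample test for the stationary LPIII model (a maximum-likelihood fit rather than the study's Bayesian inference, and a synthetic record rather than gauge data) might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic annual-maximum discharge record (m^3/s) as a stand-in for a
# gauged series; LPIII = Pearson III fitted to log-transformed flows.
q = rng.lognormal(mean=6.0, sigma=0.5, size=80)
first, second = q[:40], q[40:]

# Fit the stationary LPIII model to the first half of the record...
skew, loc, scale = stats.pearson3.fit(np.log10(first))

# ...and evaluate it out of sample on the reserved half, e.g., the
# observed exceedance rate of the fitted 10-year flood.
q10 = 10.0 ** stats.pearson3.ppf(0.9, skew, loc=loc, scale=scale)
observed_rate = np.mean(second > q10)
print(f"10-yr flood = {q10:.0f} m^3/s, observed exceedance rate = {observed_rate:.2f}")
```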
Understanding Femtosecond-Pulse Laser Damage through Fundamental Physics Simulations
NASA Astrophysics Data System (ADS)
Mitchell, Robert A., III
It did not take long after the invention of the laser for the field of laser damage to appear. For several decades researchers have been studying how lasers damage materials, both for the basic scientific understanding of highly nonequilibrium processes as well as for industrial applications. Femtosecond pulse lasers create little collateral damage and a readily reproducible damage pattern. They are easily tailored to desired specifications and are particularly powerful and versatile tools, contributing even more industrial interest in the field. As with most long-standing fields of research, many theoretical tools have been developed to model the laser damage process, covering a wide range of complexities and regimes of applicability. However, most of the modeling methods developed are either too limited in spatial extent to model the full morphology of the damage crater, or incorporate only a small subset of the important physics and require numerous fitting parameters and assumptions in order to match values interpolated from experimental data. Demonstrated in this work is the first simulation method capable of fundamentally modeling the full laser damage process, from the laser interaction all the way through to the resolidification of the target, on a large enough scale to capture the full morphology of the laser damage crater so that it can be compared directly to experimental measurements instead of extrapolated values, and all without any fitting parameters. The design, implementation, and testing of this simulation technique, based on a modified version of the particle-in-cell (PIC) method, is presented. For a 60 fs, 1 micron wavelength laser pulse with fluences of 0.5 J/cm2, 1.0 J/cm2, and 2.0 J/cm2, the resulting laser damage craters in copper are shown and, using the same technique applied to experimental crater morphologies, a laser damage fluence threshold of 0.15 J/cm2 is calculated, consistent with current experiments performed under conditions similar to those in the simulation. Lastly, this method is applied to the phenomenon known as LIPSS, or Laser-Induced Periodic Surface Structures; a problem of fundamental importance that is also of great interest for industrial applications. While LIPSS have been observed for decades in laser damage experiments, the exact physical mechanisms leading to the periodic corrugation on the surface of a target have been highly debated, with no general consensus. Applying this technique to a situation known to create LIPSS in a single shot, the generation of this periodicity is observed, the wavelength of the damage is consistent with experimental measurements and, due to the fundamental nature of the simulation method, the physical mechanisms behind LIPSS are examined. The mechanism behind LIPSS formation in the studied regime is shown to be the formation of and interference with an evanescent surface electromagnetic wave known as a surface plasmon-polariton. This shows that not only can this simulation technique model a basic laser damage situation, but it is also flexible and powerful enough to be applied to complex areas of research, allowing for new physical insight in regimes that are difficult to probe experimentally.
NASA Astrophysics Data System (ADS)
Schauwecker, Simone; Rohrer, Mario; Huggel, Christian; Salzmann, Nadine; Montoya, Nilton; Endries, Jason; Perry, Baker
2016-04-01
The snow line altitude, defined as the line separating snow from ice or firn surfaces, is among the most important parameters in the glacier mass and energy balance of tropical glaciers, since it determines net shortwave radiation via surface albedo. Therefore, hydroglaciological models require estimations of the melting layer during precipitation events, as well as parameterisations of the transient snow line. Typically, the height of the melting layer is estimated by simple air temperature extrapolation techniques, using data from nearby meteorological stations and constant lapse rates. Nonetheless, in the Peruvian mountain ranges, stations at the height of glacier tongues (>5000 m asl.) are scarce and the extrapolation techniques must use data from distant and much lower stations, which need careful prior validation. Thus, reliable snowfall level and snow line altitude estimates from multiple data sets are necessary. Here, we assemble and analyse data from multiple sources (remote sensing, in-situ station data, reanalysis data) in order to assess their applicability in estimating both the melting layer and the snow line altitude. We especially focus on the potential of radar bright band data from TRMM and CloudSat satellite data for use as a proxy for the snow/rain transition height. As expected for tropical regions, the seasonal and regional variability in the snow line altitude is comparatively low. During the course of the dry season, Landsat satellite as well as webcam images show that the transient snow line generally rises, interrupted by light snowfall or graupel events with low precipitation amounts and fast decay rates. We show limitations and possibilities of different data sources as well as their applicability to validate temperature extrapolation methods. Further on, we analyse the implications of the relatively low variability in seasonal snow line altitude on local glacier mass balance gradients. We show that the snow line altitude - ranging within only a few hundred meters within one year - determines the observed high mass balance gradients. An increase in air temperature of, for example, 1°C during precipitation events may have even stronger impacts on the mass balances of tropical glaciers than it would on those of mid-latitude glaciers. This is an important reason for the high sensitivity of tropical glaciers to past and current climatic changes.
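The simple constant-lapse-rate estimate of the melting-layer height that the abstract says needs careful validation is a one-line formula; the station values below are invented.

```python
def melting_layer_height(t_station, z_station, lapse_rate=-0.0065):
    """Height (m asl.) of the 0 °C level from a single station reading,
    assuming a constant lapse rate (default -6.5 K/km): the simple air
    temperature extrapolation technique described in the abstract.
    """
    return z_station + (0.0 - t_station) / lapse_rate

# e.g., a hypothetical station at 3800 m asl. reporting +8 °C:
print(f"melting layer = {melting_layer_height(8.0, 3800.0):.0f} m asl.")
```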
Pei, Jiquan; Han, Steve; Liao, Haijun; Li, Tao
2014-01-22
A highly efficient and simple-to-implement Monte Carlo algorithm is proposed for the evaluation of the Rényi entanglement entropy (REE) of the quantum dimer model (QDM) at the Rokhsar-Kivelson (RK) point. It makes possible the evaluation of REE at the RK point to the thermodynamic limit for a general QDM. We apply the algorithm to a QDM defined on the triangular and the square lattice in two dimensions and the simple and the face centered cubic (fcc) lattice in three dimensions. We find the REE on all these lattices follows perfect linear scaling in the thermodynamic limit, apart from an even-odd oscillation in the case of the square lattice. We also evaluate the topological entanglement entropy (TEE) with both a subtraction and an extrapolation procedure. We find the QDMs on both the triangular and the fcc lattice exhibit robust Z2 topological order. The expected TEE of ln2 is clearly demonstrated in both cases. Our large scale simulation also proves the recently proposed extrapolation procedure in cylindrical geometry to be a highly reliable way to extract the TEE of a topologically ordered system.
Coupled-cluster and explicitly correlated perturbation-theory calculations of the uracil anion.
Bachorz, Rafał A; Klopper, Wim; Gutowski, Maciej
2007-02-28
A valence-type anion of the canonical tautomer of uracil has been characterized using explicitly correlated second-order Moller-Plesset perturbation theory (RI-MP2-R12) in conjunction with conventional coupled-cluster theory with single, double, and perturbative triple excitations. At this level of electron-correlation treatment and after inclusion of a zero-point vibrational energy correction, determined in the harmonic approximation at the RI-MP2 level of theory, the valence anion is adiabatically stable with respect to the neutral molecule by 40 meV. The anion is characterized by a vertical detachment energy of 0.60 eV. To obtain accurate estimates of the vertical and adiabatic electron binding energies, a scheme was applied in which electronic energy contributions from various levels of theory were added, each of them extrapolated to the corresponding basis-set limit. The MP2 basis-set limits were also evaluated using an explicitly correlated approach, and the results of these calculations are in agreement with the extrapolated values. A remarkable feature of the valence anionic state is that the adiabatic electron binding energy is positive but smaller than the adiabatic electron binding energy of the dipole-bound state.
Bravo, Felipe; Hann, D.W.; Maguire, Douglas A.
2001-01-01
Mixed conifer and hardwood stands in southwestern Oregon were studied to explore the hypothesis that competition effects on individual-tree growth and survival will differ according to the species comprising the competition measure. Likewise, it was hypothesized that competition measures should extrapolate best if crown-based surrogates are given preference over diameter-based (basal area based) surrogates. Diameter growth and probability of survival were modeled for individual Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) trees growing in pure stands. Alternative models expressing one-sided and two-sided competition as a function of either basal area or crown structure were then applied to other plots in which Douglas-fir was mixed with other conifers and (or) hardwood species. Crown-based variables outperformed basal area based variables as surrogates for one-sided competition in both diameter growth and survival probability, regardless of species composition. In contrast, two-sided competition was best represented by total basal area of competing trees. Surrogates reflecting differences in crown morphology among species relate more closely to the mechanics of competition for light and, hence, facilitate extrapolation to species combinations for which no observations are available.
An evaluation of rise time characterization and prediction methods
NASA Technical Reports Server (NTRS)
Robinson, Leick D.
1994-01-01
One common method of extrapolating sonic boom waveforms from aircraft to ground is to calculate the nonlinear distortion and then add a rise time to each shock by a simple empirical rule. One common rule is the '3 over P' rule, which calculates the rise time in milliseconds as three divided by the shock amplitude in psf. This rule was compared with the results of ZEPHYRUS, a comprehensive algorithm which calculates sonic boom propagation and extrapolation with the combined effects of nonlinearity, attenuation, dispersion, geometric spreading, and refraction in a stratified atmosphere. It is shown that the simple empirical rule considerably overestimates the rise time. In addition, the empirical rule does not account for variations in the rise time due to humidity variation or propagation history. It is also demonstrated that the rise time is only an approximate indicator of perceived loudness. Three waveforms with identical characteristics (shock placement, amplitude, and rise time), but with different shock shapes, are shown to give different calculated loudness. This paper is based in part on work performed at the Applied Research Laboratories, the University of Texas at Austin, and supported by NASA Langley.
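The '3 over P' rule itself is trivial to state in code, which makes the comparison concrete:

```python
def rise_time_ms(shock_amplitude_psf):
    """Empirical '3 over P' rule: rise time in milliseconds is three
    divided by the shock overpressure in psf."""
    return 3.0 / shock_amplitude_psf

# A 1.5 psf shock is assigned a 2 ms rise time; the paper's comparison
# with ZEPHYRUS indicates this rule considerably overestimates rise times.
print(f"{rise_time_ms(1.5):.1f} ms")
```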
Allometry of visceral organs in living amniotes and its implications for sauropod dinosaurs
Franz, Ragna; Hummel, Jürgen; Kienzle, Ellen; Kölle, Petra; Gunga, Hanns-Christian; Clauss, Marcus
2009-01-01
Allometric equations are often used to extrapolate traits in animals for which only body mass estimates are known, such as dinosaurs. One important decision is whether these equations should be based on mammal, bird or reptile data. To address whether this choice will have a relevant influence on reconstructions, we compared allometric equations for birds and mammals from the literature to those for reptiles derived from both published and hitherto unpublished data. Organs studied included the heart, kidneys, liver and gut, as well as gut contents. While the available data indicate that gut content mass does not differ between the clades, the organ masses for reptiles are generally lower than those for mammals and birds. In particular, gut tissue mass is significantly lower in reptiles. When applying the results in the reconstruction of a sauropod dinosaur, the estimated volume of the coelomic cavity greatly exceeds the estimated volume of the combined organ masses, irrespective of the allometric equation used. Therefore, substantial deviation of sauropod organ allometry from that of the extant vertebrates can be allowed conceptually. Extrapolations of retention times from estimated gut contents mass and food intake do not suggest digestive constraints on sauropod dinosaur body size. PMID:19324837
Chan, R W; Titze, I R
2000-01-01
The viscoelastic shear properties of human vocal fold mucosa (cover) were previously measured as a function of frequency [Chan and Titze, J. Acoust. Soc. Am. 106, 2008-2021 (1999)], but data were obtained only in a frequency range of 0.01-15 Hz, an order of magnitude below typical frequencies of vocal fold oscillation (on the order of 100 Hz). This study represents an attempt to extrapolate the data to higher frequencies based on two viscoelastic theories, (1) a quasilinear viscoelastic theory widely used for the constitutive modeling of the viscoelastic properties of biological tissues [Fung, Biomechanics (Springer-Verlag, New York, 1993), pp. 277-292], and (2) a molecular (statistical network) theory commonly used for the rheological modeling of polymeric materials [Zhu et al., J. Biomech. 24, 1007-1018 (1991)]. Analytical expressions of elastic and viscous shear moduli, dynamic viscosity, and damping ratio based on the two theories with specific model parameters were applied to curve-fit the empirical data. Results showed that the theoretical predictions matched the empirical data reasonably well, allowing for parametric descriptions of the data and their extrapolations to frequencies of phonation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moultos, Othonas A.; Economou, Ioannis G.; Zhang, Yong
Molecular dynamics simulations were carried out to study the self-diffusion coefficients of CO2, methane, propane, n-hexane, n-hexadecane, and various poly(ethylene glycol) dimethyl ethers (glymes in short, CH3O–(CH2CH2O)n–CH3 with n = 1, 2, 3, and 4, labeled as G1, G2, G3, and G4, respectively) at different conditions. Various system sizes were examined. The widely used Yeh and Hummer [J. Phys. Chem. B 108, 15873 (2004)] correction for the prediction of the diffusion coefficient at the thermodynamic limit was applied and shown to be accurate in all cases compared to extrapolated values at infinite system size. The magnitude of the correction, in all cases examined, is significant, with the smallest systems examined giving for some cases a self-diffusion coefficient approximately 15% lower than the infinite-system-size extrapolated value. The results suggest that finite-size corrections to computed self-diffusivities must be used in order to obtain accurate results.
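For reference, the cited Yeh-Hummer correction has a simple closed form; a minimal sketch with hypothetical inputs (the constant 2.837297 is the lattice self-term for a cubic periodic box):

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
XI_CUBIC = 2.837297    # self-term constant for a cubic periodic box

def yeh_hummer(d_pbc: float, temp_k: float, eta_pa_s: float, box_m: float) -> float:
    """Finite-size-corrected self-diffusion coefficient:
    D_inf = D_PBC + xi * kB * T / (6 * pi * eta * L)."""
    return d_pbc + XI_CUBIC * KB * temp_k / (6.0 * math.pi * eta_pa_s * box_m)

# Hypothetical values: D_PBC in m^2/s, T in K, shear viscosity in Pa*s, L in m
print(yeh_hummer(2.0e-9, 298.15, 8.9e-4, 4.0e-9))
```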
Landsat Thematic Mapper monitoring of turbid inland water quality
NASA Technical Reports Server (NTRS)
Lathrop, Richard G., Jr.
1992-01-01
This study reports on an investigation of water quality calibration algorithms under turbid inland water conditions using Landsat Thematic Mapper (TM) multispectral digital data. TM data and water quality observations (total suspended solids and Secchi disk depth) were obtained near-simultaneously and related using linear regression techniques. The relationships between reflectance and water quality for Green Bay and Lake Michigan were compared with results for Yellowstone and Jackson Lakes, Wyoming. Results show similarities in the water quality-reflectance relationships; however, the algorithms derived for Green Bay - Lake Michigan cannot be extrapolated to Yellowstone and Jackson Lake conditions.
Laboratory simulation of field-aligned currents
NASA Technical Reports Server (NTRS)
Wessel, Frank J.; Rostoker, Norman
1993-01-01
A summary of progress during the period Apr. 1992 to Mar. 1993 is provided. Objectives of the research are (1) to simulate, via laboratory experiments, the three terms of the field-aligned current equation; (2) to simulate auroral-arc formation processes by configuring the boundary conditions of the experimental chamber and plasma parameters to produce highly localized return currents at the end of a field-aligned current system; and (3) to extrapolate these results, using theoretical and computational techniques, to the problem of magnetospheric-ionospheric coupling and to compare them with published literature signatures of auroral-arc phenomena.
Completing and Adapting Models of Biological Processes
NASA Technical Reports Server (NTRS)
Margaria, Tiziana; Hinchey, Michael G.; Raffelt, Harald; Rash, James L.; Rouff, Christopher A.; Steffen, Bernhard
2006-01-01
We present a learning-based method for model completion and adaptation, which is based on the combination of two approaches: 1) R2D2C, a technique for mechanically transforming system requirements via provably equivalent models into running code, and 2) automata-learning-based model extrapolation. The intended impact of this new combination is to make model completion and adaptation accessible to experts in the field, such as biologists or engineers. The principle is briefly illustrated by generating models of biological procedures concerning gene activities in the production of proteins, although the main application will concern autonomic systems for space exploration.
Islam, Saleem
2017-04-01
Achalasia is a rare neurogenic motility disorder of the esophagus, occurring in approximately 0.11 cases per 100,000 children. The combination of problems (aperistalsis, hypertensive lower esophageal sphincter (LES), and lack of receptive LES relaxation) results in patients having symptoms of progressive dysphagia, weight loss, and regurgitation. Treatment modalities have evolved over the past few decades from balloon dilation and botulinum toxin injection to laparoscopic Heller myotomy and endoscopic myotomy. Most data on achalasia management are extrapolated to children from adult experience. This article describes the current understanding of the pathogenesis and discusses newer therapeutic techniques as well as controversies in management. Copyright © 2017 Elsevier Inc. All rights reserved.
Advanced Computational Techniques for Hypersonic Propulsion
NASA Technical Reports Server (NTRS)
Povinelli, Louis A.
1996-01-01
CFD has played a major role in the resurgence of hypersonic flight, on the premise that numerical methods will allow us to perform simulations at conditions for which no ground test capability exists. Validation of CFD methods is being established using the experimental data base available, which is below Mach 8. It is important, however, to realize the limitations involved in the extrapolation process as well as the deficiencies that exist in numerical methods at the present time. Current features of CFD codes are examined for application to propulsion system components. The shortcomings in simulation and modeling are identified and discussed.
Sputtering of cobalt and chromium by argon and xenon ions near the threshold energy region
NASA Technical Reports Server (NTRS)
Handoo, A. K.; Ray, P. K.
1993-01-01
Sputtering yields of cobalt and chromium by argon and xenon ions with energies below 50 eV are reported. The targets were electroplated on copper substrates. Measurable sputtering yields were obtained from cobalt with ion energies as low as 10 eV. The ion beams were produced by an ion gun. A radioactive tracer technique was used for the quantitative measurement of the sputtering yield. Co-57 and Cr-51 were used as tracers. The yield-energy curves are observed to be concave, which brings into question the practice of finding threshold energies by linear extrapolation.
Absolute photon-flux measurements in the vacuum ultraviolet
NASA Technical Reports Server (NTRS)
Samson, J. A. R.; Haddad, G. N.
1974-01-01
Absolute photon-flux measurements in the vacuum ultraviolet have been extended to short wavelengths by use of rare-gas ionization chambers. The technique involves the measurement of the ion current as a function of the gas pressure in the ion chamber. The true value of the ion current, and hence the absolute photon flux, is obtained by extrapolating the ion current to zero gas pressure. Examples are given at 162 and 266 A. The short-wavelength limit is determined only by the sensitivity of the current-measuring apparatus and by present knowledge of the photoionization processes that occur in the rare gases.
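The zero-pressure extrapolation described above amounts to a straight-line fit of ion current against pressure; a minimal sketch with made-up readings:

```python
import numpy as np

# Hypothetical ion-chamber readings: gas pressure (arbitrary units) vs. current
pressure = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
current = np.array([9.62, 9.25, 8.90, 8.52, 8.17])

slope, i0 = np.polyfit(pressure, current, 1)
print(i0)  # extrapolated ion current at zero pressure -> absolute photon flux
```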
Extension of similarity test procedures to cooled engine components with insulating ceramic coatings
NASA Technical Reports Server (NTRS)
Gladden, H. J.
1980-01-01
Material thermal conductivity was analyzed for its effect on the thermal performance of air cooled gas turbine components, both with and without a ceramic thermal-barrier material, tested at reduced temperatures and pressures. The analysis shows that neglecting the material thermal conductivity can contribute significant errors when metal-wall-temperature test data taken on a turbine vane are extrapolated to engine conditions. This error in metal temperature for an uncoated vane is of opposite sign from that for a ceramic-coated vane. A correction technique is developed for both ceramic-coated and uncoated components.
Statistical summaries of fatigue data for design purposes
NASA Technical Reports Server (NTRS)
Wirsching, P. H.
1983-01-01
Two methods are discussed for constructing a design curve on the safe side of fatigue data. Both the tolerance interval and equivalent prediction interval (EPI) concepts provide such a curve while accounting for both the distribution of the estimators in small samples and the data scatter. The EPI is also useful as a mechanism for providing necessary statistics on S-N data for a full reliability analysis which includes uncertainty in all fatigue design factors. Examples of statistical analyses of the general strain-life relationship are presented. The tolerance limit and EPI techniques for defining a design curve are demonstrated. Examples using WASPALOY B and RQC-100 data demonstrate that a reliability model could be constructed by considering the fatigue strength and fatigue ductility coefficients as two independent random variables. A technique given for establishing the fatigue strength for high-cycle lives relies on an extrapolation technique and also accounts for "runouts." A reliability model or design value can be specified.
Passive infrared ice detection for helicopter applications
NASA Technical Reports Server (NTRS)
Dershowitz, Adam L.; Hansman, R. John, Jr.
1990-01-01
A technique is proposed to remotely detect rotor icing on helicopters by using passive IR thermometry to detect the warming caused by latent heat release as supercooled water freezes. During icing, the ice accretion region will be warmer than the un-iced trailing edge, resulting in a characteristic chordwise temperature profile. Preliminary tests were conducted on a static model in the NASA Icing Research Tunnel for a variety of wet (glaze) and dry (rime) ice conditions. The chordwise temperature profiles were confirmed by observation with an IR thermal video system and thermocouple observations. The IR observations were consistent with predictions of the LEWICE ice accretion code, which was used to extrapolate the observations to rotor icing conditions. Based on the static observations, the passive IR ice detection technique appears promising; however, further testing on rotating blades is required.
NASA Astrophysics Data System (ADS)
Olsen, M. K.
2017-02-01
We propose and analyze a pumped and damped Bose-Hubbard dimer as a source of continuous-variable Einstein-Podolsky-Rosen (EPR) steering with non-Gaussian statistics. We use the approximate truncated Wigner and the exact positive-P representations to calculate and compare predictions for intensities, second-order quantum correlations, and third- and fourth-order cumulants. We find agreement for intensities and the products of inferred quadrature variances, which indicate that states demonstrating the EPR paradox are present. We find clear signals of non-Gaussianity in the quantum states of the modes from both the approximate and exact techniques, with quantitative differences in their predictions. Our proposed experimental configuration is extrapolated from current experimental techniques and adds another apparatus to the current toolbox of quantum atom optics.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
NASA Astrophysics Data System (ADS)
Bourdet, Alice; Frouin, Robert J.
2014-11-01
The classic atmospheric correction algorithm, routinely applied to second-generation ocean-color sensors such as SeaWiFS, MODIS, and MERIS, consists of (i) estimating the aerosol reflectance in the red and near infrared (NIR), where the ocean is considered black (i.e., totally absorbing), and (ii) extrapolating the estimated aerosol reflectance to shorter wavelengths. The marine reflectance is then retrieved by subtraction. Variants and improvements have been made over the years to deal with non-null reflectance in the red and near infrared, a general situation in estuaries and the coastal zone, but the solutions proposed so far still suffer some limitations, due to uncertainties in marine reflectance modeling in the near infrared or the difficulty of extrapolating the aerosol signal to the blue when using observations in the shortwave infrared (SWIR), a spectral range far from the ocean-color wavelengths. To estimate the marine signal (i.e., the product of marine reflectance and atmospheric transmittance) in the near infrared, the proposed approach is to decompose the aerosol reflectance in the near infrared to shortwave infrared into principal components (PCs). Since aerosol scattering is smooth spectrally, a few components are generally sufficient to represent the perturbing signal, i.e., the aerosol reflectance in the near infrared can be determined from measurements in the shortwave infrared where the ocean is black. This gives access to the marine signal in the near infrared, which can then be used in the classic atmospheric correction algorithm. The methodology is evaluated theoretically from simulations of the top-of-atmosphere reflectance for a wide range of geophysical conditions and angular geometries and applied to actual MODIS imagery acquired over the Gulf of Mexico. The number of discarded pixels is reduced by over 80% using the PC modeling to determine the marine signal in the near infrared prior to applying the classic atmospheric correction algorithm.
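A rough sketch of the principal-component idea, assuming a training set of simulated aerosol spectra; the function, names, and band split are illustrative, not the authors' implementation:

```python
import numpy as np

def reconstruct_aerosol(train, swir_idx, obs_swir, n_pc=3):
    """Fit the leading principal components of NIR-to-SWIR aerosol spectra
    using only the SWIR bands (ocean assumed black there), then reconstruct
    the full spectrum, including the NIR bands needed for the correction.

    train: (n_samples, n_bands) simulated aerosol reflectance spectra
    swir_idx: indices of the SWIR bands; obs_swir: observed SWIR reflectance
    """
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    pcs = vt[:n_pc]                                  # (n_pc, n_bands)
    amp, *_ = np.linalg.lstsq(pcs[:, swir_idx].T,
                              obs_swir - mean[swir_idx], rcond=None)
    return mean + amp @ pcs                          # full-band reconstruction
```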
Lee, Ho; Fahimian, Benjamin P; Xing, Lei
2017-03-21
This paper proposes a binary moving-blocker (BMB)-based technique for scatter correction in cone-beam computed tomography (CBCT). In concept, a beam blocker consisting of lead strips, mounted in front of the x-ray tube, moves rapidly in and out of the beam during a single gantry rotation. The projections are acquired in alternating phases of blocked and unblocked cone beams, where the blocked phase results in a stripe pattern in the width direction. To derive the scatter map from the blocked projections, 1D B-Spline interpolation/extrapolation is applied by using the detected information in the shaded regions. The scatter map of the unblocked projections is corrected by averaging two scatter maps that correspond to their adjacent blocked projections. The scatter-corrected projections are obtained by subtracting the corresponding scatter maps from the projection data and are utilized to generate the CBCT image by a compressed-sensing (CS)-based iterative reconstruction algorithm. Catphan504 and pelvis phantoms were used to evaluate the method's performance. The proposed BMB-based technique provided an effective method to enhance the image quality by suppressing scatter-induced artifacts, such as ring artifacts around the bowtie area. Compared to CBCT without a blocker, the spatial nonuniformity was reduced from 9.1% to 3.1%. The root-mean-square error of the CT numbers in the regions of interest (ROIs) was reduced from 30.2 HU to 3.8 HU. In addition to high resolution, comparable to that of the benchmark image, the CS-based reconstruction also led to a better contrast-to-noise ratio in seven ROIs. The proposed technique enables complete scatter-corrected CBCT imaging with width-truncated projections and allows reducing the acquisition time to approximately half. This work may have significant implications for image-guided or adaptive radiation therapy, where CBCT is often used.
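A minimal sketch of the 1D B-spline interpolation/extrapolation step, assuming scatter values already sampled in the shaded stripes (names and numbers are illustrative):

```python
import numpy as np
from scipy.interpolate import splev, splrep

def scatter_profile(strip_rows, scatter_samples, n_rows):
    """Cubic B-spline through scatter values sampled under the lead strips,
    evaluated over all detector rows; ext=0 extrapolates beyond the data."""
    tck = splrep(strip_rows, scatter_samples, k=3)
    return splev(np.arange(n_rows), tck, ext=0)

rows = np.array([10.0, 60.0, 110.0, 160.0, 210.0])  # strip centers (made up)
samples = np.array([40.0, 55.0, 70.0, 52.0, 38.0])  # sampled scatter signal
print(scatter_profile(rows, samples, 256))
```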
Comets as natural laboratories: Interpretations of the structure of the inner heliosphere
NASA Astrophysics Data System (ADS)
Ramanjooloo, Yudish; Jones, Geraint H.; Coates, Andrew J.; Owens, Mathew J.
2015-11-01
Much has been learnt about the heliosphere’s structure from in situ solar wind spacecraft observations. Their coverage is however limited in time and space. Comets can be considered to be natural laboratories of the inner heliosphere, as their ion tails trace the solar wind flow. Solar wind conditions influence comets’ induced magnetotails, formed through the draping of the heliospheric magnetic field by the velocity shear in the mass-loaded solar wind. I present a novel imaging technique and software to exploit the vast catalogues of amateur and professional images of comet ion tails. My projection technique uses the comet’s orbital plane to sample its ion tail as a proxy for determining multi-latitudinal radial solar wind velocities in each comet’s vicinity. Making full use of many observing stations from astrophotography hobbyists to professional observatories and spacecraft, this approach is applied to several comets observed in recent years. This work thus assesses the validity of analysing comets’ ion tails as complementary sources of information on dynamical heliospheric phenomena and the underlying continuous solar wind. Complementary velocities, measured from folding ion rays and a velocity profile map built from consecutive images, are derived as an alternative means of quantifying the solar wind-cometary ionosphere interaction, including turbulent transient phenomena such as coronal mass ejections. I review the validity of these techniques by comparing near-Earth comets to solar wind MHD models (ENLIL) in the inner heliosphere and extrapolated measurements by ACE to the orbit of comet C/2004 Q2 (Machholz), a near-Earth comet. My radial velocities are mapped back to the solar wind source surface to identify sources of the quiescent solar wind and heliospheric current sheet crossings. Comets were found to be good indicators of solar wind structure, but the quality of results is strongly dependent on the observing geometry.
2018-04-01
...Compounds, CMMP, DPMP, DMEP, and DEEP: Extrapolation of High-Temperature Data (ECBC-TR-1507). Brozena, Ann; Abercrombie-Thomas, Patrice; Tevault, David E.; Research and Technology Directorate. Sponsor: DTRA. Approved for public release.
2014-01-01
Background: The indocyanine green dilution method is one of the methods available to estimate plasma volume, although some researchers have questioned the accuracy of this method. Methods: We developed a new, physiologically based mathematical model of indocyanine green kinetics that more accurately represents indocyanine green kinetics during the first few minutes postinjection than what is assumed when using the traditional mono-exponential back-extrapolation method. The mathematical model is used to develop an optimal back-extrapolation method for estimating plasma volume based on simulated indocyanine green kinetics obtained from the physiological model. Results: Results from a clinical study using the indocyanine green dilution method in 36 subjects with type 2 diabetes indicate that the estimated plasma volumes are considerably lower when using the traditional back-extrapolation method than when using the proposed back-extrapolation method (mean (standard deviation) plasma volume = 26.8 (5.4) mL/kg for the traditional method vs 35.1 (7.0) mL/kg for the proposed method). The results obtained using the proposed method are more consistent with previously reported plasma volume values. Conclusions: Based on the more physiological representation of indocyanine green kinetics and greater consistency with previously reported plasma volume values, the new back-extrapolation method is proposed for use when estimating plasma volume using the indocyanine green dilution method. PMID:25052018
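The traditional mono-exponential back-extrapolation that the paper improves upon can be sketched as follows (hypothetical concentrations; the proposed model-based variant is more involved):

```python
import numpy as np

def plasma_volume_backextrap(t_min, conc_mg_per_l, dose_mg):
    """Traditional method: fit ln C = ln C0 - k*t over the early decay,
    back-extrapolate to t = 0, and take PV = dose / C0."""
    slope, ln_c0 = np.polyfit(t_min, np.log(conc_mg_per_l), 1)
    c0 = np.exp(ln_c0)
    return dose_mg / c0  # litres

t = np.array([2.0, 3.0, 4.0, 5.0])           # minutes post-injection
c = np.array([8.1, 6.6, 5.4, 4.4])           # ICG concentration, mg/L
print(plasma_volume_backextrap(t, c, 25.0))  # hypothetical 25 mg dose
```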
Bonsa, Anne-Marie; Paschek, Dietmar; Zaitsau, Dzmitry H; Emel'yanenko, Vladimir N; Verevkin, Sergey P; Ludwig, Ralf
2017-05-19
Key properties for the use of ionic liquids as electrolytes in batteries are low viscosities, low vapor pressure and high vaporization enthalpies. Whereas the measurement of transport properties is well established, the determination of vaporization enthalpies of these extremely low volatile compounds is still a challenge. At first glance both properties seem to describe different thermophysical phenomena. However, eighty years ago Eyring suggested a theory which related viscosities and vaporization enthalpies to each other. The model is based on Eyring's theory of absolute reaction rates. Recent attempts to apply Eyring's theory to ionic liquids failed. The motivation of our study is to show that Eyring's theory works if the assumptions specific for ionic liquids are fulfilled. For that purpose we measured the viscosities of three well-selected protic ionic liquids (PILs) at different temperatures. The temperature dependences of the viscosities were approximated by the Vogel-Fulcher-Tammann (VFT) relation and extrapolated to the high-temperature regime up to 600 K. Then the VFT data could be fitted to the Eyring model. The values of the vaporization enthalpies for the three selected PILs predicted by the Eyring model are very close to the experimental values measured by well-established techniques. We conclude that the Eyring theory can be successfully applied to the chosen set of PILs if the assumption that the ion pairs of the viscous flow in the liquid and the ion pairs in the gas phase are similar is fulfilled. It was also noticed that proper transfer of energies can only be derived if the viscosities and the vaporization energies are known for temperatures close to the liquid-gas transition temperature. The idea to correlate easily measurable viscosities of ionic liquids with their vaporization enthalpies opens a new way for a reliable assessment of these thermodynamic properties for a broad range of ionic liquids. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
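A minimal sketch of the VFT fit and its high-temperature extrapolation, with hypothetical viscosity data:

```python
import numpy as np
from scipy.optimize import curve_fit

def vft(t_k, eta0, b, t0):
    """Vogel-Fulcher-Tammann law: eta(T) = eta0 * exp(B / (T - T0))."""
    return eta0 * np.exp(b / (t_k - t0))

t_data = np.array([283.0, 298.0, 313.0, 333.0, 363.0])    # K
eta_data = np.array([0.850, 0.310, 0.140, 0.060, 0.022])  # Pa*s (hypothetical)

popt, _ = curve_fit(vft, t_data, eta_data, p0=(1e-4, 800.0, 150.0))
print(vft(600.0, *popt))  # extrapolate the fitted curve toward 600 K
```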
NASA Astrophysics Data System (ADS)
Havasi, Ágnes; Kazemi, Ehsan
2018-04-01
In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
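For reference, a generic Richardson extrapolation step for a scheme of order p, combining solutions at step sizes h and h/2 (a sketch, not the authors' optimized methods):

```python
import numpy as np

def richardson(u_h, u_h2, p):
    """Richardson extrapolation: combine a p-th order solution u_h (step h)
    with u_h2 (step h/2) to cancel the leading error term, gaining one order:
    u_R = (2**p * u_h2 - u_h) / (2**p - 1)."""
    return (2.0**p * np.asarray(u_h2) - np.asarray(u_h)) / (2.0**p - 1.0)

# Toy check: trapezoid rule (p = 2) for the integral of x**2 on [0, 1]
coarse, fine = 0.375, 0.34375       # 2 and 4 subintervals
print(richardson(coarse, fine, 2))  # -> 1/3, exact for this quadratic
```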
Magnetic field extrapolation with MHD relaxation using AWSoM
NASA Astrophysics Data System (ADS)
Shi, T.; Manchester, W.; Landi, E.
2017-12-01
Coronal mass ejections are known to be the major source of disturbances in the solar wind capable of affecting geomagnetic environments. In order to predict such space weather events accurately, a data-driven simulation is needed. The first step towards such a simulation is to extrapolate the magnetic field from the observed field, which is available only at the solar surface. Here we present results from a new code for magnetic field extrapolation with direct magnetohydrodynamics (MHD) relaxation using the Alfvén Wave Solar Model (AWSoM) in the Space Weather Modeling Framework. The obtained field is self-consistent with our model and can be used later in time-dependent simulations without modifications of the equations. We use the Low and Lou analytical solution to test our results, and they reach good agreement. We also extrapolate the magnetic field from observed data. We then specify the active region corona field with this extrapolation result in the AWSoM model and self-consistently calculate the temperature of the active region loops with Alfvén wave dissipation. Multi-wavelength images are also synthesized.
NASA Technical Reports Server (NTRS)
Darden, C. M.
1984-01-01
A method for analyzing shock coalescence which includes three-dimensional effects was developed. The method is based on an extension of the axisymmetric solution, with asymmetric effects introduced through an additional set of governing equations, derived by taking the second circumferential derivative of the standard shock equations in the plane of symmetry. The coalescence method is consistent with and has been combined with a nonlinear sonic boom extrapolation program which is based on the method of characteristics. The extrapolation program is able to extrapolate pressure signatures which include embedded shocks from an initial data line in the plane of symmetry at approximately one body length from the axis of the aircraft to the ground. The axisymmetric shock coalescence solution, the asymmetric shock coalescence solution, the method of incorporating these solutions into the extrapolation program, and the methods used to determine the spatial derivatives needed in the coalescence solution are described. Results of the method are shown for a body of revolution at a small, positive angle of attack.
Integration of culture and biology in human development.
Mistry, Jayanthi
2013-01-01
The challenge of integrating biology and culture is addressed in this chapter by emphasizing human development as involving mutually constitutive, embodied, and epigenetic processes. Heuristically rich constructs extrapolated from cultural psychology and developmental science, such as embodiment, action, and activity, are presented as promising approaches to the integration of culture and biology in human development. These theoretical notions are applied to frame the nascent field of cultural neuroscience as representing this integration of culture and biology. Current empirical research in cultural neuroscience is then synthesized to illustrate emerging trends in this body of literature that examine the integration of biology and culture.
NASA Astrophysics Data System (ADS)
Güleçyüz, M. Ç.; Şenyiğit, M.; Ersoy, A.
2018-01-01
The Milne problem is studied in one-speed neutron transport theory using the linearly anisotropic scattering kernel which combines forward and backward scattering (extremely anisotropic scattering) for a non-absorbing medium with specular and diffuse reflection boundary conditions. In order to calculate the extrapolated endpoint for the Milne problem, the Legendre polynomial approximation (PN method) is applied and numerical results are tabulated for selected cases as a function of different degrees of anisotropic scattering. Finally, some results are discussed and compared with the existing results in the literature.
Glauber exchange amplitudes. [electron scattering from H atoms
NASA Technical Reports Server (NTRS)
Madan, R. N.
1975-01-01
The extrapolation method of Ochkur, valid for intermediate energies (about 50 eV), is applied to the exchange form of the Glauber amplitudes. In the case of elastic scattering of electrons from hydrogen atoms at 54.4 eV the 'post' and 'prior' forms of the exchange amplitude are equivalent, whereas for the case of inelastic scattering there is a minute discrepancy between the two forms of the amplitude. The results are compared with the close-coupling calculation. The investigation is expected to be useful for optically forbidden, exchange-allowed transitions due to electron impact at intermediate energies.
Expert elicitation of population-level effects of disturbance
Fleishman, Erica; Burgman, Mark; Runge, Michael C.; Schick, Robert S; Krauss, Scott; Popper, Arthur N.; Hawkins, Anthony
2016-01-01
Expert elicitation is a rigorous method for synthesizing expert knowledge to inform decision making and is reliable and practical when field data are limited. We evaluated the feasibility of applying expert elicitation to estimate population-level effects of disturbance on marine mammals. Diverse experts estimated parameters related to mortality and sublethal injury of North Atlantic right whales (Eubalaena glacialis). We are now eliciting expert knowledge on the movement of right whales among geographic regions to parameterize a spatial model of health. Expert elicitation complements methods such as simulation models or extrapolations from other species, sometimes with greater accuracy and less uncertainty.
Analysis of neutral beam driven impurity flow reversal in PLT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malik, M.A.; Stacey, W.M. Jr.; Thomas, C.E.
1986-10-01
The Stacey-Sigmar impurity transport theory for tokamak plasmas is applied to the analysis of experimental data from the PLT tokamak with a tungsten limiter. The drag term, which is a central piece in the theory, is evaluated from the recently developed gyroviscous theory for radial momentum transfer. An effort is made to base the modeling of the experiment on measured quantities. Where measured data is not available, recourse is made to extrapolation or numerical modeling. The theoretical and the experimental tungsten fluxes are shown to agree very closely within the uncertainties of the experimental data.
Nilsson, David; Wellington-Boyd, Anna
2006-01-01
This article presents an overview of outcomes from the Mount Sinai Leadership Enhancement Program as identified by previous program participants from Melbourne, Australia. These are categorised into: (1) Personal/professional, (2) Intra-organisational, (3) Interorganisational, and (4) International outcomes. Two illustrative examples are provided of international outcomes demonstrating how the ongoing commitment of Professor Epstein has extended and embedded the principles of practice-based research in Melbourne, and how the over-riding principles of the program have been applied by participants in establishing collaborative relationships with colleagues in our neighbouring South-East Asian region.
Extrapolation methods for vector sequences
NASA Technical Reports Server (NTRS)
Smith, David A.; Ford, William F.; Sidi, Avram
1987-01-01
This paper derives, describes, and compares five extrapolation methods for accelerating convergence of vector sequences or transforming divergent vector sequences to convergent ones. These methods are the scalar epsilon algorithm (SEA), vector epsilon algorithm (VEA), topological epsilon algorithm (TEA), minimal polynomial extrapolation (MPE), and reduced rank extrapolation (RRE). MPE and RRE are first derived and proven to give the exact solution for the right 'essential degree' k. Then, Brezinski's (1975) generalization of the Shanks-Schmidt transform is presented; the generalized form leads from systems of equations to TEA. The necessary connections are then made with SEA and VEA. The algorithms are extended to the nonlinear case by cycling, the error analysis for MPE and VEA is sketched, and the theoretical support for quadratic convergence is discussed. Strategies for practical implementation of the methods are considered.
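A compact sketch of minimal polynomial extrapolation (MPE) in the least-squares form outlined above, checked on a toy linear fixed-point iteration where the extrapolation is exact:

```python
import numpy as np

def mpe(xs):
    """Minimal polynomial extrapolation (MPE). xs holds the iterates
    x_0 ... x_{k+1} as columns. The least-squares solve finds the minimal-
    polynomial coefficients (with c_k fixed to 1); normalizing them to
    weights gamma summing to one gives s = sum_j gamma_j * x_j."""
    u = np.diff(xs, axis=1)                        # u_j = x_{j+1} - x_j
    c, *_ = np.linalg.lstsq(u[:, :-1], -u[:, -1], rcond=None)
    c = np.append(c, 1.0)
    gamma = c / c.sum()
    return xs[:, :-1] @ gamma

# Toy iteration x_{j+1} = M x_j + b; the fixed point solves (I - M) s = b
m = np.array([[0.5, 0.2], [0.1, 0.4]])
b = np.array([1.0, 2.0])
xs = [np.zeros(2)]
for _ in range(3):
    xs.append(m @ xs[-1] + b)
print(mpe(np.column_stack(xs)))           # matches the exact fixed point
print(np.linalg.solve(np.eye(2) - m, b))
```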
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = A·Ω̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Usage of cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques provide an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
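For α = 2 the graphical technique reduces to a line fit; a minimal sketch with synthetic rates:

```python
import numpy as np

def ffm_onset_time(t, rate):
    """Graphical FFM for alpha = 2: the inverse rate falls linearly with
    time, and the forecast eruption onset is where the fitted line reaches
    the time axis (inverse rate -> 0)."""
    slope, intercept = np.polyfit(t, 1.0 / np.asarray(rate), 1)
    return -intercept / slope

t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])  # days (synthetic)
rate = 1.0 / (10.0 - t)                  # accelerating precursor rate
print(ffm_onset_time(t, rate))           # -> 10.0, the "failure" time
```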
On the recovery of missing low and high frequency information from bandlimited reflectivity data
NASA Astrophysics Data System (ADS)
Sacchi, M. D.; Ulrych, T. J.
2007-12-01
During the last two decades, an important effort in the seismic exploration community has been made to retrieve broad-band seismic data by means of deconvolution and inversion. In general, the problem can be stated as a spectral reconstruction problem. In other words, given limited spectral information about the earth's reflectivity sequence, one attempts to create a broadband estimate of the Fourier spectra of the unknown reflectivity. Techniques based on the principle of parsimony can be effectively used to retrieve a sparse spike sequence and, consequently, a broad-band signal. Alternatively, continuation methods, e.g., autoregressive modeling, can be used to extrapolate the recorded bandwidth of the seismic signal. The goal of this paper is to examine under what conditions the recovery of low and high frequencies from band-limited and noisy signals is possible. At the heart of the methods we discuss is the celebrated non-Gaussian assumption, so important in many modern signal processing methods such as ICA. Spectral recovery from limited information tends to work when the reflectivity consists of a few well-isolated events. Results degrade with the number of reflectors, decreasing SNR and decreasing bandwidth of the source wavelet. Constraints and information-based priors can be used to stabilize the recovery but, as in all inverse problems, the solution is nonunique and effort is required to understand the level of recovery that is achievable, always keeping the physics of the problem in mind. We provide in this paper a survey of methods to recover broad-band reflectivity sequences and examine the role that these techniques can play in processing and inversion as applied to exploration and global seismology.
Li, Y; Chappell, A; Nyamdavaa, B; Yu, H; Davaasuren, D; Zoljargal, K
2015-03-01
The (137)Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many (137)Cs measurements are required and hence that estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of (137)Cs-derived net soil redistribution in applying their often limited resources across scales of variation (field, catchment, region, etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance, and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954-2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches in cutting across scales of variation, we extrapolate our estimate of net soil redistribution across the region and show that, for the same resources, estimates from many fields could have been provided, which would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate the sampling design carefully to consider the opportunity to investigate (137)Cs-derived net soil redistribution across scales of variation. Copyright © 2014 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Susino, Roberto; Bemporad, Alessandro; Dolei, Sergio, E-mail: susino@oato.inaf.it, E-mail: sdo@oact.inaf.it
2014-07-20
A three-dimensional (3D) reconstruction of the 2007 May 20 partial-halo coronal mass ejection (CME) has been made using STEREO/EUVI and STEREO/COR1 coronagraphic images. The trajectory and kinematics of the erupting filament have been derived from Extreme Ultraviolet Imager (EUVI) image pairs with the 'tie-pointing' triangulation technique, while the polarization ratio technique has been applied to COR1 data to determine the average position and depth of the CME front along the line of sight. This 3D geometrical information has been combined for the first time with spectroscopic measurements of the O VI λλ1031.91, 1037.61 line profiles made with the Ultraviolet Coronagraph Spectrometer (UVCS) on board the Solar and Heliospheric Observatory. Comparison between the prominence trajectory extrapolated at the altitude of UVCS observations and the core transit time measured from UVCS data made possible a firm identification of the CME core observed in white light and UV with the prominence plasma expelled during the CME. Results on the 3D structure of the CME front have been used to calculate synthetic spectral profiles of the O VI λ1031.91 line expected along the UVCS slit, in an attempt to reproduce the measured line widths. Observed line widths can be reproduced within the uncertainties only in the peripheral part of the CME front; at the front center, where the distance of the emitting plasma from the plane of the sky is greater, synthetic widths turn out to be ∼25% lower than the measured ones. This provides strong evidence of line broadening due to plasma heating mechanisms in addition to bulk expansion of the emitting volume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koch, D.; Fertitta, E.; Paulus, B.
Due to the importance of both static and dynamical correlation in the bond formation, low-dimensional beryllium systems constitute interesting case studies to test correlation methods. Aiming to describe the whole dissociation curve of extended Be systems we chose to apply the method of increments (MoI) in its multireference (MR) formalism. To gain insight into the main characteristics of the wave function, we started by focusing on the description of small Be chains using standard quantum chemical methods. In a next step we applied the MoI to larger beryllium systems, starting from the Be6 ring. The complete active space formalism was employed and the results were used as reference for local MR calculations of the whole dissociation curve. Although this is a well-established approach for systems with limited multireference character, its application regarding the description of whole dissociation curves requires further testing. Subsequent to the discussion of the role of the basis set, the method was finally applied to larger rings and extrapolated to an infinite chain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Lianjie
Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wave number domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wave number domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate that significant improvements are provided in both image quality and resolution.
Large-cell Monte Carlo renormalization of irreversible growth processes
NASA Technical Reports Server (NTRS)
Nakanishi, H.; Family, F.
1985-01-01
Monte Carlo sampling is applied to a recently formulated direct-cell renormalization method for irreversible, disorderly growth processes. Large-cell Monte Carlo renormalization is carried out for various nonequilibrium problems based on the formulation dealing with relative probabilities. Specifically, the method is demonstrated by application to the 'true' self-avoiding walk and the Eden model of growing animals for d = 2, 3, and 4 and to the invasion percolation problem for d = 2 and 3. The results are asymptotically in agreement with expectations; however, unexpected complications arise, suggesting the possibility of crossovers and, in any case, demonstrating the danger of using small cells alone, because of the very slow convergence as the cell size b is extrapolated to infinity. The difficulty of applying the present method to the diffusion-limited-aggregation model is commented on.
Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent
2015-08-01
Analyses of the Life Span Study (LSS) of Japanese atomic bomb survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry in solid cancer and leukaemia mortality risk estimates. For instance, it is shown that using the SIMEX method, the ERR/Gy is increased by about 29% for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR (per 10^4 person-years at 1 Gy; the linear term) is decreased by about 8%, while the corrected quadratic term (per 10^4 person-years/Gy^2) is increased by about 65% for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were only partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and make the development of radiation protection standards more reliable.
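A bare-bones SIMEX sketch for additive measurement error (illustrative only; `fit` stands for any naive estimator, here an OLS slope):

```python
import numpy as np

def simex(x, y, sigma_u, fit, lams=(0.5, 1.0, 1.5, 2.0), n_sim=200, seed=0):
    """SIMEX for additive measurement error. Simulation step: re-estimate
    after adding pseudo-errors of variance lam * sigma_u**2; extrapolation
    step: fit a quadratic in lam and evaluate at lam = -1 (error-free)."""
    rng = np.random.default_rng(seed)
    grid, est = [0.0], [fit(x, y)]
    for lam in lams:
        redone = [fit(x + rng.normal(0.0, np.sqrt(lam) * sigma_u, x.shape), y)
                  for _ in range(n_sim)]
        grid.append(lam)
        est.append(np.mean(redone))
    return np.polyval(np.polyfit(grid, est, 2), -1.0)

# Toy check: a slope attenuated by errors in x, largely recovered by SIMEX
rng = np.random.default_rng(1)
x_true = rng.normal(0.0, 1.0, 2000)
y = 2.0 * x_true + rng.normal(0.0, 0.5, 2000)
x_obs = x_true + rng.normal(0.0, 0.5, 2000)
slope = lambda a, b: np.polyfit(a, b, 1)[0]
print(slope(x_obs, y), simex(x_obs, y, 0.5, slope))
```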
NASA Astrophysics Data System (ADS)
Bellon, Aldo; Zawadzki, Isztar; Kilambi, Alamelu; Lee, Hee Choon; Lee, Yong Hee; Lee, Gyuwon
2010-08-01
A Variational Echo Tracking (VET) technique has been applied to four months of archived data from the South Korean radar network in order to examine the influence of the various user-selectable parameters on the skill of the resulting 20-min to 4-h nowcasts. The latter are computed over a (512 × 512) array at 2-km resolution. After correcting the original algorithm to take into account the motion of precipitation across the boundaries of such a small radar network, we concluded that the set of default input parameters initially assumed is very close to the optimum combination. Decreasing to (5 × 5) or increasing to (50 × 50) the default vector density of (25 × 25), using two or three maps for velocity determination, varying the relative weights for the constraints of conservation of reflectivity and of the smoothing of the velocity vectors, and finally the application of temporal smoothing all had only marginal effects on the skill of the forecasts. The relatively small sensitivity to significant variations of the VET default parameters is a direct consequence of the fact that the major source of the loss in forecast skill cannot be attributed to errors in the forecast motion, but to the unpredictable nature of storm growth and decay. Changing the time interval between maps from 20 to 10 minutes and significantly increasing the reflectivity threshold from 15 to 30 dBZ had a more noticeable reduction on the forecast skill. Comparisons with the Eulerian "zero velocity" forecast and with a "single" vector forecast have also been performed in order to determine the accrued skill of the VET algorithm. Because of the extensive stratiform nature of the precipitation areas affecting the Korean peninsula, the increased skill is not as large as may have been anticipated. This can be explained by the greater extent of the precipitation systems relative to the size of the radar coverage domain.
Extrapolation of operators acting into quasi-Banach spaces
NASA Astrophysics Data System (ADS)
Lykov, K. V.
2016-01-01
Linear and sublinear operators acting from the scale of L_p spaces to a certain fixed quasinormed space are considered. It is shown how the extrapolation construction proposed by Jawerth and Milman at the end of 1980s can be used to extend a bounded action of an operator from the L_p scale to wider spaces. Theorems are proved which generalize Yano's extrapolation theorem to the case of a quasinormed target space. More precise results are obtained under additional conditions on the quasinorm. Bibliography: 35 titles.
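For orientation, the classical result being generalized reads as follows (standard formulation from the extrapolation literature, quoted from memory):

```latex
% Standard statement of Yano's extrapolation theorem, for reference.
\textbf{Yano's theorem.} Let $(\Omega,\mu)$ be a finite measure space and let
$T$ be a sublinear operator such that, for some $\alpha > 0$, some $p_0 > 1$,
and all $1 < p \le p_0$,
\[
  \|Tf\|_{L_p} \le \frac{C}{(p-1)^{\alpha}} \, \|f\|_{L_p}.
\]
Then $T$ extends to a bounded operator
\[
  T \colon L (\log L)^{\alpha}(\Omega) \longrightarrow L_1(\Omega).
\]
```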
Analysis of significant factors for dengue fever incidence prediction.
Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak
2016-04-16
Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.
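A toy sketch of a multivariate Poisson regression of the kind favored by the study, using statsmodels on synthetic data (all variable names and numbers are ours):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 48                                # hypothetical: four years of monthly counts
mosq_rate = rng.uniform(0.0, 0.3, n)  # previous-season female infection rate
season = np.tile(np.repeat(np.arange(4), 3), 4)
X = sm.add_constant(np.column_stack(
    [mosq_rate, season == 1, season == 2, season == 3]).astype(float))
y = rng.poisson(np.exp(X @ np.array([2.0, 3.0, 0.4, 0.8, 0.2])))

model = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(model.params)
print(model.aic)  # candidate models would be ranked by AIC/BIC/MAPE
```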
Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan
2005-08-01
This paper addresses an important issue for the clinical relevance of computer-assisted surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform relating the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computed tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).
Delle Monache, Sergio; Lacquaniti, Francesco; Bosco, Gianfranco
2017-09-01
The ability to catch objects when transiently occluded from view suggests their motion can be extrapolated. Intraparietal cortex (IPS) plays a major role in this process along with other brain structures, depending on the task. For example, interception of objects under Earth's gravity may depend on time-to-contact predictions derived from integration of visual signals processed by hMT/V5+ with a priori knowledge of gravity residing in the temporoparietal junction (TPJ). To investigate this issue further, we disrupted TPJ, hMT/V5+, and IPS activities with transcranial magnetic stimulation (TMS) while subjects intercepted computer-simulated projectile trajectories perturbed randomly with either hypo- or hypergravity effects. In experiment 1, trajectories were occluded either 750 or 1,250 ms before landing. Three subject groups underwent triple-pulse TMS (tpTMS, 3 pulses at 10 Hz) on one target area (TPJ | hMT/V5+ | IPS) and on the vertex (control site), timed at either trajectory perturbation or occlusion. In experiment 2, trajectories were entirely visible and participants received tpTMS on TPJ and hMT/V5+ with the same timing as in experiment 1. tpTMS of TPJ, hMT/V5+, and IPS affected interceptive timing differently. TPJ stimulation preferentially affected responses to 1-g motion, hMT/V5+ stimulation affected all response types, and IPS stimulation induced opposite effects on 0-g and 2-g responses while being ineffective on 1-g responses. Only IPS stimulation was effective when applied after target disappearance, implying this area might elaborate memory representations of occluded target motion. Results are compatible with the idea that IPS, TPJ, and hMT/V5+ contribute to distinct aspects of visual motion extrapolation, perhaps through parallel processing. NEW & NOTEWORTHY Visual extrapolation represents a potential neural solution to afford motor interactions with the environment in the face of missing information. We investigated the relative contributions of the temporoparietal junction (TPJ), hMT/V5+, and intraparietal cortex (IPS), cortical areas potentially involved in these processes. A parallel organization of visual extrapolation processes emerged with respect to the causal nature of the target's motion: TPJ was primarily involved for visual motion congruent with gravity effects, IPS for arbitrary visual motion, whereas hMT/V5+ contributed at earlier processing stages. Copyright © 2017 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Köchl, F.; Loarte, A.; de la Luna, E.; Parail, V.; Corrigan, G.; Harting, D.; Nunes, I.; Reux, C.; Rimini, F. G.; Polevoi, A.; Romanelli, M.; Contributors, JET
2018-07-01
Tokamak operation with W PFCs is associated with specific challenges for impurity control, which may be particularly demanding in the transition from stationary H-mode to L-mode. To address W control issues in this phase, dedicated experiments have been performed at JET, including variation of the rate of decrease of the power and current, of the gas fuelling and of the central ion cyclotron heating (ICRH), and applying active ELM control by vertical kicks. The experimental results obtained demonstrate the key role of maintaining ELM control for controlling the W concentration in the exit phase of H-modes with slow (ITER-like) ramp-down of the neutral beam injection power in JET. For these experiments, integrated fully predictive core+edge+SOL transport modelling studies applying discrete models for the description of transients such as sawteeth and ELMs have been performed for the first time with the JINTRAC suite of codes for the entire transition from stationary H-mode until the time when the plasma would return to L-mode, focusing on the W transport behaviour. Simulations have shown that the existing models can appropriately reproduce the plasma profile evolution in the core, edge and SOL, as well as the W accumulation trends in the termination phase of JET H-mode discharges as a function of the applied ICRH and ELM control schemes, substantiating the ambivalent effect of ELMs on W sputtering on the one hand and on edge transport affecting core W accumulation on the other. The sensitivity with respect to NB particle and momentum sources has also been analysed, and their impact on neoclassical W transport has been found to be crucial for reproducing the observed W accumulation characteristics in JET discharges. In this paper the results of the JET experiments, the comparison with JINTRAC modelling and the adequacy of the models to reproduce the experimental results are described, and conclusions are drawn regarding the applicability of these models for the extrapolation of the applied W accumulation control techniques to ITER.
An analysis of the nucleon spectrum from lattice partially-quenched QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. Armour; Allton, C. R.; Leinweber, Derek B.
2010-09-01
The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.
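For context, the finite-range-regularised expansion used in analyses of this kind is typically of the schematic form below; this is a sketch from the general finite-range-regularisation literature, not the paper's exact expression:

$$ M_N(m_\pi^2) = a_0 + a_2\, m_\pi^2 + a_4\, m_\pi^4 + \Sigma_{\pi N}(m_\pi, \Lambda) + \Sigma_{\pi\Delta}(m_\pi, \Lambda), $$

where the Σ terms are the leading one-loop self-energies evaluated with a finite-range regulator of scale Λ. A purely polynomial extrapolation amounts to dropping these non-analytic loop contributions, which is one way to see why the two approaches can disagree.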
The Airborne Measurements of Methane Fluxes (AIRMETH) Arctic Campaign (Invited)
NASA Astrophysics Data System (ADS)
Serafimovich, A.; Metzger, S.; Hartmann, J.; Kohnert, K.; Sachs, T.
2013-12-01
One of the most pressing questions with regard to climate feedback processes in a warming Arctic is the regional-scale methane release from Arctic permafrost areas. The Airborne Measurements of Methane Fluxes (AIRMETH) campaign is designed to quantitatively and spatially explicitly address this question. Ground-based eddy covariance (EC) measurements provide continuous in-situ observations of the surface-atmosphere exchange of methane. However, these observations are rare in the Arctic permafrost zone, and site selection is bound by logistical constraints, among others. Consequently, these observations cover only small areas that are not necessarily representative of the region of interest. Airborne measurements can overcome this limitation by covering distances of hundreds of kilometers over time periods of a few hours. Here, we present the potential of environmental response functions (ERFs) for quantitatively linking methane flux observations in the atmospheric surface layer to meteorological and biophysical drivers in the flux footprints. For this purpose, thousands of kilometers of AIRMETH data across the Alaskan North Slope are utilized, with the aim of extrapolating the airborne EC methane flux observations to the entire North Slope. The data were collected aboard the research aircraft POLAR 5, using its turbulence nose boom and fast response methane and meteorological sensors. After thorough data pre-processing, Reynolds averaging is used to derive spatially integrated fluxes. To increase spatial resolution and to derive ERFs, we then use wavelet transforms of the original high-frequency data. This enables much improved spatial discretization of the flux observations, and the quantification of continuous and biophysically relevant land cover properties in the flux footprint of each observation. A machine learning technique is then employed to extract and quantify the functional relationships between the methane flux observations and the meteorological and biophysical drivers in the flux footprints. Lastly, the resulting ERFs are used to extrapolate the methane release over spatio-temporally explicit grids of the Alaskan North Slope. Metzger et al. (2013) have demonstrated the efficacy of this technique for regionalizing airborne EC heat flux observations to within an accuracy of ≤18% and a precision of ≤5%. Here, we show for the first time results from applying the ERF procedure to airborne methane EC measurements, and report its potential for spatio-temporally explicit inventories of the regional-scale methane exchange. References: Metzger, S., Junkermann, W., Mauder, M., Butterbach-Bahl, K., Trancón y Widemann, B., Neidl, F., Schäfer, K., Wieneke, S., Zheng, X. H., Schmid, H. P., and Foken, T.: Spatially explicit regionalization of airborne flux measurements using environmental response functions, Biogeosciences, 10, 2193-2217, doi:10.5194/bg-10-2193-2013, 2013.
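The regression step can be pictured with a generic machine-learning sketch along the following lines; scikit-learn's RandomForestRegressor stands in for whatever learner the campaign actually used, and the driver variables and all numbers are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for wavelet-resolved flux observations and the
# meteorological/biophysical drivers evaluated in each flux footprint
# (columns are illustrative: temperature, moisture index, wetland fraction).
drivers = rng.uniform(size=(500, 3))
flux = 2.0 * drivers[:, 0] + 1.5 * drivers[:, 2] + 0.1 * rng.normal(size=500)

# Fit the environmental response function: drivers -> methane flux
erf = RandomForestRegressor(n_estimators=200, random_state=0)
erf.fit(drivers, flux)

# Extrapolate to a regional grid where only the drivers are mapped
grid_drivers = rng.uniform(size=(10_000, 3))
regional_flux = erf.predict(grid_drivers)
print(f"regional mean flux estimate: {regional_flux.mean():.3f}")
```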
NASA Astrophysics Data System (ADS)
Milke, R.; Dohmen, R.; Wiedenbeck, M.; Wirth, R.; Abart, R.; Becker, H.-W.
2003-04-01
Grain boundary diffusion studies by the rim growth method in the system MgO(±FeO)-SiO_2 have evolved from measuring rim growth rates to the tracing of chemical components by using isotopically enriched starting materials and SIMS analyses (Milke et al. 2001). We miniaturized this setup for grain boundary diffusion experiments by using pulsed-laser deposited (PLD) thin films (Dohmen et al. 2002). The starting samples consist of polycrystalline layers of pyroxene (en90fs10) and isotopically doped (18O, 29Si) olivine (fo90fa10) with a total thickness <= 1 μm on a polished quartz surface. A first series of experiments was performed at temperatures between 1000 and 1200 °C at an fO_2 of 10^-10 bar. The resulting layer thickness and chemical composition were measured by Rutherford Back-Scattering (RBS) and TEM using Focused Ion Beam (FIB) preparation methods. O and Si isotope profiles were measured by SIMS depth scanning. The enstatite layers thicken during the annealing experiments with well-defined interfaces, at rates for Δx^2 of 700 to 50000 nm^2/h at the chosen conditions. The isotope profiles show that Si acts as a slow diffusing component. From the enstatite growth rates a Dgb_A·δ can be calculated, where A is the rate-determining component. This gives a Dgb_A·δ in the range of 10^-26 (at 1000 °C) to 10^-24 (at 1200 °C) m^3 s^-1, which is well in accordance with an extrapolation from the data of Fisler et al. (1997) at 1350 to 1450 °C. This indicates that over the entire interval from 1000 to 1450 °C the reaction is controlled by diffusion of the same component and, more importantly, that mechanisms on the nano scale are the same as on the microscopic scale. The new method has several advantages over previously used techniques. The well-defined layers on the nano scale allow one to study rim growth at lower temperatures than before, and therefore avoid large extrapolations to natural conditions. The very small amount of isotopically enriched material needed for one sample also makes the method economically viable. The samples can be designed with variable chemical compositions, e.g. distinct members of the fo-fa and en-fs series. The versatility of the PLD technique allows one to apply this method to other chemical systems as well. Ref.: Dohmen et al. (2002) Eur J Miner 14: 1155-1168; Milke et al. (2001), Contrib Miner Petrol 142: 15-26; Fisler et al. (1997) Phys Chem Minerals 24: 264-273.
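The quoted agreement with the higher-temperature data of Fisler et al. amounts to an Arrhenius extrapolation. A minimal sketch, using only the order-of-magnitude endpoints given above, so the fitted activation energy is purely illustrative:

```python
import numpy as np

# Reported bracket for Dgb_A*delta (m^3/s) at 1000 and 1200 degC; these are
# only the order-of-magnitude endpoints quoted in the abstract.
T = np.array([1000.0, 1200.0]) + 273.15      # K
D = np.array([1e-26, 1e-24])                 # m^3/s

# Two-point Arrhenius fit: ln D = ln D0 - Ea/(R*T)
R = 8.314                                    # J/(mol K)
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * R
print(f"apparent activation energy: {Ea/1e3:.0f} kJ/mol")

# Extrapolate to the Fisler et al. temperature range for comparison
for t_c in (1350.0, 1450.0):
    d = np.exp(intercept + slope / (t_c + 273.15))
    print(f"{t_c:.0f} degC -> Dgb*delta ~ {d:.2e} m^3/s")
```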
Jürimäe, Jaak; Haljaste, Kaja; Cicchella, Antonio; Lätt, Evelin; Purge, Priit; Leppik, Aire; Jürimäe, Toivo
2007-02-01
The purpose of this study was to examine the influence of the energy cost of swimming, body composition, and technical parameters on swimming performance in young swimmers. Twenty-nine swimmers, 15 prepubertal (11.9 +/- 0.3 years; Tanner Stages 1-2) and 14 pubertal (14.3 +/- 1.4 years; Tanner Stages 3-4) boys, participated in the study. The energy cost of swimming (Cs) and stroking parameters were assessed over maximal 400-m front-crawl swimming in a 25-m swimming pool. The backward extrapolation technique was used to evaluate peak oxygen consumption (VO2peak). A stroke index (SI; m^2·s^-1·cycles^-1) was calculated by multiplying the swimming speed by the stroke length. VO2peak results were compared with a laboratory VO2peak test (bicycle, 2.86 +/- 0.74 L/min, vs. in water, 2.53 +/- 0.50 L/min; R2 = .713; p = .0001). Stepwise-regression analyses revealed that SI (R2 = .898), in-water VO2peak (R2 = .358), and arm span (R2 = .454) were the best predictors of swimming performance. The backward-extrapolation method could be used to assess VO2peak in young swimmers. SI, arm span, and VO2peak appear to be the major determinants of front-crawl swimming performance in young swimmers.
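A common way to implement the backward-extrapolation technique is to fit the early post-exercise VO2 recovery and evaluate the fitted curve at the instant exercise ends. The sketch below assumes a mono-exponential recovery and uses made-up sample values, not study data:

```python
import numpy as np

# Post-exercise VO2 recovery samples (L/min) at 20-s intervals, starting
# 20 s after the swim ends -- illustrative numbers only.
t = np.array([20.0, 40.0, 60.0])          # s after end of exercise
vo2 = np.array([2.31, 2.11, 1.93])        # L/min

# Backward extrapolation: fit log(VO2) linearly in time, then evaluate
# the fitted mono-exponential recovery curve at t = 0.
slope, intercept = np.polyfit(t, np.log(vo2), 1)
vo2_peak = np.exp(intercept)
print(f"estimated VO2peak: {vo2_peak:.2f} L/min")
```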
Morrison, James P; Sharma, Alok K; Rao, Deepa; Pardo, Ingrid D; Garman, Robert H; Kaufmann, Wolfgang; Bolon, Brad
2015-01-01
A half-day Society of Toxicologic Pathology continuing education course on "Fundamentals of Translational Neuroscience in Toxicologic Pathology" presented some current major issues faced when extrapolating animal data regarding potential neurological consequences to assess potential human outcomes. Two talks reviewed functional-structural correlates in rodent and nonrodent mammalian brains needed to predict behavioral consequences of morphologic changes in discrete neural cell populations. The third lecture described practical steps for ensuring that specimens from rodent developmental neurotoxicity tests will be processed correctly to produce highly homologous sections. The fourth talk detailed demographic factors (e.g., species, strain, sex, and age); physiological traits (body composition, brain circulation, pharmacokinetic/pharmacodynamic patterns, etc.); and husbandry influences (e.g., group housing) known to alter the effects of neuroactive agents. The last presentation discussed the appearance, unknown functional effects, and potential relevance to humans of polyethylene glycol (PEG)-associated vacuoles within the choroid plexus epithelium of animals. Speakers provided real-world examples of challenges with data extrapolation among species or with study design considerations that may impact the interpretability of results. Translational neuroscience will be bolstered in the future as less invasive and/or more quantitative techniques are devised for linking overt functional deficits to subtle anatomic and chemical lesions. © 2014 by The Author(s).
Incorporating Human Interindividual Biotransformation ...
The protection of sensitive individuals within a population dictates that measures other than central tendencies be employed to estimate risk. The refinement of human health risk assessments for chemicals metabolized by the liver to reflect data on human variability can be accomplished through (1) the characterization of enzyme expression in large banks of human liver samples, (2) the employment of appropriate techniques for the quantification and extrapolation of metabolic rates derived in vitro, and (3) the judicious application of physiologically based pharmacokinetic (PBPK) modeling. While in vitro measurements of specific biochemical reactions from multiple human samples can yield qualitatively valuable data on human variance, such measures must be put into the perspective of the intact human to yield the most valuable predictions of metabolic differences among humans. For quantitative metabolism data to be the most valuable in risk assessment, they must be tied to human anatomy and physiology, and the impact of their variance evaluated under real exposure scenarios. For chemicals metabolized in the liver, the concentration of parent chemical in the liver represents the substrate concentration in the Michaelis-Menten description of metabolism. Metabolic constants derived in vitro may be extrapolated to the intact liver, when appropriate conditions are met. Metabolic capacity (Vmax, the maximal rate of the reaction) can be scaled directly to the concentration
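To make the scaling step concrete, here is a hedged sketch of how an in vitro Vmax might be scaled to the intact liver and used in the Michaelis-Menten rate expression; every parameter value (protein yield, liver mass, Km) is a hypothetical placeholder, not a recommended default:

```python
# Illustrative in vitro -> in vivo scaling of hepatic metabolism; all
# parameter values below are hypothetical placeholders, not measured data.
vmax_invitro = 2.0               # nmol/min/mg microsomal protein
km = 5.0                         # uM, assumed unchanged on scale-up
mg_protein_per_g_liver = 45.0    # microsomal protein yield (assumed)
liver_g = 1800.0                 # liver mass for a 70-kg adult (assumed)

# Scale metabolic capacity to the whole liver
vmax_invivo = vmax_invitro * mg_protein_per_g_liver * liver_g  # nmol/min

def hepatic_rate(c_liver_um):
    """Michaelis-Menten rate at the liver concentration of parent chemical."""
    return vmax_invivo * c_liver_um / (km + c_liver_um)

for c in (0.5, 5.0, 50.0):
    print(f"C = {c:5.1f} uM -> rate = {hepatic_rate(c)/1e3:8.1f} umol/min")
```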
Matrix elements of ΔB = 0 operators in heavy hadron chiral perturbation theory
NASA Astrophysics Data System (ADS)
Lee, Jong-Wan
2015-05-01
We study the light-quark mass and spatial volume dependence of the matrix elements of ΔB = 0 four-quark operators relevant for the determination of Vub and the lifetime ratios of single-b hadrons. To this end, one-loop diagrams are computed in the framework of heavy hadron chiral perturbation theory with the partially quenched formalism for three light-quark flavors in the isospin limit; flavor-connected and -disconnected diagrams are carefully analyzed. These calculations include the leading light-quark flavor and heavy-quark spin symmetry breaking effects in the heavy hadron spectrum. Our results can be used in the chiral extrapolation of lattice calculations of the matrix elements to the physical light-quark masses and to infinite volume. To provide insight on such chiral extrapolation, we evaluate the one-loop contributions to the matrix elements containing external Bd, Bs mesons and the Λb baryon in the QCD limit, where sea and valence quark masses become equal. In particular, we find that the matrix elements of the λ3 flavor-octet operators with an external Bd meson receive contributions solely from connected diagrams, for which current lattice techniques are capable of precisely determining the matrix elements. Finite volume effects are at most a few percent for typical lattice sizes and pion masses.
Decomposition Technique for Remaining Useful Life Prediction
NASA Technical Reports Server (NTRS)
Saha, Bhaskar (Inventor); Goebel, Kai F. (Inventor); Saxena, Abhinav (Inventor); Celaya, Jose R. (Inventor)
2014-01-01
The prognostic tool disclosed here decomposes the problem of estimating the remaining useful life (RUL) of a component or sub-system into two separate regression problems: the feature-to-damage mapping and the operational conditions-to-damage-rate mapping. These maps are initially generated in off-line mode. One or more regression algorithms are used to generate each of these maps from measurements (and features derived from these), operational conditions, and ground truth information. This decomposition technique allows for the explicit quantification and management of different sources of uncertainty present in the process. Next, the maps are used in an on-line mode where run-time data (sensor measurements and operational conditions) are used in conjunction with the maps generated in off-line mode to estimate both the current damage state and future damage accumulation. Remaining life is computed by subtracting the instant when the prediction is made from the instant when the extrapolated damage reaches the failure threshold.
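A toy version of this decomposition, with linear regressors standing in for whatever algorithms the tool actually uses and all data synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# --- Off-line: learn the two maps from run-to-failure data (synthetic) ---
features = rng.uniform(size=(300, 2))                 # sensor-derived features
damage = 0.8 * features[:, 0] + 0.2 * features[:, 1]  # ground-truth damage
conditions = rng.uniform(size=(300, 1))               # e.g. load level
damage_rate = 0.01 + 0.04 * conditions[:, 0]          # damage per cycle

feat_to_damage = LinearRegression().fit(features, damage)
cond_to_rate = LinearRegression().fit(conditions, damage_rate)

# --- On-line: estimate current damage, extrapolate to the threshold ---
threshold = 1.0
current_damage = feat_to_damage.predict([[0.6, 0.4]])[0]
rate = cond_to_rate.predict([[0.5]])[0]   # expected future operating condition

cycles, d = 0, current_damage
while d < threshold:                      # step-wise damage accumulation
    d += rate
    cycles += 1
# RUL = failure instant minus the instant the prediction is made
print(f"current damage {current_damage:.2f}, predicted RUL: {cycles} cycles")
```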
I-V characterization of a quantum well infrared photodetector with stepped and graded barriers
NASA Astrophysics Data System (ADS)
Nutku, F.; Erol, A.; Gunes, M.; Buklu, L. B.; Ergun, Y.; Arikan, M. C.
2012-09-01
I-V characterization of an n-type quantum well infrared photodetector consisting of stepped and graded barriers has been performed in the dark at temperatures between 20 and 300 K. Different current transport mechanisms, and a transition between them at around 47 K, have been observed. Activation energies of the electrons at various bias voltages have been obtained from the temperature-dependent I-V measurements. The activation energy at zero bias has been calculated by extrapolating the bias dependence of the activation energies. Ground-state energies and barrier heights of the four different quantum wells have been calculated using an iterative technique that depends on the experimentally obtained activation energy. Ground-state energies have also been calculated with the transfer matrix technique and compared with the iteration results. By incorporating the effect on ground-state energies of the electron exchange interaction induced by the high electron density, results more consistent with the theoretical transfer matrix calculations have been obtained.
Use of reciprocal lattice layer spacing in electron backscatter diffraction pattern analysis
Michael; Eades
2000-03-01
In the scanning electron microscope using electron backscattered diffraction, it is possible to measure the spacing of the layers in the reciprocal lattice. These values are of great use in confirming the identification of phases. The technique derives the layer spacing from the higher-order Laue zone rings which appear in patterns from many materials. The method adapts results from convergent-beam electron diffraction in the transmission electron microscope. For many materials the measured layer spacing compares well with the calculated layer spacing. A noted exception is for higher atomic number materials. In these cases an extrapolation procedure is described that requires layer spacing measurements at a range of accelerating voltages. This procedure is shown to improve the accuracy of the technique significantly. The application of layer spacing measurements in EBSD is shown to be of use for the analysis of two polytypes of SiC.
Tidal estimation in the Atlantic and Indian Oceans, 3 deg x 3 deg solution
NASA Technical Reports Server (NTRS)
Sanchez, Braulio V.; Rao, Desiraju B.; Steenrod, Stephen D.
1987-01-01
An estimation technique was developed to extrapolate tidal amplitudes and phases over entire ocean basins using existing gauge data and the altimetric measurements provided by satellite oceanography. The technique was tested in previous work; some results obtained by using a 3 deg by 3 deg grid are presented here. The functions used in the interpolation are the eigenfunctions of the velocity (Proudman functions), which are computed numerically from a knowledge of the basin's bottom topography, the horizontal planform and the necessary boundary conditions. These functions are characteristic of the particular basin. The gravitational normal modes of the basin are computed as part of the investigation; they are used to obtain the theoretical forced solutions for the tidal constituents. The latter can provide simulated data for testing the method and serve as a guide in choosing the most energetic functions for the interpolation.
An evaluation of EREP (Skylab) and ERTS imagery for integrated natural resources survey
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. An experimental procedure has been devised and is being tested for natural resource surveys to cope with the problems of interpreting and processing the large quantities of data provided by Skylab and ERTS. Some basic aspects of orbital imagery such as scale, the role of repetitive coverage, and types of sensors are being examined in relation to integrated surveys of natural resources and regional development planning. Extrapolation away from known ground conditions, a fundamental technique for mapping resources, becomes very effective when used on orbital imagery supported by field mapping. Meaningful boundary delimitations can be made on orbital images using various image enhancement techniques. To meet the needs of many developing countries, this investigation into the use of satellite imagery for integrated resource surveys involves the analysis of the images by means of standard visual photointerpretation methods.
Determining Data Quality for the NOvA Experiment
NASA Astrophysics Data System (ADS)
Murphy, Ryan; NOvA Collaboration Collaboration
2016-03-01
NOvA is a long-baseline neutrino oscillation experiment with two liquid-scintillator-filled tracking calorimeter detectors separated by 809 km. The detectors are located 14.6 milliradians off-axis of Fermilab's NuMI beam. The NOvA experiment is designed to measure the rate of electron-neutrino appearance out of the almost-pure muon-neutrino NuMI beam, with the data measured at the Near Detector being used to accurately determine the expected rate at the Far Detector. It is therefore very important to have automated and accurate monitoring of the data recorded by the detectors, so that any hardware, DAQ or beam issues arising in the 0.3 million (20k) channels of the far (near) detector which could affect this extrapolation technique are identified and the affected data removed from the physics analysis data set. This poster will cover the techniques and efficiency of selecting good data, describing the selections placed on different data and hardware levels.
Spatial and Temporal scales of time-averaged 700 mb height anomalies
NASA Technical Reports Server (NTRS)
Gutzler, D.
1981-01-01
The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time-averaged geopotential height anomalies. The complete forecasted height pattern is subsequently drawn around the forecasted anomaly centers. To test the efficacy of this technique, time series of observed monthly mean and 5-day mean 700 mb geopotential heights were examined. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in the completion of the anomaly pattern around the forecasted centers.
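Comparing sample autocorrelations against a red-noise null is a standard check. A minimal sketch; the AR(1) series below is synthetic, standing in for observed height anomalies:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a (detrended) height-anomaly series."""
    x = x - x.mean()
    var = np.sum(x * x)
    return np.array([np.sum(x[:-k or None] * x[k:]) / var
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(2)

# Synthetic stand-in for 5-day-mean 700 mb height anomalies
n, phi = 500, 0.6
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + rng.normal()

r = autocorrelation(x, 6)
red_noise = r[1] ** np.arange(7)    # AR(1) null: r(k) = r(1)**k
for k in range(7):
    print(f"lag {k}: observed {r[k]:+.2f}  red-noise {red_noise[k]:+.2f}")
```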
Radiation Modeling with Direct Simulation Monte Carlo
NASA Technical Reports Server (NTRS)
Carlson, Ann B.; Hassan, H. A.
1991-01-01
Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and 0.1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.
Extreme Sea Conditions in Shallow Water: Estimation based on in-situ measurements
NASA Astrophysics Data System (ADS)
Le Crom, Izan; Saulnier, Jean-Baptiste
2013-04-01
The design of marine renewable energy devices and components is based, among other things, on the assessment of extreme environmental conditions (winds, currents, waves, and water level), which must be combined in order to evaluate the maximal loads on a floating or fixed structure and on its anchoring system over a given return period. Measuring devices are generally deployed at sea over relatively short durations (a few months to a few years), typically when describing the water free-surface elevation, so extrapolation methods based on hindcast data (and therefore on wave simulation models) have to be used. How to combine the action of the different loads (winds and waves, for instance) in a realistic way, and which correlation of return periods should be used, are highly topical issues. The assessment of the extreme condition itself, however, remains a not-fully-solved, crucial, and sensitive task. In shallow water above all, the extreme wave height, Hmax, is the most significant contribution in the dimensioning of marine renewable energy devices. As a case study, existing methodologies for deep water have been applied to SEMREV, the French marine energy test site. The interest of this study, especially at this location, goes beyond the simple application to the wave energy converters and floating wind turbines deployed at SEMREV: it could also be extended to the Banc de Guérande offshore wind farm planned close by and, more generally, to pipes and communication cables, for which the same problem recurs. The paper will first present the existing measurements (wave and wind on site) and the prediction chain that has been developed via wave models, then the extrapolation methods applied to hindcast data, and will attempt to formulate recommendations for improving this assessment in shallow water.
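One standard extrapolation route from a short record to a long return period is a peaks-over-threshold fit. The sketch below uses scipy's generalized Pareto distribution with entirely synthetic wave data, so the numbers mean nothing for SEMREV itself:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)

# Synthetic wave-height record (m): a stand-in for on-site measurements
hs = rng.gumbel(loc=2.0, scale=0.6, size=3 * 365)   # ~3 years, daily maxima
years = 3.0

# Peaks-over-threshold: fit a generalized Pareto to threshold exceedances
u = np.quantile(hs, 0.95)
exceed = hs[hs > u] - u
xi, _, sigma = genpareto.fit(exceed, floc=0.0)
rate = len(exceed) / years                    # exceedances per year

def return_level(T):
    """T-year return level from the fitted GPD (xi != 0 branch)."""
    return u + sigma / xi * ((rate * T) ** xi - 1.0)

for T in (10, 50, 100):
    print(f"{T:4d}-yr Hmax estimate: {return_level(T):.2f} m")
```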
Application of a framework for extrapolating chemical effects ...
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetics, life-stage, and pathway similarities/differences. Here we propose a framework using a tiered approach for species extrapolation that enables a transparent weight-of-evidence driven evaluation of pathway conservation (or lack thereof) in the context of adverse outcome pathways. Adverse outcome pathways describe the linkages from a molecular initiating event, defined as the chemical-biomolecule interaction, through subsequent key events leading to an adverse outcome of regulatory concern (e.g., mortality, reproductive dysfunction). Tier 1 of the extrapolation framework employs in silico evaluations of sequence and structural conservation of molecules (e.g., receptors, enzymes) associated with molecular initiating events or upstream key events. Such evaluations make use of available empirical and sequence data to assess taxonomic relevance. Tier 2 uses in vitro bioassays, such as enzyme inhibition/activation, competitive receptor binding, and transcriptional activation assays to explore functional conservation of pathways across taxa. Finally, Tier 3 provides a comparative analysis of in vivo responses between species utilizing well-established model organisms to assess departure from
Properties of infrared extrapolations in a harmonic oscillator basis
Coon, Sidney A.; Kruse, Michael K. G.
2016-02-22
Here, the success and utility of effective field theory (EFT) in explaining the structure and reactions of few-nucleon systems has prompted the initiation of EFT-inspired extrapolations to larger model spaces in ab initio methods such as the no-core shell model (NCSM). In this contribution, we review and continue our studies of infrared (ir) and ultraviolet (uv) regulators of NCSM calculations in which the input is phenomenological NN and NNN interactions fitted to data. We extend our previous findings that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators, such as r^2, considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a possible extrapolation of ground state energies in the uv cutoff when the ir cutoff is below the intrinsic ir scale is not robust and does not agree with the ir extrapolation of the same data or with independent calculations using other methods.
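The ir extrapolation in question is commonly parameterized as E(L) = E_inf + A exp(-2 k_inf L) in an effective box size L. A hedged curve-fit sketch: the functional form is from the ir-extrapolation literature, while the energies below are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_model(L, E_inf, A, k_inf):
    """Infrared extrapolation form E(L) = E_inf + A*exp(-2*k_inf*L)."""
    return E_inf + A * np.exp(-2.0 * k_inf * L)

# Illustrative ground-state energies (MeV) versus effective box size L (fm);
# synthetic data generated from the model plus a tiny perturbation.
L = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
E = -28.3 + 6.0 * np.exp(-2.0 * 0.45 * L) + 1e-4 * np.array([1, -1, 1, -1, 1])

popt, _ = curve_fit(ir_model, L, E, p0=(-28.0, 5.0, 0.5))
print(f"E_infinity = {popt[0]:.3f} MeV, k_infinity = {popt[2]:.3f} fm^-1")
```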
Fractal Dimensionality of Pore and Grain Volume of a Siliciclastic Marine Sand
NASA Astrophysics Data System (ADS)
Reed, A. H.; Pandey, R. B.; Lavoie, D. L.
Three-dimensional (3D) spatial distributions of pore and grain volumes were determined from high-resolution computed tomography (CT) images of resin-impregnated marine sands. Using a linear gradient extrapolation method, cubic three-dimensional samples were constructed from two-dimensional CT images. Image porosity (0.37) was found to be consistent with the estimate of porosity by the water weight loss technique (0.36). Scaling of the pore volume (Vp) with the linear size (L), Vp ~ L^D, provides the fractal dimensionalities of the pore volume (D = 2.74 +/- 0.02) and grain volume (D = 2.90 +/- 0.02), typical for sedimentary materials.
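The fractal dimension of a segmented volume is typically estimated by box counting. A compact sketch; the random cube below is a stand-in for the real segmented CT pore volume, so it should return D close to 3 rather than the paper's 2.74:

```python
import numpy as np

def fractal_dimension(volume):
    """Box-counting estimate of D for a 3-D binary array (pore voxels = 1)."""
    n = volume.shape[0]                 # assume cubic, side a power of two
    sizes, counts = [], []
    s = n
    while s >= 2:
        # count boxes of side s containing at least one pore voxel
        view = volume.reshape(n // s, s, n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3, 5)).sum())
        sizes.append(s)
        s //= 2
    # N(s) ~ s**(-D)  =>  slope of log N against log s gives -D
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Synthetic porous cube as a stand-in for the segmented CT pore volume
rng = np.random.default_rng(4)
pores = rng.random((64, 64, 64)) < 0.37     # image porosity ~0.37
print(f"box-counting dimension: {fractal_dimension(pores):.2f}")
```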
NASA Technical Reports Server (NTRS)
Thompson, J. F.; Mcwhorter, J. C.; Siddiqi, S. A.; Shanks, S. P.
1973-01-01
Numerical methods of integration of the equations of motion of a controlled satellite under the influence of gravity-gradient torque are considered. The results of computer experimentation using a number of Runge-Kutta, multi-step, and extrapolation methods for the numerical integration of this differential system are presented, and particularly efficient methods are noted. A large bibliography of numerical methods for initial value problems for ordinary differential equations is presented, and a compilation of Runge-Kutta and multistep formulas is given. Less common numerical integration techniques from the literature are noted for further consideration.
Standardization and determination of the total internal conversion coefficient of In-111.
Matos, Izabela T; Koskinas, Marina F; Nascimento, Tatiane S; Yamazaki, Ione M; Dias, Mauro S
2014-05-01
The standardization of (111)In by means of a 4πβ-γ coincidence system, composed of a proportional counter in 4π geometry, coupled to a 20% relative efficiency HPGe crystal, for measuring gamma-rays is presented. The data acquisition was performed by means of the software coincidence system (SCS) and the activity was determined by the extrapolation technique. Two gamma-ray windows were selected: at 171 keV and 245 keV total absorption peaks, allowing the determination of the total internal conversion coefficient for these two gamma transitions. The results were compared with those available in the literature. © 2013 Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Medlyn, D.A.; Bilbey, S.A.
1993-04-01
The Upper Jurassic Morrison Formation has yielded one of the richest floras of the so-called "transitional conifers" of the Middle Mesozoic. Recently, a silicified axis of one of these conifers was collected from the Salt Wash member in essentially the same horizon as a previously reported partial Stegosaurus skeleton. In addition, two other axes of conifers were collected in the same immediate vicinity. Paleoecological considerations are extrapolated from the coniferous flora, vertebrate fauna and associated lithologies. Techniques of paleodendrology and relationships of extant/extinct environments are compared. The paleoclimatic conditions of the transitional conifers and associated dinosaurian fossils are postulated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujiki, K.; Tokumaru, M.; Hayashi, K.
We developed an automated prediction technique for coronal holes using potential magnetic field extrapolation in the solar corona to construct a database of coronal holes appearing from 1975 February to 2015 July (Carrington rotations from 1625 to 2165). Coronal holes are labeled with the location, size, and average magnetic field of each coronal hole on the photosphere and source surface. As a result, we identified 3335 coronal holes and found that the long-term distribution of coronal holes shows a pattern similar to the well-known magnetic butterfly diagram, and that polar/low-latitude coronal holes tend to decrease/increase in the last solar minimum relative to the previous two minima.
Hartley, Matt; Roberts, Helen
2015-09-01
Disease control management relies on the development of policy supported by an evidence base. The evidence base for disease in zoo animals is often absent or incomplete. Resources for disease research in these species are limited, and so in order to develop effective policies, novel approaches to extrapolating knowledge and dealing with uncertainty need to be developed. This article demonstrates how qualitative risk analysis techniques can be used to aid decision-making in circumstances in which there is a lack of specific evidence using the import of rabies-susceptible zoo mammals into the United Kingdom as a model.
System for the growth of bulk SiC crystals by modified CVD techniques
NASA Technical Reports Server (NTRS)
Steckl, Andrew J.
1994-01-01
The goal of this program was the development of CVD growth of SiC films thick enough to be useful as pseudo-substrates. The cold-walled CVD system was designed, assembled, and tested. Extrapolation from a preliminary evaluation of SiC films grown in the system at relatively low temperatures indicates that the growth rate at the final temperatures will be high enough to make our approach practical. Modifications of the system to allow high-temperature growth and cleaner growth conditions are in progress. This program was jointly funded by Wright Laboratory, Materials Directorate and NASA LeRC, and monitored by NASA.
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Arnold, James O. (Technical Monitor)
1999-01-01
The atomization energy of Mg4 is determined using the MP2 and CCSD(T) levels of theory. Basis set incompleteness, basis set extrapolation, and core-valence effects are discussed. Our best atomization energy, including the zero-point energy and scalar relativistic effects, is 24.6 +/- 1.6 kcal per mol. Our computed and extrapolated values are compared with previous results, where it is observed that our extrapolated MP2 value is in good agreement with the MP2-R12 value. The CCSD(T) and MP2 core effects are found to have opposite signs.
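Basis-set extrapolation of correlation energies is often done with a two-point inverse-cubic form. A minimal sketch; the cardinal numbers are standard, while the energies are invented placeholders:

```python
# Two-point basis-set extrapolation assuming E_X = E_CBS + A/X**3
# (X is the correlation-consistent cardinal number); energies are placeholders.
def cbs_two_point(e_x, x, e_y, y):
    """Inverse-cubic extrapolation to the complete-basis-set limit."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# e.g. hypothetical atomization energies (kcal/mol) with TZ and QZ bases
print(f"CBS estimate: {cbs_two_point(22.9, 3, 23.8, 4):.2f} kcal/mol")
```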
Carpooling: status and potential
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kendall, D.C.
1975-06-01
Studies were conducted to analyze the status and potential of work-trip carpooling as a means of achieving more efficient use of the automobile. Current and estimated maximum potential levels of carpooling are presented together with analyses revealing characteristics of carpool trips, incentives, impacts of increased carpooling and issues related to carpool matching services. National survey results indicate the average auto occupancy for urban work-trip is 1.2 passengers per auto. This value, and average carpool occupancy of 2.5, have been relatively stable over the last five years. An increase in work-trip occupancy from 1.2 to 1.8 would require a 100% increase in the number of carpoolers. A model was developed to predict the maximum potential level of carpooling in an urban area. Results from applying the model to the Boston region were extrapolated to estimate a maximum nationwide potential between 47 and 71% of peak period auto commuters. Maximum benefits of increased carpooling include up to 10% savings in auto fuel consumption. A technique was developed for estimating the number of participants required in a carpool matching service to achieve a chosen level of matching among respondents, providing insight into tradeoffs between employer and regional or centralized matching services. Issues recommended for future study include incentive policies and their impacts on other modes, and the evaluation of new and ongoing carpool matching services. (11 references) (GRA)
Li, Xiaofan; Nie, Qing
2009-07-01
Many applications in materials science involve surface diffusion of elastically stressed solids. Study of singularity formation and the long-time behavior of such solid surfaces requires accurate simulations in both space and time. Here we present a high-order boundary integral method for an elastically stressed solid with axisymmetry due to surface diffusion. In this method, the boundary integrals for isotropic elasticity in axisymmetric geometry are approximated through modified alternating quadratures along with an extrapolation technique, leading to an arbitrarily high-order quadrature; in addition, a high-order (temporal) integration factor method, based on an explicit representation of the mean curvature, is used to reduce the stability constraint on the time step. To apply this method to a periodic (in the axial direction) and axisymmetric elastically stressed cylinder, we also present a fast and accurate summation method for the periodic Green's functions of isotropic elasticity. Using the high-order boundary integral method, we demonstrate that in the absence of elasticity the cylinder surface pinches in finite time at the axis of symmetry, and the universal cone angle of the pinching is found to be consistent with previous studies based on a self-similar assumption. In the presence of elastic stress, we show that a finite-time geometrical singularity occurs well before the cylindrical solid collapses onto the axis of symmetry, and the angle of the corner singularity on the cylinder surface is also estimated.
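The order-raising idea, a base quadrature sharpened by extrapolation, is the same mechanism as classical Richardson (Romberg) extrapolation, shown here as a generic sketch rather than the paper's modified alternating quadrature:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n panels (O(h^2) accurate)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def romberg(f, a, b, levels=5):
    """Raise the order of the trapezoidal rule by Richardson extrapolation."""
    R = np.zeros((levels, levels))
    for i in range(levels):
        R[i, 0] = trapezoid(f, a, b, 2**i)
        for j in range(1, i + 1):
            # cancel the leading h^(2j) error term of the previous column
            R[i, j] = R[i, j-1] + (R[i, j-1] - R[i-1, j-1]) / (4**j - 1)
    return R[levels-1, levels-1]

exact = 2.0                                   # integral of sin on [0, pi]
approx = romberg(np.sin, 0.0, np.pi)
print(f"Romberg: {approx:.12f}, error {abs(approx - exact):.1e}")
```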
Li, Miao; Gehring, Ronette; Riviere, Jim E; Lin, Zhoumeng
2017-09-01
Penicillin G is a widely used antimicrobial in food-producing animals and one of the most predominant drug residues in animal-derived food products. Due to the reduced sensitivity of bacteria to penicillin, extralabel use of penicillin G is common, which may lead to violative residues in edible tissues and cause adverse reactions in consumers. This study aimed to develop a physiologically based pharmacokinetic (PBPK) model to predict drug residues in edible tissues and estimate extended withdrawal intervals for penicillin G in swine and cattle. A flow-limited PBPK model was developed with data from the Food Animal Residue Avoidance Databank using Berkeley Madonna. The model predicted well the observed penicillin G concentrations in edible tissues (liver, muscle, and kidney) in both swine and cattle, including data not used in model calibration. For extralabel use (5× and 10× the label dose) of penicillin G, a Monte Carlo sampling technique was applied to predict the times needed for tissue concentrations to fall below established tolerances for the 99th percentile of the population. This model provides a useful tool to predict tissue residues of penicillin G in swine and cattle to aid food safety assessment, and also provides a framework for extrapolation to other food animal species. Copyright © 2017 Elsevier Ltd. All rights reserved.
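The Monte Carlo step can be pictured with a much-simplified one-compartment stand-in for the PBPK model; the parameter distributions and the tolerance below are illustrative placeholders, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(5)

# One-compartment stand-in for the tissue (e.g. kidney) kinetics of the
# PBPK model; distributions and tolerance are illustrative only.
n = 10_000
c0 = rng.lognormal(mean=np.log(8.0), sigma=0.3, size=n)          # ug/kg at t=0
half_life = rng.lognormal(mean=np.log(18.0), sigma=0.2, size=n)  # hours
k = np.log(2.0) / half_life
tolerance = 0.05                                                 # ug/kg

# Time for each sampled individual's residue to fall below tolerance
t_below = np.log(c0 / tolerance) / k                             # hours

# Withdrawal interval protecting the 99th percentile, rounded up to days
wdi_days = np.ceil(np.quantile(t_below, 0.99) / 24.0)
print(f"estimated withdrawal interval: {wdi_days:.0f} days")
```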
Non-ideality by sedimentation velocity of halophilic malate dehydrogenase in complex solvents.
Solovyova, A; Schuck, P; Costenaro, L; Ebel, C
2001-01-01
We have investigated the potential of sedimentation velocity analytical ultracentrifugation for the measurement of the second virial coefficients of proteins, with the goal of developing a method that allows efficient screening of different solvent conditions. This may be useful for the study of protein crystallization. Macromolecular concentration distributions were modeled using the Lamm equation with the approximation of linear concentration dependencies of the diffusion constant, D = D(o)(1 + k(D)c), and the reciprocal sedimentation coefficient, s = s(o)/(1 + k(s)c). We have studied model distributions for their information content with respect to the particle and its non-ideal behavior, developed a strategy for their analysis by direct boundary modeling, and applied it to data from sedimentation velocity experiments on halophilic malate dehydrogenase in complex aqueous solvents containing sodium chloride and 2-methyl-2,4-pentanediol, including conditions near phase separation. Using global modeling for three sets of data obtained at three different protein concentrations, very good estimates for k(s) and s(o), and also for D(o) and the buoyant molar mass, were obtained. It was also possible to obtain good estimates for k(D) and the second virial coefficients. Modeling of sedimentation velocity profiles with the non-ideal Lamm equation appears to be a good technique to investigate weak inter-particle interactions in complex solvents and also to extrapolate the ideal behavior of the particle. PMID:11566761
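For reference, the standard Lamm equation in radial geometry, with the concentration-dependent coefficients quoted above inserted, reads (schematically, following the usual textbook form rather than the paper's notation):

$$ \frac{\partial c}{\partial t} = \frac{1}{r}\,\frac{\partial}{\partial r}\!\left[ r\, D^{0}\left(1 + k_D c\right) \frac{\partial c}{\partial r} \;-\; \frac{s^{0}}{1 + k_s c}\,\omega^2 r^2 c \right], $$

where ω is the rotor angular velocity; the non-ideal terms k_D and k_s couple the shape of the sedimenting boundary to the weak inter-particle interactions that the second virial coefficient quantifies.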
Projecting technology change to improve space technology planning and systems management
NASA Astrophysics Data System (ADS)
Walk, Steven Robert
2011-04-01
Projecting technology performance evolution has been improving over the years. Reliable quantitative forecasting methods have been developed that project the growth, diffusion, and performance of technology in time, including projecting technology substitutions, saturation levels, and performance improvements. These forecasts can be applied at the early stages of space technology planning to better predict available future technology performance, assure the successful selection of technology, and improve technology systems management strategy. Often what is published as a technology forecast is simply scenario planning, usually made by extrapolating current trends into the future, with perhaps some subjective insight added. Typically, the accuracy of such predictions falls rapidly with distance in time. Quantitative technology forecasting (QTF), on the other hand, includes the study of historic data to identify one of or a combination of several recognized universal technology diffusion or substitution patterns. In the same manner that quantitative models of physical phenomena provide excellent predictions of system behavior, so do QTF models provide reliable technological performance trajectories. In practice, a quantitative technology forecast is completed to ascertain with confidence when the projected performance of a technology or system of technologies will occur. Such projections provide reliable time-referenced information when considering cost and performance trade-offs in maintaining, replacing, or migrating a technology, component, or system. This paper introduces various quantitative technology forecasting techniques and illustrates their practical application in space technology and technology systems management.
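One of the recognized substitution patterns alluded to above is the Fisher-Pry logistic substitution model. A hedged sketch of fitting and extrapolating it, with invented market shares:

```python
import numpy as np

# Fisher-Pry technology substitution: the fractional share f of the new
# technology follows log(f/(1-f)) = a + b*t. The shares below are invented.
years = np.array([2000, 2002, 2004, 2006, 2008])
share = np.array([0.05, 0.12, 0.27, 0.50, 0.73])

logit = np.log(share / (1.0 - share))
b, a = np.polyfit(years, logit, 1)     # slope b, intercept a

def projected_share(year):
    """Extrapolate the substitution trajectory to a future year."""
    return 1.0 / (1.0 + np.exp(-(a + b * year)))

for y in (2010, 2012):
    print(f"{y}: projected share {projected_share(y):.2f}")
```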