Property appraisal provides control, insurance basis, and value estimate.
ERIC Educational Resources Information Center
Thomson, Jack
A complete property appraisal serves as a basis for control, insurance, and value estimate. A professional appraisal firm should perform this function because (1) it is familiar with proper methods, (2) it can prepare the report with minimum confusion and interruption of the college operation, (3) use of its pricing library reduces time needed and…
Mackie, Iain D; DiLabio, Gino A
2011-10-07
The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)/aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)/aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent absolute deviation of only 1.7%, relative to the (estimated) complete basis set CCSD(T) results. Applying this composite approach to an additional set of eight dimers gave binding energies to within 1% of previously published high-level data. It is also shown that binding within parallel and parallel-crossed conformations of the naphthalene dimer is predicted by the composite approach to be 9% greater than that previously reported in the literature. The ability of some recently developed dispersion-corrected density-functional theory methods to predict the binding energies of the set of ten small dimers was also examined. © 2011 American Institute of Physics
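For illustration, a minimal sketch of a composite scheme of the kind this record describes: counterpoise/non-counterpoise averaging, a two-point CBS extrapolation (the standard 1/X^3 form is assumed here; the abstract does not restate the paper's formula), and the MP2 basis-set increment added to CCSD(T)/aug-cc-pVDZ. Function names and all numbers in the demo call are illustrative, not the authors' code or data.

```python
# Sketch of the composite binding-energy scheme described above (assumptions
# noted in the lead-in); energies in kcal/mol throughout.

def average_cp(e_cp: float, e_noncp: float) -> float:
    """Average the CP-corrected and uncorrected binding energies."""
    return 0.5 * (e_cp + e_noncp)

def cbs_two_point(e_x: float, e_y: float, x: int, y: int) -> float:
    """Two-point 1/X^3 extrapolation, e.g. x=3 (aVTZ) and y=4 (aVQZ)."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

def composite_binding_energy(ccsdt_avdz, mp2_avdz, mp2_avtz, mp2_avqz):
    """Each argument is a (CP, non-CP) pair of binding energies."""
    ccsdt_dz = average_cp(*ccsdt_avdz)
    mp2_cbs = cbs_two_point(average_cp(*mp2_avtz), average_cp(*mp2_avqz), 3, 4)
    # CCSD(T)/aVDZ plus the MP2 basis-set increment from aVDZ to the CBS limit:
    return ccsdt_dz + (mp2_cbs - average_cp(*mp2_avdz))

if __name__ == "__main__":
    # Placeholder inputs for a hypothetical dimer:
    print(composite_binding_energy(ccsdt_avdz=(-2.8, -3.2),
                                   mp2_avdz=(-3.0, -3.4),
                                   mp2_avtz=(-3.3, -3.5),
                                   mp2_avqz=(-3.4, -3.5)))
```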
ERIC Educational Resources Information Center
Lafferty, Mark T.
2010-01-01
The number of project failures and of projects completed over cost and behind schedule has been a significant issue for software project managers. Among the many reasons for failure, inaccuracy in software estimation--the basis for project bidding, budgeting, planning, and probability estimates--has been identified as a root cause of a high…
NASA Astrophysics Data System (ADS)
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-01
We report the variation of the binding energy of the Formic Acid Dimer with the size of the basis set at the Coupled Cluster with iterative Singles, Doubles and perturbatively connected Triple replacements [CCSD(T)] level of theory, estimate the Complete Basis Set (CBS) limit, and examine the validity of the Basis Set Superposition Error (BSSE) correction for this quantity, which was previously challenged by Kalescky, Kraka, and Cremer (KKC) [J. Chem. Phys. 140, 084315 (2014)]. Our results indicate that the BSSE correction, including terms that account for the substantial geometry change of the monomers due to the formation of two strong hydrogen bonds in the dimer, is indeed valid for obtaining accurate estimates for the binding energy of this system, as it exhibits the expected decrease with increasing basis set size. We attribute the discrepancy between our current results and those of KKC to their use of a valence basis set in conjunction with the correlation of all electrons (i.e., including the 1s of C and O). We further show that the use of a core-valence set in conjunction with all-electron correlation converges faster to the CBS limit, as the BSSE correction is less than half that of the valence electron/valence basis set case. The uncorrected and BSSE-corrected binding energies were found to produce the same (within 0.1 kcal/mol) CBS limits. We obtain CCSD(T)/CBS best estimates of De = -16.1 ± 0.1 kcal/mol and D0 = -14.3 ± 0.1 kcal/mol, the latter in excellent agreement with the experimental value of -14.22 ± 0.12 kcal/mol.
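For reference, the counterpoise-corrected binding energy with monomer-deformation terms of the kind this abstract alludes to can be written schematically as follows; the notation (superscript = basis, G = geometry) is ours, not the paper's.

```latex
% Schematic counterpoise correction including monomer deformation
% (relaxation) terms; notation illustrative, not the paper's.
\begin{align}
\Delta E^{\mathrm{CP}} &= E_{AB}^{AB}(\mathbf{G}_{AB})
  - E_{A}^{AB}(\mathbf{G}_{AB}) - E_{B}^{AB}(\mathbf{G}_{AB}) \nonumber\\
 &\quad + \underbrace{\bigl[E_{A}^{A}(\mathbf{G}_{AB}) - E_{A}^{A}(\mathbf{G}_{A})\bigr]
  + \bigl[E_{B}^{B}(\mathbf{G}_{AB}) - E_{B}^{B}(\mathbf{G}_{B})\bigr]}_{\text{monomer deformation}}
\end{align}
```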
Gender differences in self-rated and partner-rated multiple intelligences: a Portuguese replication.
Neto, Félix; Furnham, Adrian
2006-11-01
The authors examined gender differences and the influence of intelligence quotient (IQ) test experience in the self and partner estimation of H. Gardner's (1999) 10 multiple intelligences. Portuguese students (N = 190) completed a brief questionnaire developed on the basis of an instrument used in previous research (A. Furnham, 2001). Three of the 10 self-estimates yielded significant gender differences. Men believed they were more intelligent than were women on mathematical (logical), spatial, and naturalistic intelligence. Those who had previously completed an IQ test gave higher self-estimates on 2 of the 10 estimates. Factor analysis of the 10 and then 8 self-estimated scores did not confirm Gardner's 3-factor classification of multiple intelligences in this sample.
Glossary Defense Acquisition Acronyms and Terms
1991-09-01
…of work to complete a job or part of a project. Actual Cost: a cost sustained in fact, on the basis of costs incurred, as… of a project which shows the activities to be completed and the time to complete them is represented by horizontal lines drawn in proportion to the… recorded for the total estimated obligations for a program or project in the initial year of funding. (For distinction, see Full…
Analysing malaria drug trials on a per-individual or per-clone basis: a comparison of methods.
Jaki, Thomas; Parry, Alice; Winter, Katherine; Hastings, Ian
2013-07-30
There are a variety of methods used to estimate the effectiveness of antimalarial drugs in clinical trials, invariably on a per-person basis. A person, however, may have more than one malaria infection present at the time of treatment. We evaluate currently used methods for analysing malaria trials on a per-individual basis and introduce a novel method to estimate the cure rate on a per-infection (clone) basis. We used simulated and real data to highlight the differences of the various methods. We give special attention to classifying outcomes as cured, recrudescent (infections that never fully cleared) or ambiguous on the basis of genetic markers at three loci. To estimate cure rates on a per-clone basis, we used the genetic information within an individual before treatment to determine the number of clones present. We used the genetic information obtained at the time of treatment failure to classify clones as recrudescence or new infections. On the per-individual level, we find that the most accurate methods of classification label an individual as newly infected if all alleles are different at the beginning and at the time of failure and as a recrudescence if all or some alleles were the same. The most appropriate analysis method is survival analysis or alternatively for complete data/per-protocol analysis a proportion estimate that treats new infections as successes. We show that the analysis of drug effectiveness on a per-clone basis estimates the cure rate accurately and allows more detailed evaluation of the performance of the treatment. Copyright © 2012 John Wiley & Sons, Ltd.
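A hypothetical sketch of the per-individual classification rule and the proportion estimate described above; the data layout and function names are ours, and msp1/msp2/glurp are the three markers conventionally typed in such trials.

```python
# Hypothetical helper implementing the per-individual rule the authors found
# most accurate: "new infection" if all alleles differ between baseline and
# failure at every locus, "recrudescence" if all or some alleles match.

def classify_failure(baseline: dict, failure: dict) -> str:
    """baseline/failure map locus name -> set of alleles observed."""
    any_shared = any(baseline[locus] & failure[locus] for locus in baseline)
    return "recrudescence" if any_shared else "new_infection"

def cure_rate_per_protocol(outcomes: list) -> float:
    """Proportion estimate for complete data, treating new infections as successes."""
    successes = sum(o in ("cured", "new_infection") for o in outcomes)
    return successes / len(outcomes)

print(classify_failure(
    baseline={"msp1": {"a"}, "msp2": {"x"}, "glurp": {"g1"}},
    failure={"msp1": {"b"}, "msp2": {"y"}, "glurp": {"g2"}},
))  # -> new_infection
```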
National Water Quality Benefits
This project will provide the basis for advancing the goal of producing tools in support of quantifying and valuing changes in water quality for EPA regulations. It will also identify specific data and modeling gaps and improve benefits estimation for more complete benefit-cost a…
Control of Distributed Parameter Systems
1990-08-01
…variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a… A unified approximation framework for parameter estimation in general linear PDE models has been completed… unified approximation framework for parameter estimation in general linear PDE models. This framework has provided the theoretical basis for a number of…
DOT National Transportation Integrated Search
1998-02-01
The Virginia Department of Transportation (VDOT) occasionally includes an incentive/ disincentive for early completion (I/D) in its construction contracts. This report presents the results of a project to identify procedures that would (1) enhance th...
Complete mitochondrial genome of the Freshwater Catfish Rita rita (Siluriformes, Bagridae).
Lashari, Punhal; Laghari, Muhammad Younis; Xu, Peng; Zhao, Zixia; Jiang, Li; Narejo, Naeem Tariq; Deng, Yulin; Sun, Xiaowen; Zhang, Yan
2015-01-01
The complete mitochondrial genome of the catfish Rita rita, a species listed as Critically Endangered on the Red List, was isolated by LA PCR (TaKaRa LA Taq, Dalian, China) and sequenced by Sanger's method. The complete mitogenome is 16,449 bp in length and contains 13 typical vertebrate protein-coding genes, 2 rRNA genes, and 22 tRNA genes. The whole-genome base composition was estimated to be 33.40% A, 27.43% C, 14.26% G, and 24.89% T. The complete mitochondrial genome of Rita rita provides a basis for genetic breeding and conservation studies.
17 CFR 210.10-01 - Interim financial statements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... accounting policies and practices, details of accounts which have not changed significantly in amount or... completed fiscal year in such items as: accounting principles and practices; estimates inherent in the... cumulative effect of accounting changes, including such income on a per share basis, net income, net income...
Code of Federal Regulations, 2014 CFR
2014-07-01
... believe that the submitted information is true, accurate, and complete. I am aware that there are... or laboratory tests, literature, or economic analysis described in paragraphs (c) (5), (6) and (7) of....” which fully explains any values taken from literature or estimated on the basis of known information...
Code of Federal Regulations, 2012 CFR
2012-07-01
... believe that the submitted information is true, accurate, and complete. I am aware that there are... or laboratory tests, literature, or economic analysis described in paragraphs (c) (5), (6) and (7) of....” which fully explains any values taken from literature or estimated on the basis of known information...
Code of Federal Regulations, 2013 CFR
2013-07-01
... believe that the submitted information is true, accurate, and complete. I am aware that there are... or laboratory tests, literature, or economic analysis described in paragraphs (c) (5), (6) and (7) of....” which fully explains any values taken from literature or estimated on the basis of known information...
System identification principles in studies of forest dynamics.
Rolfe A. Leary
1970-01-01
Shows how it is possible to obtain governing equation parameter estimates on the basis of observed system states. The approach used represents a constructive alternative to regression techniques for models expressed as differential equations. This approach allows scientists to more completely quantify knowledge of forest development processes, to express theories in...
Cost Effectiveness of Alternative Route Special Education Teacher Preparation
ERIC Educational Resources Information Center
Sindelar, Paul T.; Dewey, James F.; Rosenberg, Michael S.; Corbett, Nancy L.; Denslow, David; Lotfinia, Babik
2012-01-01
In this study, the authors estimated costs of alternative route preparation to provide states a basis for allocating training funds to maximize production. Thirty-one special education alternative route program directors were interviewed and completed cost tables. Two hundred and twenty-four program graduates were also surveyed. The authors…
Code of Federal Regulations, 2011 CFR
2011-01-01
...) The Force Account Proposals (FAPs) are subject to review and approval by RUS. (e) The FAP is approved by RUS on the basis of estimated labor and material costs. The FAP is closed based on the borrower's... by the completed assembly units priced at the unit prices in the approved FAP. (Approved by the...
Code of Federal Regulations, 2013 CFR
2013-01-01
...) The Force Account Proposals (FAPs) are subject to review and approval by RUS. (e) The FAP is approved by RUS on the basis of estimated labor and material costs. The FAP is closed based on the borrower's... by the completed assembly units priced at the unit prices in the approved FAP. (Approved by the...
Code of Federal Regulations, 2010 CFR
2010-01-01
...) The Force Account Proposals (FAPs) are subject to review and approval by RUS. (e) The FAP is approved by RUS on the basis of estimated labor and material costs. The FAP is closed based on the borrower's... by the completed assembly units priced at the unit prices in the approved FAP. (Approved by the...
Code of Federal Regulations, 2012 CFR
2012-01-01
...) The Force Account Proposals (FAPs) are subject to review and approval by RUS. (e) The FAP is approved by RUS on the basis of estimated labor and material costs. The FAP is closed based on the borrower's... by the completed assembly units priced at the unit prices in the approved FAP. (Approved by the...
Code of Federal Regulations, 2014 CFR
2014-01-01
...) The Force Account Proposals (FAPs) are subject to review and approval by RUS. (e) The FAP is approved by RUS on the basis of estimated labor and material costs. The FAP is closed based on the borrower's... by the completed assembly units priced at the unit prices in the approved FAP. (Approved by the...
Energy expenditure estimation during daily military routine with body-fixed sensors.
Wyss, Thomas; Mäder, Urs
2011-05-01
The purpose of this study was to develop and validate an algorithm for estimating energy expenditure during the daily military routine on the basis of data collected using body-fixed sensors. First, 8 volunteers completed isolated physical activities according to an established protocol, and the resulting data were used to develop activity-class-specific multiple linear regressions for physical activity energy expenditure on the basis of hip acceleration, heart rate, and body mass as independent variables. Second, the validity of these linear regressions was tested during the daily military routine using indirect calorimetry (n = 12). Volunteers' mean estimated energy expenditure did not significantly differ from the energy expenditure measured with indirect calorimetry (p = 0.898, 95% confidence interval = -1.97 to 1.75 kJ/min). We conclude that the developed activity-class-specific multiple linear regressions applied to the acceleration and heart rate data allow estimation of energy expenditure in 1-minute intervals during daily military routine, with accuracy equal to indirect calorimetry.
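A minimal sketch of the activity-class-specific multiple linear regression described above; the activity classes and all coefficients are invented placeholders, not the study's fitted values.

```python
# Per-activity-class linear regression for physical activity energy
# expenditure (kJ/min) from hip acceleration, heart rate, and body mass,
# as described in the record above. Coefficients are placeholders.

COEFFS = {
    # class: (intercept, b_accel, b_heart_rate, b_body_mass)
    "sitting": (1.0, 0.002, 0.010, 0.01),
    "walking": (2.0, 0.015, 0.040, 0.03),
    "running": (4.0, 0.020, 0.060, 0.05),
}

def estimate_ee(activity_class: str, accel_counts: float,
                heart_rate: float, body_mass: float) -> float:
    b0, b1, b2, b3 = COEFFS[activity_class]
    return b0 + b1 * accel_counts + b2 * heart_rate + b3 * body_mass

# One-minute epoch classified as walking: 1500 counts/min, 95 bpm, 80 kg.
print(round(estimate_ee("walking", 1500, 95, 80), 1), "kJ/min")
```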
Identification of dynamic systems, theory and formulation
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1985-01-01
The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various areas of estimation theory; estimation in dynamic systems is treated as a direct outgrowth of static system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
Barton, Garry R; Irvine, Lisa; Flather, Marcus; McCann, Gerry P; Curzen, Nick; Gershlick, Anthony H
2017-06-01
To determine the cost-effectiveness of complete revascularization at index admission compared with infarct-related artery (IRA) treatment only, in patients with multivessel disease undergoing primary percutaneous coronary intervention (P-PCI) for ST-segment elevation myocardial infarction. An economic evaluation of a multicenter randomized trial was conducted, comparing complete revascularization at index admission to IRA-only P-PCI in patients with multivessel disease (12-month follow-up). Overall hospital costs (costs for P-PCI procedure(s), hospital length of stay, and any subsequent re-admissions) were estimated. Outcomes were major adverse cardiac events (MACEs, a composite of all-cause death, recurrent myocardial infarction, heart failure, and ischemia-driven revascularization) and quality-adjusted life-years (QALYs) derived from the three-level EuroQol five-dimensional questionnaire. Multiple imputation was undertaken. The mean incremental cost and effect, with associated 95% confidence intervals, the incremental cost-effectiveness ratio, and the cost-effectiveness acceptability curve were estimated. On the basis of 296 patients, the mean incremental overall hospital cost for complete revascularization was estimated to be -£215.96 (-£1390.20 to £958.29), compared with IRA-only, with a per-patient mean reduction in MACEs of 0.170 (0.044 to 0.296) and a QALY gain of 0.011 (-0.019 to 0.041). According to the cost-effectiveness acceptability curve, the probability of complete revascularization being cost-effective was estimated to be 72.0% at a willingness-to-pay threshold value of £20,000 per QALY. Complete revascularization at index admission was estimated to be more effective (in terms of MACEs and QALYs) and cost-effective (overall costs were estimated to be lower and complete revascularization thereby dominated IRA-only). There was, however, some uncertainty associated with this decision. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
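For readers unfamiliar with the terminology: the incremental cost-effectiveness ratio compares the two arms as below, and "dominance" refers to the case ΔC < 0 with ΔE > 0, as reported here (notation ours):

```latex
% Incremental cost-effectiveness ratio for the two trial arms (notation ours):
\begin{equation}
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E}
\;=\; \frac{\bar{C}_{\mathrm{complete}} - \bar{C}_{\mathrm{IRA}}}
           {\bar{Q}_{\mathrm{complete}} - \bar{Q}_{\mathrm{IRA}}}
\end{equation}
```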
Finite-error metrological bounds on multiparameter Hamiltonian estimation
NASA Astrophysics Data System (ADS)
Kura, Naoto; Ueda, Masahito
2018-01-01
Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ. The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.
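Schematically, the lower bound sketched in this abstract combines the quantum Cramér-Rao inequality with a quantum Fisher information that grows at most quadratically in the evolution time; the constants and norms below are our illustrative rendering for a single parameter, not the paper's statement.

```latex
% Quantum Cramer-Rao bound with time-bounded Fisher information (schematic):
\begin{equation}
\delta \;\ge\; \frac{1}{\sqrt{\nu\,F_Q}}, \qquad
F_Q \;\le\; 4\,T^{2}\,\lVert\partial_{\theta}H\rVert^{2}
\quad\Longrightarrow\quad
T \;\gtrsim\; \frac{1}{2\,\delta\,\sqrt{\nu}\,\lVert\partial_{\theta}H\rVert}
\end{equation}
```

Here ν is the number of independent repetitions of the protocol.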
Kendall, W.L.; Nichols, J.D.; Hines, J.E.
1997-01-01
Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
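One simple ad hoc form consistent with the closed/open-model relationship described above is the ratio estimator below; this is our schematic rendering, and the exact estimators are given in the paper.

```latex
% Under completely random temporary emigration, the open-model capture
% probability is deflated by the emigration probability gamma (schematic):
\begin{equation}
\hat{\gamma} \;=\; 1 \;-\; \frac{\hat{p}_{\mathrm{open}}}{\hat{p}_{\mathrm{closed}}}
\end{equation}
```

where the closed-model estimate comes from within-season (secondary) samples of the robust design and the open-model estimate spans primary periods.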
A New Potential Energy Surface for N+O2: Is There an NOO Minimum?
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1995-01-01
We report a new calculation of the N+O2 potential energy surface using complete active space self-consistent field internally contracted configuration interaction with the Dunning correlation consistent basis sets. The peroxy isomer of NO2 is found to be a very shallow minimum separated from NO+O by a barrier of only 0.3 kcal/mol (excluding zero-point effects). The entrance channel barrier height is estimated to be 8.6 kcal/mol for ICCI+Q calculations correlating all but the O 1s and N 1s electrons with a cc-pVQZ basis set.
Is HO3 minimum cis or trans? An analytic full-dimensional ab initio isomerization path.
Varandas, A J C
2011-05-28
The minimum energy path for isomerization of HO3 has been explored in detail using accurate high-level ab initio methods and techniques for extrapolation to the complete basis set limit. In agreement with other reports, the best estimates from both valence-only and all-electron single-reference methods utilized here predict the minimum of the cis-HO3 isomer to be deeper than the trans-HO3 one. They also show that the energy varies by less than about 1 kcal/mol over the full isomerization path. A similar result is found from valence-only multireference configuration interaction calculations with the size-extensive Davidson correction and a correlation consistent triple-zeta basis, which predict the energy difference between the two isomers to be of only Δ = -0.1 kcal/mol. However, single-point multireference calculations carried out at the optimum triple-zeta geometry with basis sets of the correlation consistent family but cardinal numbers up to X = 6 lead, upon a dual-level extrapolation to the complete basis set limit, to Δ = (0.12 ± 0.05) kcal/mol. In turn, extrapolations with the all-electron single-reference coupled-cluster method including the perturbative triples correction yield values of Δ = -0.19 and -0.03 kcal/mol when done from triple-quadruple and quadruple-quintuple zeta pairs with two basis sets of increasing quality, namely cc-pVXZ and aug-cc-pVXZ. Yet, if a value of 0.25 kcal/mol that accounts for the effect of triple and perturbative quadruple excitations with the VTZ basis set is added, one obtains a coupled cluster estimate of Δ = (0.14 ± 0.08) kcal/mol. It is then shown for the first time from systematic ab initio calculations that the trans-HO3 isomer is more stable than the cis one, in agreement with the available experimental evidence. Inclusion of the best reported zero-point energy difference (0.382 kcal/mol) from multireference configuration interaction calculations enhances the relative stability further, to ΔE(ZPE) = (0.51 ± 0.08) kcal/mol. A scheme is also suggested to model the full-dimensional isomerization potential-energy surface using a quadratic expansion that is parametrically represented by a Fourier analysis in the torsion angle. The method, illustrated at the raw and complete basis-set limit coupled-cluster levels, can provide a valuable tool for a future analysis of the available (incomplete thus far) experimental rovibrational data. This journal is © the Owner Societies 2011
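The quoted coupled-cluster estimate can be reproduced by averaging the two extrapolated values and adding the higher-excitation correction; this is our reading of the arithmetic, with the quoted uncertainty spanning the two extrapolations.

```latex
% Combining the numbers quoted in the abstract above:
\begin{equation}
\Delta \;\approx\; \tfrac{1}{2}\bigl[(-0.19) + (-0.03)\bigr] + 0.25
\;=\; 0.14 \pm 0.08 \ \mathrm{kcal/mol}
\end{equation}
```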
Parametric study of potential early commercial power plants Task 3-A MHD cost analysis
NASA Technical Reports Server (NTRS)
1983-01-01
The development of costs for an MHD power plant and the comparison of these costs to a conventional coal-fired power plant are reported. The program is divided into three activities: (1) code of accounts review; (2) MHD pulverized coal power plant cost comparison; (3) operating and maintenance cost estimates. The scope of each NASA code of account item was defined to assure that the recently completed Task 3 capital cost estimates are consistent with the code of account scope. Confidence in MHD plant capital cost estimates is improved by identifying comparability with conventional pulverized-coal-fired (PCF) power plant systems. The basis for estimating the MHD plant operating and maintenance costs of electricity is verified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Aprà, Edoardo; Xantheas, Sotiris S.
We establish a new estimate for the binding energy between two benzene molecules in the parallel-displaced (PD) conformation by systematically converging (i) the intra- and intermolecular geometry at the minimum, (ii) the expansion of the orbital basis set, and (iii) the level of electron correlation. The calculations were performed at the second-order Møller–Plesset perturbation (MP2) and the coupled cluster including singles, doubles, and a perturbative estimate of triples replacement [CCSD(T)] levels of electronic structure theory. At both levels of theory, by including results corrected for basis set superposition error (BSSE), we have estimated the complete basis set (CBS) limit by employing the family of Dunning's correlation-consistent polarized valence basis sets. The largest MP2 calculation was performed with the cc-pV6Z basis set (2772 basis functions), whereas the largest CCSD(T) calculation was with the cc-pV5Z basis set (1752 basis functions). The cluster geometries were optimized with basis sets up to quadruple-ζ quality, observing that both their intra- and intermolecular parts have practically converged with the triple-ζ quality sets. The use of converged geometries was found to play an important role for obtaining accurate estimates for the CBS limits. Our results demonstrate that the binding energies with the families of the plain (cc-pVnZ) and augmented (aug-cc-pVnZ) sets converge [within <0.01 kcal/mol for MP2 and <0.15 kcal/mol for CCSD(T)] to the same CBS limit. In addition, the average of the uncorrected and BSSE-corrected binding energies was found to converge to the same CBS limit much faster than either of the two constituents (uncorrected or BSSE-corrected binding energies). Due to the fact that the family of augmented basis sets (especially for the larger sets) causes serious linear dependency problems, the plain basis sets (for which no linear dependencies were found) are deemed a more efficient and straightforward path for obtaining an accurate CBS limit. We considered extrapolations of the uncorrected (ΔE) and BSSE-corrected (ΔE_cp) binding energies, their average value (ΔE_ave), as well as the average of the latter over the plain and augmented sets (ΔẼ_ave) with the cardinal number of the basis set n. Our best estimate of the CCSD(T)/CBS limit for the π–π binding energy in the PD benzene dimer is De = -2.65 ± 0.02 kcal/mol. The best CCSD(T)/cc-pV5Z calculated value is -2.62 kcal/mol, just 0.03 kcal/mol away from the CBS limit. For comparison, the MP2/CBS limit estimate is -5.00 ± 0.01 kcal/mol, demonstrating a 90% overbinding with respect to CCSD(T). Finally, the spin-component-scaled (SCS) MP2 variant was found to closely reproduce the CCSD(T) results for each basis set, while scaled opposite spin (SOS) MP2 yielded results that are too low when compared to CCSD(T).
Lumber and plywood used in California apartment construction, 1969
George B. Harpole
1973-01-01
The volume of lumber and plywood products used in apartment construction in California was estimated from a sample of apartments for which architectural plans were completed in 1969. Excluding wood mouldings, doors, cabinets, and shelving, an average of 4.85 board feet of lumber and 2.03 square feet (3/8-inch basis) of plywood per square foot of floor area were used in…
Oil and gas pipeline construction cost analysis and developing regression models for cost estimation
NASA Astrophysics Data System (ADS)
Thaduri, Ravi Kiran
In this study, cost data for 180 pipelines and 136 compressor stations have been analyzed. On the basis of the distribution analysis, regression models have been developed. Material, labor, right-of-way (ROW), and miscellaneous costs make up the total cost of a pipeline construction. The pipelines are analyzed based on different pipeline lengths, diameter, location, pipeline volume, and year of completion. In a pipeline construction, labor costs dominate the total costs with a share of about 40%. Multiple non-linear regression models are developed to estimate the component costs of pipelines for various cross-sectional areas, lengths, and locations. The compressor stations are analyzed based on capacity, year of completion, and location. Unlike the pipeline costs, material costs dominate the total costs in the construction of compressor stations, with an average share of about 50.6%. Land costs have very little influence on the total costs. Similar regression models are developed to estimate the component costs of compressor stations for various capacities and locations.
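A sketch of one plausible multiple non-linear regression of the kind described; the power-law functional form and the synthetic data are assumptions for demonstration, not the study's fitted model.

```python
# Fitting a hypothetical power-law cost model for one pipeline cost component
# as a function of diameter and length; data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def cost_model(X, a, b, c):
    """Component cost as a function of diameter (in) and length (mi)."""
    diameter, length = X
    return a * diameter**b * length**c

rng = np.random.default_rng(0)
diameter = rng.uniform(8, 42, 60)    # synthetic predictors
length = rng.uniform(1, 200, 60)
cost = 1.2e4 * diameter**1.1 * length**0.9 * rng.lognormal(0, 0.1, 60)

params, _ = curve_fit(cost_model, (diameter, length), cost, p0=(1e4, 1.0, 1.0))
print("fitted a, b, c:", params)
```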
NASA Astrophysics Data System (ADS)
Waeldele, F.
1983-01-01
The influence of sample shape deviations on the measurement uncertainties and the optimization of computer-aided coordinate measurement were investigated for a circle and a cylinder. Using the complete error-propagation law in matrix form, the parameter uncertainties are calculated, taking the correlation between the measurement points into account. Theoretical investigations show that the measuring points have to be equidistantly distributed and that, for a cylindrical body, a measuring-point distribution along a cross section is better than along a helical line. The theoretically obtained expressions for calculating the uncertainties prove to be a good basis for estimation; the simple error theory is not satisfactory for this purpose. The complete statistical data analysis theory helps to avoid aggravating measurement errors and to adjust the number of measuring points to the required measuring uncertainty.
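In modern notation (ours, not the paper's), the complete error-propagation law in matrix form for least-squares fitting of a geometric element reads:

```latex
% Parameter covariance for correlated measuring points (notation ours):
\begin{equation}
\Sigma_{\hat{\theta}} \;=\; \bigl(J^{\mathsf{T}}\,\Sigma_{y}^{-1}\,J\bigr)^{-1},
\qquad J_{ij} \;=\; \frac{\partial f_i(\theta)}{\partial \theta_j}
\end{equation}
```

where Σ_y is the full (non-diagonal) covariance matrix of the measured coordinates; its off-diagonal entries carry the correlation between measuring points that the abstract emphasizes.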
Evaluation of active cooling systems for a Mach 6 hypersonic transport airframe, part 2
NASA Technical Reports Server (NTRS)
Helenbrook, R. G.; Mcconarty, W. A.; Anthony, F. M.
1971-01-01
Transpiration and convective cooling concepts are examined for the fuselage and tail surface of a Mach 6 hypersonic transport aircraft. Hydrogen, helium, and water are considered as coolants. Heat shields and radiation barriers are examined to reduce heat flow to the cooled structures. The weight and insulation requirements for the cryogenic fuel tanks are examined so that realistic totals can be estimated for the complete fuselage and tail. Structural temperatures are varied to allow comparison of aluminum alloy, titanium alloy, and superalloy construction materials. The results of the study are combined with results for the wing structure obtained in a previous study to estimate weights for the complete airframe. The concepts are compared among themselves, and with the uncooled concept, on the basis of structural weight, cooling system weight, and coolant weight.
Hung, Linda; Bruneval, Fabien; Baishya, Kopinjol; ...
2017-04-07
Energies from the GW approximation and the Bethe–Salpeter equation (BSE) are benchmarked against the excitation energies of transition-metal (Cu, Zn, Ag, and Cd) single atoms and monoxide anions. We demonstrate that best estimates of GW quasiparticle energies at the complete basis set limit should be obtained via extrapolation or closure relations, while numerically converged GW-BSE eigenvalues can be obtained on a finite basis set. Calculations using real-space wave functions and pseudopotentials are shown to give best-estimate GW energies that agree (up to the extrapolation error) with calculations using all-electron Gaussian basis sets. We benchmark the effects of a vertex approximation (ΓLDA) and the mean-field starting point in GW and the BSE, performing computations using a real-space, transition-space basis and scalar-relativistic pseudopotentials. Here, while no variant of GW improves on perturbative G0W0 at predicting ionization energies, G0W0ΓLDA-BSE computations give excellent agreement with experimental absorption spectra as long as off-diagonal self-energy terms are included. We also present G0W0 quasiparticle energies for the CuO−, ZnO−, AgO−, and CdO− anions, in comparison to available anion photoelectron spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Gurmeet; Nandi, Apurba; Gadre, Shridhar R.
2016-03-14
A pragmatic method based on the molecular tailoring approach (MTA) for estimating the complete basis set (CBS) limit at Møller-Plesset second order perturbation (MP2) theory accurately for large molecular clusters with limited computational resources is developed. It is applied to water clusters, (H2O)n (n = 7, 8, 10, 16, 17, and 25), optimized employing the aug-cc-pVDZ (aVDZ) basis set. Binding energies (BEs) of these clusters are estimated at the MP2/aug-cc-pVNZ (aVNZ) [N = T, Q, and 5 (whenever possible)] levels of theory employing the grafted MTA (GMTA) methodology and are found to lie within 0.2 kcal/mol of the corresponding full-calculation MP2 BE, wherever available. The results are extrapolated to the CBS limit using a three-point formula. The GMTA-MP2 calculations are feasible on off-the-shelf hardware and show around 50%-65% saving of computational time. The methodology has a potential for application to molecular clusters containing ∼100 atoms.
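One commonly used three-point exponential CBS formula is shown below; whether this exact form was used in the work above is an assumption on our part.

```latex
% Three-point exponential extrapolation from energies at three consecutive
% cardinal numbers X, X+1, X+2 (a common choice; assumed, not confirmed):
\begin{equation}
E(X) \;=\; E_{\mathrm{CBS}} + A\,e^{-BX}
\quad\Longrightarrow\quad
E_{\mathrm{CBS}} \;=\; \frac{E_{X}\,E_{X+2} - E_{X+1}^{2}}
                            {E_{X} + E_{X+2} - 2\,E_{X+1}}
\end{equation}
```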
Many-body calculations of molecular electric polarizabilities in asymptotically complete basis sets
NASA Astrophysics Data System (ADS)
Monten, Ruben; Hajgató, Balázs; Deleuze, Michael S.
2011-10-01
The static dipole polarizabilities of Ne, CO, N2, F2, HF, H2O, HCN, and C2H2 (acetylene) have been determined close to the Full-CI limit along with an asymptotically complete basis set (CBS), according to the principles of a Focal Point Analysis. For this purpose the results of Finite Field calculations up to the level of Coupled Cluster theory including Single, Double, Triple, Quadruple and perturbative Pentuple excitations [CCSDTQ(P)] were used, in conjunction with suited extrapolations of energies obtained using augmented and doubly-augmented Dunning's correlation consistent polarized valence basis sets of improving quality. The polarizability characteristics of C2H4 (ethylene) and C2H6 (ethane) have been determined on the same grounds at the CCSDTQ level in the CBS limit. Comparison is made with results obtained using lower levels in electronic correlation, or taking into account the relaxation of the molecular structure due to an adiabatic polarization process. Vibrational corrections to electronic polarizabilities have been empirically estimated according to Born-Oppenheimer Molecular Dynamical simulations employing Density Functional Theory. Confrontation with experiment ultimately indicates relative accuracies of the order of 1 to 2%.
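The finite-field extraction of a static polarizability component used in such calculations is standard; a central-difference form is shown below (field strength and component labels are illustrative).

```latex
% Central-difference finite-field formula for one diagonal component of the
% static dipole polarizability (standard; labels illustrative):
\begin{equation}
\alpha_{zz} \;\approx\; -\left.\frac{\partial^{2} E}{\partial F_{z}^{2}}\right|_{F=0}
\;\approx\; \frac{2E(0) - E(+F_z) - E(-F_z)}{F_{z}^{2}}
\end{equation}
```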
A correlated ab initio study of linear carbon-chain radicals CnH (n = 2-7)
NASA Technical Reports Server (NTRS)
Woon, D. E.; Loew, G. H. (Principal Investigator)
1995-01-01
Linear carbon-chain radicals CnH for n = 2-7 have been studied with correlation consistent valence and core-valence basis sets and the coupled cluster method RCCSD(T). Equilibrium structures, rotational constants, and dipole moments are reported and compared with available experimental data. The ground state of the even-n series changes from ²Σ⁺ to ²Π as the chain is extended. For C4H, the ²Σ⁺ state was found to lie only 72 cm⁻¹ below the ²Π state in the estimated complete basis set limit for valence correlation. The C2H⁻ and C3H⁻ anions have also been characterized.
A correlated ab initio study of the A²Π ← X²Σ⁺ transition in MgCCH
NASA Technical Reports Server (NTRS)
Woon, D. E.
1997-01-01
The A²Π ← X²Σ⁺ transition in MgCCH was studied with correlation consistent basis sets and single- and multireference correlation methods. The A²Π excited state was characterized in detail; the X²Σ⁺ ground state has been described elsewhere recently. The estimated complete basis set (CBS) limits for valence correlation, including zero-point energy corrections, are 22668, 23191, and 22795 cm⁻¹ for the RCCSD(T), MRCI, and MRCI+Q methods, respectively. A core-valence correction of +162 cm⁻¹ shifts the RCCSD(T) value to 22830 cm⁻¹, in good agreement with the experimental result of 22807 cm⁻¹.
Cronin, John; Storey, Adam; Zourdos, Michael C.
2016-01-01
Ratings of perceived exertion are a valid method of estimating the intensity of a resistance training exercise or session. Scores are given after completion of an exercise or training session for the purposes of athlete monitoring. However, a newly developed scale based on how many repetitions are remaining at the completion of a set may be a more precise tool. This approach adjusts loads automatically to match athlete capabilities on a set-to-set basis and may more accurately gauge intensity at near-limit loads. This article outlines how to incorporate this novel scale into a training plan. PMID:27531969
An ab initio benchmark study of the H + CO --> HCO reaction
NASA Technical Reports Server (NTRS)
Woon, D. E.
1996-01-01
The H + CO --> HCO reaction has been characterized with correlation consistent basis sets at five levels of theory in order to benchmark the sensitivities of the barrier height and reaction ergicity to the one-electron and n-electron expansions of the electronic wave function. Single and multireference methods are compared and contrasted. The coupled cluster method RCCSD(T) was found to be in very good agreement with Davidson-corrected internally-contracted multireference configuration interaction (MRCI+Q). Second-order Moller-Plesset perturbation theory (MP2) was also employed. The estimated complete basis set (CBS) limits for the barrier height (in kcal/mol) for the five methods, including harmonic zero-point energy corrections, are MP2, 4.66; RCCSD, 4.78; RCCSD(T), 4.15; MRCI, 5.10; and MRCI+Q, 4.07. Similarly, the estimated CBS limits for the ergicity of the reaction are: MP2, -17.99; RCCSD, -13.34; RCCSD(T), -13.79; MRCI, -11.46; and MRCI+Q, -13.70. Additional basis set explorations for the RCCSD(T) method demonstrate that aug-cc-pVTZ sets, even with some functions removed, are sufficient to reproduce the CBS limits to within 0.1-0.3 kcal/mol.
Changes in whole-body metabolic parameters associated with radiation
NASA Astrophysics Data System (ADS)
Ahlers, I.
1994-10-01
Continuous irradiation of experimental animals is an appropriate model for research in space radiobiology. The onset of and recovery from radiation injury can be estimated on the basis of the concentration/content of glycogen in the liver, the phospholipid content in the thymus and other radiosensitive organs, and the triacylglycerol concentration in bone marrow. Further, the picture of the metabolism in the irradiated organism may be completed by the analysis of serum glucocorticoid and thyroid hormone levels.
Time estimation predicts mathematical intelligence.
Kramer, Peter; Bressan, Paola; Grassi, Massimo
2011-01-01
Performing mental subtractions affects time (duration) estimates, and making time estimates disrupts mental subtractions. This interaction has been attributed to the concurrent involvement of time estimation and arithmetic with general intelligence and working memory. Given the extant evidence of a relationship between time and number, here we test the stronger hypothesis that time estimation correlates specifically with mathematical intelligence, and not with general intelligence or working-memory capacity. Participants performed a (prospective) time estimation experiment, completed several subtests of the WAIS intelligence test, and self-rated their mathematical skill. For five different durations, we found that time estimation correlated with both arithmetic ability and self-rated mathematical skill. Controlling for non-mathematical intelligence (including working memory capacity) did not change the results. Conversely, correlations between time estimation and non-mathematical intelligence either were nonsignificant, or disappeared after controlling for mathematical intelligence. We conclude that time estimation specifically predicts mathematical intelligence. On the basis of the relevant literature, we furthermore conclude that the relationship between time estimation and mathematical intelligence is likely due to a common reliance on spatial ability.
Inventory Data Package for Hanford Assessments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kincaid, Charles T.; Eslinger, Paul W.; Aaberg, Rosanne L.
2006-06-01
This document presents the basis for a compilation of inventory for radioactive contaminants of interest by year for all potentially impactive waste sites on the Hanford Site for which inventory data exist in records or could be reasonably estimated. This document also includes discussions of the historical, current, and reasonably foreseeable (1944 to 2070) future radioactive waste and waste sites; the inventories of radionuclides that may have a potential for environmental impacts; a description of the method(s) for estimating inventories where records are inadequate; a description of the screening method(s) used to select those sites and contaminants that might make a substantial contribution to impacts; a listing of the remedial actions and their completion dates for waste sites; and tables showing the best estimate inventories available for Hanford assessments.
How many species of flowering plants are there?
Joppa, Lucas N.; Roberts, David L.; Pimm, Stuart L.
2011-01-01
We estimate the probable number of flowering plants. First, we apply a model that explicitly incorporates taxonomic effort over time to estimate the number of as-yet-unknown species. Second, we ask taxonomic experts their opinions on how many species are likely to be missing, on a family-by-family basis. The results are broadly comparable. We show that the current number of species should grow by between 10 and 20 per cent. There are, however, interesting discrepancies between expert and model estimates for some families, suggesting that our model does not always completely capture patterns of taxonomic activity. The as-yet-unknown species are probably similar to those taxonomists have described recently—overwhelmingly rare and local, and disproportionately in biodiversity hotspots, where there are high levels of habitat destruction. PMID:20610425
Accuracy of Consumer Monitors for Estimating Energy Expenditure and Activity Type.
Woodman, James A; Crouter, Scott E; Bassett, David R; Fitzhugh, Eugene C; Boyer, William R
2017-02-01
Increasing use of consumer-based physical activity (PA) monitors necessitates that they are validated against criterion measures. Thus, the purpose of this study was to examine the accuracy of three consumer-based PA monitors for estimating energy expenditure (EE) and PA type during simulated free-living activities. Twenty-eight participants (mean ± SD: age, 25.5 ± 3.7 yr; body mass index, 24.9 ± 2.6 kg·m⁻²) completed 11 activities ranging from sedentary behaviors to vigorous intensities. Simultaneous measurements were made with an Oxycon portable calorimeter (criterion), a Basis Peak and Garmin Vivofit on the nondominant wrist, and three Withings Pulse devices (right hip, shirt collar, dominant wrist). Repeated-measures ANOVA were used to examine differences between measured and predicted EE. Intraclass correlation coefficients were calculated to determine reliability of EE predictions between Withings placements. Paired samples t tests were used to determine mean differences between observed minutes and Basis Peak predictions during walking, running, and cycling. On average, the Basis Peak was within 8% of measured EE for the entire PA routine (P > 0.05); however, there were large individual errors (95% prediction interval, -290.4 to +233.1 kcal). All other devices were significantly different from measured EE for the entire PA routine (P < 0.05). For activity types, Basis Peak correctly identified ≥92% of actual minutes spent walking and running (P > 0.05), and 40.4% and 0% of overground and stationary cycling minutes, respectively (P < 0.001). The Basis Peak was the only device that did not significantly differ from measured EE; however, it also had the largest individual errors. Additionally, the Basis Peak accurately predicted minutes spent walking and running, but not cycling.
Fallout deposition in the Marshall Islands from Bikini and Enewetak nuclear weapons tests.
Beck, Harold L; Bouville, André; Moroz, Brian E; Simon, Steven L
2010-08-01
Deposition densities (Bq m⁻²) of all important dose-contributing radionuclides occurring in nuclear weapons testing fallout from tests conducted at Bikini and Enewetak Atolls (1946-1958) have been estimated on a test-specific basis for 32 atolls and separate reef islands of the Marshall Islands. A complete review of various historical and contemporary data, as well as meteorological analysis, was used to make judgments regarding which tests deposited fallout in the Marshall Islands and to estimate fallout deposition density. Our analysis suggested that only 20 of the 66 nuclear tests conducted in or near the Marshall Islands resulted in substantial fallout deposition on any of the 23 inhabited atolls. This analysis was confirmed by the fact that the sum of our estimates of 137Cs deposition from these 20 tests at each atoll is in good agreement with the total 137Cs deposited as estimated from contemporary soil sample analyses. The monitoring data and meteorological analyses were used to quantitatively estimate the deposition density of 63 activation and fission products for each nuclear test, plus the cumulative deposition of 239+240Pu at each atoll. Estimates of the degree of fractionation of fallout from each test at each atoll, as well as of the fallout transit times from the test sites to the atolls were used in this analysis. The estimates of radionuclide deposition density, fractionation, and transit times reported here are the most complete available anywhere and are suitable for estimations of both external and internal dose to representative persons as described in companion papers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varandas, A. J. C.; Pansini, F. N. N.
2014-12-14
A method previously suggested to calculate the correlation energy at the complete one-electron basis set limit by reassignment of the basis hierarchical numbers and use of the unified singlet- and triplet-pair extrapolation scheme is applied to a test set of 106 systems, some with up to 48 electrons. The approach is utilized to obtain extrapolated correlation energies from raw values calculated with second-order Møller-Plesset perturbation theory and the coupled-cluster singles and doubles excitations method, some of the latter also with the perturbative triples corrections. The calculated correlation energies have also been used to predict atomization energies within an additive scheme. Good agreement is obtained with the best available estimates even when the (d, t) pair of hierarchical numbers is utilized to perform the extrapolations. This conceivably justifies that there is no strong reason to exclude double-zeta energies in extrapolations, especially if the basis is calibrated to comply with the theoretical model.
NASA Astrophysics Data System (ADS)
Liu, B.; McLean, A. D.
1989-08-01
We report the LM-2 helium dimer interaction potential, from helium separations of 1.6 Å to dissociation, obtained by careful convergence studies with respect to configuration space, through a sequence of interacting correlated fragment (ICF) wave functions, and with respect to the primitive Slater-type basis used for orbital expansion. Parameters of the LM-2 potential are rm=2.969 Å, σ=2.642 Å, and De=10.94 K, in near complete agreement with those of the best experimental potential of Aziz, McCourt, and Wong [Mol. Phys. 61, 1487 (1987)], which are rm=2.963 Å, σ=2.637 Å, and De=10.95 K. The computationally estimated accuracy of each point on the potential is given; at rm it is 0.03 K. Extrapolation procedures used to produce the LM-2 potential make use of the orbital basis inconsistency (OBI) and configuration base inconsistency (CBI) adjustments to separated fragment energies when computing the interaction energy. These components of basis set superposition error (BSSE) are given a full discussion.
Scharfenberg, Janna; Schaper, Katharina; Krummenauer, Frank
2014-01-01
The German "Dr med" plays a specific role in doctoral thesis settings since students may start the underlying doctoral project during their studies at medical school. If a Medical Faculty principally encourages this approach, then it should support the students in performing the respective projects as efficiently as possible. Consequently, it must be ensured that students are able to implement and complete a doctoral project in parallel to their studies. As a characteristic efficiency feature of these "Dr med" initiatives, the proportion of doctoral projects successfully completed shortly after graduating from medical school is proposed and illustrated. The proposed characteristic can be estimated by the time period between the state examination (date of completion of the qualifying medical examination) and the doctoral examination. Completion of the doctoral project "during their medical studies" was then characterised by a doctoral examination no later than 12 months after the qualifying medical state examination. To illustrate the estimation and interpretation of this characteristic, it was retrospectively estimated on the basis of the full sample of all doctorates successfully completed between July 2009 and June 2012 at the Department of Human Medicine at the Faculty of Health of the University of Witten/Herdecke. During the period of investigation defined, a total number of 56 doctoral examinations were documented, 30 % of which were completed within 12 months after the qualifying medical state examination (95% confidence interval 19 to 44 %). The median duration between state and doctoral examination was 27 months. The proportion of doctoral projects completed parallel to the medical studies increased during the investigation period from 14 % in the first year (July 2009 till June 2010) to 40 % in the third year (July 2011 till June 2012). Only about a third of all "Dr med" projects at the Witten/Herdecke Faculty of Health were completed during or close to the qualifying medical studies. This proportion, however, increased after the introduction of a curriculum on research methodology and practice in 2010; prospective longitudinal studies will have to clarify whether this is causal or mere chronological coincidence. In summary, the proposed method for determining the process efficiency of a medical faculty's "Dr med" programme has proven to be both feasible and informative. Copyright © 2014. Published by Elsevier GmbH.
Analysis of multilocus zygotic associations.
Yang, Rong-Cai
2002-05-01
While nonrandom associations between zygotes at different loci (zygotic associations) frequently occur in Hardy-Weinberg disequilibrium populations, statistical analysis of such associations has received little attention. In this article, we describe the joint distributions of zygotes at multiple loci, which are completely characterized by heterozygosities at individual loci and various multilocus zygotic associations. These zygotic associations are defined in the same fashion as the usual multilocus linkage (gametic) disequilibria on the basis of gametic and allelic frequencies. The estimation and test procedures are described with details being given for three loci. The sampling properties of the estimates are examined through Monte Carlo simulation. The estimates of three-locus associations are not free of bias due to the presence of two-locus associations and vice versa. The power of detecting the zygotic associations is small unless different loci are strongly associated and/or sample sizes are large (>100). The analysis of zygotic associations not only offers an effective means of packaging numerous genic disequilibria required for a complete characterization of multilocus structure, but also provides opportunities for making inference about evolutionary and demographic processes through a comparative assessment of zygotic association vs. gametic disequilibrium for the same set of loci in nonequilibrium populations.
NASA Astrophysics Data System (ADS)
Maranzana, Andrea; Giordana, Anna; Indarto, Antonius; Tonachini, Glauco; Barone, Vincenzo; Causà, Mauro; Pavone, Michele
2013-12-01
Our purpose is to identify a computational level sufficiently dependable and affordable to assess trends in the interaction of a variety of radical or closed-shell unsaturated hydrocarbons A adsorbed on soot platelet models B. These systems, of environmental interest, would unavoidably have rather large sizes, which prompts us to explore here the performance of relatively low-level computational methods and to compare them with higher-level reference results. To this end, the interactions of three complexes between non-polar species, vinyl radical, ethyne, or ethene (A), and benzene (B) are studied, since these species, themselves involved in the growth processes of polycyclic aromatic hydrocarbons (PAHs) and soot particles, are small enough to allow high-level reference calculations of the interaction energy ΔEAB. Counterpoise-corrected interaction energies ΔEAB are used at all stages. (1) Density Functional Theory (DFT) unconstrained optimizations of the A-B complexes are carried out using the B3LYP-D, ωB97X-D, and M06-2X functionals, with six basis sets: 6-31G(d), 6-311G(2d,p), and 6-311++G(3df,3pd); aug-cc-pVDZ and aug-cc-pVTZ; N07T. (2) Then, unconstrained optimizations by Møller-Plesset second-order perturbation theory (MP2), with each basis set, allow subsequent single-point coupled cluster singles and doubles with perturbative triples energy computations with the same basis sets [CCSD(T)//MP2]. (3) Based on an additivity assumption of (i) the estimated MP2 energy at the complete basis set limit [EMP2/CBS] and (ii) the higher-order correlation energy effects in passing from MP2 to CCSD(T) with the aug-cc-pVTZ basis set, ΔECC-MP, a CCSD(T)/CBS estimate is obtained and taken as the computational energy reference. At the DFT level, variations in ΔEAB with basis set are not large for the title molecules, and the three functionals perform rather satisfactorily even with rather small basis sets [6-31G(d) and N07T], exhibiting deviations from the computational reference of less than 1 kcal mol-1. The zero-point vibrational energy corrected estimates Δ(EAB+ZPE), obtained with the three functionals and the 6-31G(d) and N07T basis sets, are compared with experimental D0 values, when available. Finally, this comparison is extended to the naphthalene and coronene dimers and to three π-π associations of different PAHs (R, made of 10, 16, or 24 C atoms) and P (80 C atoms).
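Step (3) above is a simple additivity estimate. A minimal sketch is shown below; the function name is hypothetical and consistent units are assumed.

```python
def ccsdt_cbs_additivity(e_mp2_cbs, e_ccsdt_avtz, e_mp2_avtz):
    """Composite estimate based on the additivity assumption in step (3):

        E[CCSD(T)/CBS] ~= E[MP2/CBS] + (E[CCSD(T)/aVTZ] - E[MP2/aVTZ])

    The bracketed difference is the higher-order correlation correction
    (DeltaE_CC-MP) evaluated in the aug-cc-pVTZ basis set.
    """
    return e_mp2_cbs + (e_ccsdt_avtz - e_mp2_avtz)
```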
On the ab initio evaluation of Hubbard parameters. II. The κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal
NASA Astrophysics Data System (ADS)
Fortunelli, Alessandro; Painelli, Anna
1997-05-01
A previously proposed approach for the ab initio evaluation of Hubbard parameters is applied to BEDT-TTF dimers. The dimers are positioned according to four geometries taken as the first neighbors from the experimental data on the κ-(BEDT-TTF)2Cu[N(CN)2]Br crystal. RHF-SCF, CAS-SCF and frozen-orbital calculations using the 6-31G** basis set are performed with different values of the total charge, allowing us to derive all the relevant parameters. It is found that the electronic structure of the BEDT-TTF planes is adequately described by the standard Extended Hubbard Model, with the off-diagonal electron-electron interaction terms (X and W) of negligible size. The derived parameters are in good agreement with available experimental data. Comparison with previous theoretical estimates shows that the t values compare well with those obtained from Extended Hückel Theory (whereas the minimal basis set estimates are completely unreliable). On the other hand, the Uaeff values exhibit an appreciable dependence on the chemical environment.
Wilson, James H; Mullen, Maureen A; Bollman, Andrew D; Thesing, Kirstin B; Salhotra, Manish; Divita, Frank; Neumann, James E; Price, Jason C; DeMocker, James
2008-05-01
Section 812 of the Clean Air Act Amendments (CAAA) of 1990 requires the U.S. Environmental Protection Agency (EPA) to perform periodic, comprehensive analyses of the total costs and total benefits of programs implemented pursuant to the CAAA. The first prospective analysis was completed in 1999. The second prospective analysis was initiated during 2005. The first step in the second prospective analysis was the development of base and projection year emission estimates that will be used to generate benefit estimates of CAAA programs. This paper describes the analysis, methods, and results of the recently completed emission projections. There are several unique features of this analysis. One is the use of consistent economic assumptions from the Department of Energy's Annual Energy Outlook 2005 (AEO 2005) projections as the basis for estimating 2010 and 2020 emissions for all sectors. Another is the analysis of the different emissions paths for both with- and without-CAAA scenarios. Other features of this analysis include being the first EPA analysis that uses the 2002 National Emission Inventory files as the basis for making 48-state emission projections, incorporating control factor files from the Regional Planning Organizations (RPOs) that had completed emission projections at the time the analysis was performed, and modeling the emission benefits of the expected adoption of measures to meet the 8-hr ozone National Ambient Air Quality Standards (NAAQS), the Clean Air Visibility Rule, and the PM2.5 NAAQS. This analysis shows that the 1990 CAAA have produced significant reductions in criteria pollutant emissions since 1990 and that these emission reductions are expected to continue through 2020. CAAA provisions had reduced volatile organic compound (VOC) emissions by approximately 7 million t/yr by 2000 and are estimated to produce associated VOC emission reductions of 16.7 million t by 2020. Total oxides of nitrogen (NOx) emission reductions attributable to the CAAA are 5, 12, and 17 million t in 2000, 2010, and 2020, respectively. Sulfur dioxide (SO2) emission benefits during the study period are dominated by electricity-generating unit (EGU) SO2 emission reductions; these EGU benefits increase from 7.5 million t reduced in 2000 to 15 million t reduced in 2020.
Dense Velocity Field of Turkey
NASA Astrophysics Data System (ADS)
Ozener, H.; Aktug, B.; Dogru, A.; Tasci, L.
2017-12-01
While GNSS-based crustal deformation studies in Turkey date back to the early 1990s, a homogeneous velocity field utilizing all the available data is still missing. Regional studies employing different site distributions, observation plans, processing software, and methodologies not only create reference frame variations but also heterogeneous stochastic models. While the reference frame effect between different velocity fields could easily be removed by estimating a set of rotations, the homogenization of the stochastic models of the individual velocity fields requires a more detailed analysis. Using a rigorous Variance Component Estimation (VCE) methodology, we estimated the variance factors for each of the contributing velocity fields and combined them into a single homogeneous velocity field covering the whole of Turkey. Results show that variance factors between velocity fields, including survey-mode and continuous observations, can vary by a few orders of magnitude. In this study, we present the most complete velocity field in Turkey, rigorously combined from 20 individual velocity fields including the 146-station CORS network and a total of 1072 stations. In addition, three GPS campaigns were performed along the North Anatolian Fault and the Aegean Region to fill the gap between existing velocity fields. The homogeneously combined new velocity field is nearly complete in terms of geographic coverage, and will serve as the basis for further analyses such as the estimation of deformation rates and the determination of slip rates across main fault zones.
NASA Astrophysics Data System (ADS)
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/Lα two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
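For concreteness, here is a minimal sketch of the two schemes compared above: the A + B/Lα two-point extrapolation and the MP2-based additivity correction. Function names and the default exponent are illustrative assumptions, not values from the paper.

```python
def cbs_two_point(e_small, l_small, e_large, l_large, alpha=3.0):
    """Two-point extrapolation of the model E(L) = E_CBS + B / L**alpha.

    Eliminating B between the two cardinal numbers L1 < L2 gives
    E_CBS = (L2**a * E2 - L1**a * E1) / (L2**a - L1**a).
    """
    a = alpha
    return (l_large ** a * e_large - l_small ** a * e_small) / (l_large ** a - l_small ** a)

def ccsd_additivity(e_ccsd_small, e_mp2_small, e_mp2_big):
    """MP2-based basis-set-correction term: the CCSD energy in a small
    basis plus the MP2 increment toward a larger basis (or the CBS limit)."""
    return e_ccsd_small + (e_mp2_big - e_mp2_small)
```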
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sherrill, C. David; Byrd, Edward F. C.; Head-Gordon, Martin
2000-07-22
A recent study by Ahmed, Peterka, and Suits [J. Chem. Phys. 110, 4248 (1999)] has presented the first experimentally derived estimate of the singlet-triplet gap in the simplest alkyne, acetylene. Their value, T0(ã 3B2) = 28 900 cm-1, does not agree with previous theoretical predictions using the coupled-cluster singles, doubles, and perturbative triples [CCSD(T)] method and a triple-ζ plus double polarization plus f-function basis set (TZ2Pf), which yields 30 500 ± 1000 cm-1. This discrepancy has prompted us to investigate possible deficiencies in this usually accurate theoretical approach. Employing extrapolations to the complete basis set limit along with corrections for full connected triple excitations, core correlation, and even relativistic effects, we obtain a value of 30 900 cm-1 (estimated uncertainty ±230 cm-1), demonstrating that the experimental value is underestimated. To assist in the interpretation of anticipated future experiments, we also present highly accurate excitation energies for the other three low-lying triplet states of acetylene, ã 3Bu (33 570 ± 230 cm-1), b̃ 3Au (36 040 ± 260 cm-1), and b̃ 3A2 (38 380 ± 260 cm-1), and the three lowest-lying states of vinylidene, X̃ 1A1 (15 150 ± 230 cm-1), ã 3B2 (31 870 ± 230 cm-1), and b̃ 3A2 (36 840 ± 350 cm-1). Finally, we assess the ability of density functional theory (DFT) and the Gaussian-3 method to match our benchmark results for adiabatic excitation energies of C2H2. © 2000 American Institute of Physics.
What influences midwives in estimating labour pain?
Williams, A C de C; Morris, J; Stevens, K; Gessler, S; Cella, M; Baxter, J
2013-01-01
Clinicians' estimates of patients' pain are frequently used as a basis for delivering care, and the characteristics of the clinician and of the patient influence this estimate. We studied pain estimation by midwives attending women in uncomplicated labour. Sixty-six practising midwives of varied age, ethnicity and professional experience were asked to complete a trait empathy measure and then to estimate the maximum pain and anxiety experienced by six women whose filmed labour contractions they viewed. Additionally, they rated their similarity in ethnicity to the labouring women, and described their beliefs about pain expression according to ethnicity. Midwife estimates of pain and anxiety were highly correlated. Longer professional experience was associated with lower pain estimates, while a greater number of births to the midwife herself was associated with higher pain estimates. A multiple regression model identified the number of births to the midwife herself, and two components of empathy (perspective taking and identification), as important in predicting midwife pain estimates for women in labour. Midwives expressed clear beliefs about women's expression of pain during labour according to ethnicity, but these beliefs were not consistent across midwives, even between midwives of similar ethnicity. Midwives' personal characteristics can bias the estimation of pain in women in labour and therefore influence treatment. © 2012 European Federation of International Association for the Study of Pain Chapters.
Application of copulas to improve covariance estimation for partial least squares.
D'Angelo, Gina M; Weissfeld, Lisa A
2013-02-20
Dimension reduction techniques, such as partial least squares, are useful for computing summary measures and examining relationships in complex settings. Partial least squares requires an estimate of the covariance matrix as a first step in the analysis, making this estimate critical to the results. In addition, the covariance matrix also forms the basis for other techniques in multivariate analysis, such as principal component analysis and independent component analysis. This paper was motivated by an example from an imaging study in Alzheimer's disease where there is complete separation between Alzheimer's and control subjects for one of the imaging modalities. This separation occurs in one block of variables and does not occur with the second block of variables, resulting in inaccurate estimates of the covariance. We propose the use of a copula to obtain estimates of the covariance in this setting, where one set of variables comes from a mixture distribution. Simulation studies show that the proposed estimator is an improvement over the standard estimators of covariance. We illustrate the methods using the motivating example from a study of Alzheimer's disease. Copyright © 2012 John Wiley & Sons, Ltd.
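The abstract does not give the estimator's details; one common way to build a copula-based covariance estimate is the Gaussian-copula (normal-scores) approach sketched below, which tolerates a mixture-distributed margin. This illustrates the general idea only and is not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import norm, rankdata

def copula_correlation(X):
    """Gaussian-copula (normal-scores) correlation estimate.

    X is an (n, p) data matrix whose margins may be non-normal, e.g. a
    mixture caused by complete group separation. Each column is rank-
    transformed to scores in (0, 1) and mapped through the standard
    normal quantile function before the correlation matrix is computed.
    """
    n, p = X.shape
    Z = np.empty((n, p))
    for j in range(p):
        u = rankdata(X[:, j]) / (n + 1.0)  # uniform scores, strictly in (0, 1)
        Z[:, j] = norm.ppf(u)              # normal scores
    return np.corrcoef(Z, rowvar=False)
```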
NASA Technical Reports Server (NTRS)
Dateo, Christopher E.; Walch, Stephen P.
2002-01-01
As part of NASA Ames Research Center's Integrated Process Team on Device/Process Modeling and Nanotechnology, our goal is to create/contribute to a gas-phase chemical database for use in modeling microelectronics devices. In particular, we use ab initio methods to determine chemical reaction pathways and to evaluate reaction rate coefficients. Our initial studies concern reactions involved in the dichlorosilane-hydrogen (SiCl2H2-H2) and trichlorosilane-hydrogen (SiCl3H-H2) systems. Reactant, saddle point (transition state), and product geometries and their vibrational harmonic frequencies are determined using the complete-active-space self-consistent-field (CASSCF) electronic structure method with the correlation consistent polarized valence double-zeta basis set (cc-pVDZ). Reaction pathways are constructed by following the imaginary frequency mode of the saddle point to both the reactant and product. Accurate energetics are determined using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations (CCSD(T)) extrapolated to the complete basis set limit. Using the data from the electronic structure calculations, reaction rate coefficients are obtained using conventional and variational transition state and RRKM theories.
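As a reminder of what the conventional transition-state-theory step involves, here is a minimal sketch of the canonical TST rate expression; the partition-function ratio is left as a user-supplied input and the function is illustrative, not the authors' code.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(temperature, barrier_kcal_mol, q_ratio=1.0):
    """Canonical transition-state-theory rate coefficient (1/s):

        k(T) = (kB*T/h) * [Q_TS/Q_react] * exp(-E0 / (R*T))

    barrier_kcal_mol is the zero-point-corrected barrier height and
    q_ratio the partition-function ratio from the computed frequencies.
    """
    e0 = barrier_kcal_mol * 4184.0  # kcal/mol -> J/mol
    return (KB * temperature / H) * q_ratio * math.exp(-e0 / (R * temperature))
```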
Page, Morgan T.; Van Der Elst, Nicholas; Hardebeck, Jeanne L.; Felzer, Karen; Michael, Andrew J.
2016-01-01
Following a large earthquake, seismic hazard can be orders of magnitude higher than the long‐term average as a result of aftershock triggering. Because of this heightened hazard, emergency managers and the public demand rapid, authoritative, and reliable aftershock forecasts. In the past, U.S. Geological Survey (USGS) aftershock forecasts following large global earthquakes have been released on an ad hoc basis with inconsistent methods, and in some cases aftershock parameters adapted from California. To remedy this, the USGS is currently developing an automated aftershock product based on the Reasenberg and Jones (1989) method that will generate more accurate forecasts. To better capture spatial variations in aftershock productivity and decay, we estimate regional aftershock parameters for sequences within the García et al. (2012) tectonic regions. We find that regional variations for mean aftershock productivity reach almost a factor of 10. We also develop a method to account for the time‐dependent magnitude of completeness following large events in the catalog. In addition to estimating average sequence parameters within regions, we develop an inverse method to estimate the intersequence parameter variability. This allows for a more complete quantification of the forecast uncertainties and Bayesian updating of the forecast as sequence‐specific information becomes available.
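A minimal sketch of the Reasenberg-Jones rate model underlying such forecasts is shown below. The default parameter values are generic placeholders in the spirit of the original paper, not the regional values estimated in this study.

```python
import numpy as np

def rj_rate(t, mainshock_mag, m_min, a=-1.67, b=0.91, c=0.05, p=1.08):
    """Reasenberg-Jones aftershock rate (events/day with magnitude >= m_min):

        lambda(t) = 10**(a + b*(M_main - m_min)) * (t + c)**(-p)

    t is days since the mainshock; a, b, c, p are sequence parameters.
    """
    return 10.0 ** (a + b * (mainshock_mag - m_min)) * (t + c) ** (-p)

def expected_aftershocks(t1, t2, **kwargs):
    """Expected number of aftershocks between t1 and t2 days (numerical integral)."""
    t = np.linspace(t1, t2, 10001)
    return np.trapz(rj_rate(t, **kwargs), t)

# e.g. expected M>=5 events in the first week after an M7 mainshock:
# expected_aftershocks(0.01, 7.0, mainshock_mag=7.0, m_min=5.0)
```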
Application of cause-and-effect analysis to potentiometric titration.
Kufelnicki, A; Lis, S; Meinrath, G
2005-08-01
A first attempt has been made to interpret physicochemical data from potentiometric titration analysis in accordance with the complete measurement-uncertainty budget approach (bottom-up) of ISO and Eurachem. A cause-and-effect diagram is established and discussed. Titration data for arsenazo III are used as a basis for this discussion. The commercial software Superquad is used and applied within a computer-intensive resampling framework. The cause-and-effect diagram is applied to evaluation of seven protonation constants of arsenazo III in the pH range 2-10.7. The data interpretation is based on empirical probability distributions and their analysis by second-order correct confidence estimates. The evaluated data are applied in the calculation of a speciation diagram including uncertainty estimates using the probabilistic speciation software Ljungskile.
Estimation of State Transition Probabilities: A Neural Network Model
NASA Astrophysics Data System (ADS)
Saito, Hiroshi; Takiyama, Ken; Okada, Masato
2015-12-01
Humans and animals can predict future states on the basis of acquired knowledge. This prediction of the state transition is important for choosing the best action, and the prediction is only possible if the state transition probability has already been learned. However, how our brains learn the state transition probability is unknown. Here, we propose a simple algorithm for estimating the state transition probability by utilizing the state prediction error. We analytically and numerically confirmed that our algorithm is able to learn the probability completely with an appropriate learning rate. Furthermore, our learning rule reproduced experimentally reported psychometric functions and neural activities in the lateral intraparietal area in a decision-making task. Thus, our algorithm might describe the manner in which our brains learn state transition probabilities and predict future states.
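The abstract does not spell out the update rule; a minimal prediction-error sketch consistent with the description might look like the following (the learning rate and uniform initialization are assumptions).

```python
import numpy as np

def learn_transition_probs(state_seq, n_states, eta=0.1):
    """Estimate a state transition matrix from an observed state sequence
    with a delta-rule (prediction-error) update: after seeing s -> s',
    row P[s] is moved toward the one-hot observation of s'.
    """
    P = np.full((n_states, n_states), 1.0 / n_states)  # uniform initial guess
    for s, s_next in zip(state_seq[:-1], state_seq[1:]):
        obs = np.zeros(n_states)
        obs[s_next] = 1.0
        P[s] += eta * (obs - P[s])  # rows remain normalized after each update
    return P

# e.g. learn_transition_probs([0, 1, 1, 2, 0, 1, 2, 2, 0], n_states=3)
```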
Age-Dependent Relationships between Prefrontal Cortex Activation and Processing Efficiency
Motes, Michael A.; Biswal, Bharat B.; Rypma, Bart
2012-01-01
fMRI was used in the present study to examine the neural basis for age-related differences in processing efficiency, particularly targeting prefrontal cortex (PFC). During scanning, older and younger participants completed a processing efficiency task in which they determined on each trial whether a symbol-number pair appeared in a simultaneously presented array of nine symbol-number pairs. Estimates of task-related BOLD signal-change were obtained for each participant. These estimates were then correlated with the participants’ performance on the task. For younger participants, BOLD signal-change within PFC decreased with better performance, but for older participants, BOLD signal-change within PFC increased with better performance. The results support the hypothesis that the availability and use of PFC resources mediates age-related changes in processing efficiency. PMID:22792129
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.
Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L
2013-08-13
United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) whether a current population size was given, (2) whether a measure of uncertainty or variance was associated with current estimates of population size, and (3) whether population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty than those for reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identify incentives for individuals to get involved in recovery planning to improve access to quantitative data.
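As an illustration of the suggested minimum-detectable-difference calculation, here is a standard two-sided z-test formulation; the significance level and power defaults are assumptions, not values from the study.

```python
from scipy.stats import norm

def minimum_detectable_difference(se1, se2, alpha=0.05, power=0.80):
    """Smallest change between two population estimates that a two-sided
    z-test would detect at the given significance level and power:

        MDD = (z_{1-alpha/2} + z_{power}) * sqrt(se1**2 + se2**2)

    se1 and se2 are the standard errors of the two estimates.
    """
    z_a = norm.ppf(1.0 - alpha / 2.0)
    z_b = norm.ppf(power)
    return (z_a + z_b) * (se1 ** 2 + se2 ** 2) ** 0.5
```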
Estimating precise metallicity and stellar mass evolution of galaxies
NASA Astrophysics Data System (ADS)
Mosby, Gregory
2018-01-01
The evolution of galaxies can be conveniently broken down into the evolution of their contents. The changing dust, gas, and stellar content, in addition to the changing dark matter potential and periodic feedback from a supermassive black hole, are some of the key ingredients. We focus on the stellar content that can be observed, as the stars reflect information about the galaxy when they were formed. We approximate the stellar content and star formation histories of unresolved galaxies using stellar population modeling. Though simplistic, this approach allows us to reconstruct the star formation histories of galaxies that can be used to test models of galaxy formation and evolution. These models, however, suffer from degeneracies at large lookback times (t > 1 Gyr) as red, low luminosity stars begin to dominate a galaxy's spectrum. Additionally, degeneracies between stellar populations at different ages and metallicities often make stellar population modeling less precise. The machine learning technique diffusion k-means has been shown to increase the precision in stellar population modeling using a mono-metallicity basis set. However, as galaxies evolve, we expect the metallicity of stellar populations to vary. We use diffusion k-means to generate a multi-metallicity basis set to estimate the stellar mass and chemical evolution of unresolved galaxies. Two basis sets are formed from the Bruzual & Charlot 2003 and MILES stellar population models. We then compare the accuracy and precision of these models in recovering complete (stellar mass and metallicity) histories of mock data. Similarities in the groupings of stellar population spectra in the diffusion maps for each metallicity hint at fundamental age transitions common to both basis sets that can be used to identify stellar populations in a given age range.
Gerhardsson, Lars; Balogh, Istvan; Hambert, Per-Arne; Hjortsberg, Ulf; Karlsson, Jan-Erik
2005-01-01
The aim of the present study was to compare the development of vibration white fingers (VWF) in workers in relation to different ways of estimating exposure, and their relationship to the standard ISO 5349, annex A. Nineteen vibration-exposed male workers (grinding machines) completed a questionnaire followed by a structured interview including questions regarding their estimated exposure to hand-held vibration. Neurophysiological tests such as fractionated nerve conduction velocity in hands and arms, vibrotactile perception thresholds and temperature thresholds were determined. The workers' subjective estimate of the mean daily exposure time to vibrating tools was 192 min (range 18-480 min). The estimated mean exposure time calculated from the consumption of grinding wheels was 42 min (range 18-60 min), approximately a four-fold overestimation (Wilcoxon's signed ranks test, p<0.001). Thus, objective measurements of the exposure time, related to the standard ISO 5349, which in this case were based on the consumption of grinding wheels, will in most cases give a better basis for adequate risk assessment than self-assessed exposure.
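For context, ISO 5349-style risk assessment converts a tool's vibration magnitude and daily trigger time into an 8-hour energy-equivalent exposure. A minimal sketch follows; the formula is the standard A(8) convention, not taken from this paper.

```python
import math

def daily_vibration_exposure(a_hv, exposure_minutes):
    """8-hour energy-equivalent vibration exposure A(8) in m/s^2:

        A(8) = a_hv * sqrt(T / T0), with T0 = 480 min (8 h)

    a_hv is the frequency-weighted r.m.s. hand-arm acceleration of the
    tool and exposure_minutes the actual daily trigger time.
    """
    return a_hv * math.sqrt(exposure_minutes / 480.0)
```

Because exposure scales with the square root of time, the roughly four-fold overestimate of exposure time reported above (192 versus 42 min) corresponds to an A(8) inflated by about a factor of two.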
Coherence in quantum estimation
NASA Astrophysics Data System (ADS)
Giorda, Paolo; Allegra, Michele
2018-01-01
The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e., the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.
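For a pure probe state, the precision bound mentioned above is set by the quantum Fisher information, F_Q = 4(⟨∂ψ|∂ψ⟩ - |⟨ψ|∂ψ⟩|²). A minimal numerical sketch for GHZ-based phase estimation (state and derivative constructed by hand; values are illustrative) recovers the Heisenberg scaling F_Q = N²:

```python
import numpy as np

def qfi_pure(psi, dpsi):
    """Quantum Fisher information of a pure state |psi(phi)>:
    F_Q = 4 * (<dpsi|dpsi> - |<psi|dpsi>|^2), with dpsi = d|psi>/dphi."""
    overlap = np.vdot(psi, dpsi)  # np.vdot conjugates its first argument
    return 4.0 * (np.vdot(dpsi, dpsi).real - abs(overlap) ** 2)

# N-qubit GHZ probe: |psi> = (|0...0> + exp(i*N*phi)|1...1>) / sqrt(2)
N, phi = 4, 0.3
dim = 2 ** N
psi = np.zeros(dim, dtype=complex)
psi[0] = 1.0 / np.sqrt(2.0)
psi[-1] = np.exp(1j * N * phi) / np.sqrt(2.0)
dpsi = np.zeros(dim, dtype=complex)
dpsi[-1] = 1j * N * np.exp(1j * N * phi) / np.sqrt(2.0)

print(qfi_pure(psi, dpsi))  # -> 16.0 = N**2, i.e. Heisenberg scaling
```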
Volumetric calculations in an oil field: The basis method
Olea, R.A.; Pawlowsky, V.; Davis, J.C.
1993-01-01
The basis method for estimating oil reserves in place is compared to a traditional procedure that uses ordinary kriging. In the basis method, auxiliary variables that sum to the net thickness of pay are estimated by cokriging. In theory, the procedure should be more powerful because it makes full use of the cross-correlation between variables and forces the original variables to honor interval constraints. However, at least in our case study, the practical advantages of cokriging for estimating oil in place are marginal. © 1993.
NASA Astrophysics Data System (ADS)
Morton, F. I.
1983-10-01
Reliable estimates of areal evapotranspiration are essential to significant improvements in the science and practice of hydrology. Direct measurements, such as those provided by lysimeters, eddy flux instrumentation or Bowen-ratio instrumentation, give point values, require constant attendance by skilled personnel and are based on unverified assumptions. A critical review of the methods used for estimating areal evapotranspiration indicates that the conventional conceptual techniques, such as those used in current watershed models, are based on assumptions that are completely divorced from reality; and that causal techniques based on processes and interactions in the soil-plant-atmosphere system are not likely to prove useful for another generation. However, the complementary relationship can do much to fill the gap until such time as causal techniques become practicable because it provides the basis for models that permit areal evapotranspiration to be estimated from its effects on the routine climatological observations needed to estimate potential evapotranspiration. Such models have a realistic conceptual and empirical basis, bypass the complexity of the soil-plant system and require no local calibration of coefficients. Therefore, they are falsifiable (i.e. can be tested rigorously) so that errors in the associated assumptions and relationships can be detected and corrected by progressive testing over an ever-widening range of environments. Such a methodology uses the entire world as a laboratory and requires that a correction made to obtain agreement between model and river-basin water budget estimates in one environment be applicable without modification in all other environments. The most recent version of the complementary relationship areal evapotranspiration (CRAE) models is formulated and documented. The reliability of the independent operational estimates of areal evapotranspiration is tested with comparable long-term water-budget estimates for 143 river basins in North America, Africa, Ireland, Australia and New Zealand. The practicality and potential impact of such estimates are demonstrated with examples which show how the availability of such estimates can revitalize the science and practice of hydrology by providing a reliable basis for detailed water-balance studies; for further research on the development of causal models; for hydrological, agricultural and fire hazard forecasts; for detecting the development of errors in hydrometeorological records; for detecting and monitoring the effects of land-use changes; for explaining hydrologic anomalies; and for other better known applications. It is suggested that the collection of the required climatological data by hydrometric agencies could be justified on the grounds that the agencies would gain a technique for quality control and the users would gain by a significant expansion in the information content of the hydrometric data, all at minimal additional expense.
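At the heart of the CRAE models is Bouchet's complementary relationship; a minimal sketch of that core relation is given below. The full CRAE formulation adds detailed energy-balance estimates of both terms, which are omitted here.

```python
def complementary_et(et_potential, et_wet):
    """Areal evapotranspiration from Bouchet's complementary relationship:

        ET_areal = 2 * ET_wet - ET_potential

    et_potential: potential ET estimated from routine climatological
    observations; et_wet: wet-environment ET. Consistent units (e.g. mm).
    """
    return 2.0 * et_wet - et_potential
```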
Bouallègue, Fayçal Ben; Crouzet, Jean-François; Comtat, Claude; Fourcade, Marjolaine; Mohammadi, Bijan; Mariano-Goulart, Denis
2007-07-01
This paper presents an extended 3-D exact rebinning formula in the Fourier space that leads to an iterative reprojection algorithm (iterative FOREPROJ), which enables the estimation of unmeasured oblique projection data on the basis of the whole set of measured data. In first approximation, this analytical formula also leads to an extended Fourier rebinning equation that is the basis for an approximate reprojection algorithm (extended FORE). These algorithms were evaluated on numerically simulated 3-D positron emission tomography (PET) data for the solution of the truncation problem, i.e., the estimation of the missing portions in the oblique projection data, before the application of algorithms that require complete projection data such as some rebinning methods (FOREX) or 3-D reconstruction algorithms (3DRP or direct Fourier methods). By taking advantage of all the 3-D data statistics, the iterative FOREPROJ reprojection provides a reliable alternative to the classical FOREPROJ method, which only exploits the low-statistics nonoblique data. It significantly improves the quality of the external reconstructed slices without loss of spatial resolution. As for the approximate extended FORE algorithm, it clearly exhibits limitations due to axial interpolations, but will require clinical studies with more realistic measured data in order to decide on its pertinence.
A comparison of recharge rates in aquifers of the United States based on groundwater-age data
McMahon, P.B.; Plummer, Niel; Böhlke, J.K.; Shapiro, S.D.; Hinkle, S.R.
2011-01-01
An overview is presented of existing groundwater-age data and their implications for assessing rates and timescales of recharge in selected unconfined aquifer systems of the United States. Apparent age distributions in aquifers determined from chlorofluorocarbon, sulfur hexafluoride, tritium/helium-3, and radiocarbon measurements from 565 wells in 45 networks were used to calculate groundwater recharge rates. Timescales of recharge were defined by 1,873 distributed tritium measurements and 102 radiocarbon measurements from 27 well networks. Recharge rates ranged from < 10 to 1,200 mm/yr in selected aquifers on the basis of measured vertical age distributions and assuming exponential age gradients. On a regional basis, recharge rates based on tracers of young groundwater exhibited a significant inverse correlation with mean annual air temperature and a significant positive correlation with mean annual precipitation. Comparison of recharge derived from groundwater ages with recharge derived from stream base-flow evaluation showed similar overall patterns but substantial local differences. Results from this compilation demonstrate that age-based recharge estimates can provide useful insights into spatial and temporal variability in recharge at a national scale and factors controlling that variability. Local age-based recharge estimates provide empirical data and process information that are needed for testing and improving more spatially complete model-based methods.
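A minimal sketch of an age-based recharge estimate under the classical exponential (Vogel-type) age profile assumed above; the function and variable names are illustrative, not from the study.

```python
import math

def recharge_from_age(age_yr, depth_m, thickness_m, porosity):
    """Recharge rate (m/yr) from a groundwater age measured at depth z,
    assuming the exponential (Vogel-type) age profile in an unconfined
    aquifer of saturated thickness H and porosity n:

        t(z) = (n*H/R) * ln(H / (H - z))  =>  R = n*H*ln(H / (H - z)) / t
    """
    n, H, z, t = porosity, thickness_m, depth_m, age_yr
    return n * H * math.log(H / (H - z)) / t

# e.g. a 25-yr age at 10 m depth in a 30 m thick aquifer with n = 0.3:
# recharge_from_age(25.0, 10.0, 30.0, 0.3) -> ~0.15 m/yr (~150 mm/yr)
```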
A HIGH COVERAGE GENOME SEQUENCE FROM AN ARCHAIC DENISOVAN INDIVIDUAL
Meyer, Matthias; Kircher, Martin; Gansauge, Marie-Theres; Li, Heng; Racimo, Fernando; Mallick, Swapan; Schraiber, Joshua G.; Jay, Flora; Prüfer, Kay; de Filippo, Cesare; Sudmant, Peter H.; Alkan, Can; Fu, Qiaomei; Do, Ron; Rohland, Nadin; Tandon, Arti; Siebauer, Michael; Green, Richard E.; Bryc, Katarzyna; Briggs, Adrian W.; Stenzel, Udo; Dabney, Jesse; Shendure, Jay; Kitzman, Jacob; Hammer, Michael F.; Shunkov, Michael V.; Derevianko, Anatoli P.; Patterson, Nick; Andrés, Aida M.; Eichler, Evan E.; Slatkin, Montgomery; Reich, David; Kelso, Janet; Pääbo, Svante
2013-01-01
We present a DNA library preparation method that has allowed us to reconstruct a high coverage (30X) genome sequence of a Denisovan, an extinct relative of Neandertals. The quality of this genome allows a direct estimation of Denisovan heterozygosity indicating that genetic diversity in these archaic hominins was extremely low. It also allows tentative dating of the specimen on the basis of “missing evolution” in its genome, detailed measurements of Denisovan and Neandertal admixture into present-day human populations, and the generation of a near-complete catalog of genetic changes that swept to high frequency in modern humans since their divergence from Denisovans. PMID:22936568
NASA Astrophysics Data System (ADS)
Tytell, Eric D.
2007-11-01
Engineers and biologists have long desired to compare propulsive performance for fishes and underwater vehicles of different sizes, shapes, and modes of propulsion. Ideally, such a comparison would be made on the basis of either propulsive efficiency, total power output or both. However, estimating the efficiency and power output of self-propelled bodies, and particularly fishes, is methodologically challenging because it requires an estimate of thrust. For such systems traveling at a constant velocity, thrust and drag are equal, and can rarely be separated on the basis of flow measured in the wake. This problem is demonstrated using flow fields from swimming American eels, Anguilla rostrata, measured using particle image velocimetry (PIV) and high-speed video. Eels balance thrust and drag quite evenly, resulting in virtually no wake momentum in the swimming (axial) direction. On average, their wakes resemble those of self-propelled jet propulsors, which have been studied extensively. Theoretical studies of such wakes may provide methods for the estimation of thrust separately from drag. These flow fields are compared with those measured in the wakes of rainbow trout, Oncorhynchus mykiss, and bluegill sunfish, Lepomis macrochirus. In contrast to eels, these fishes produce wakes with axial momentum. Although the net momentum flux must be zero on average, it is neither spatially nor temporally homogeneous; the heterogeneity may provide an alternative route for estimating thrust. This review shows examples of wakes and velocity profiles from the three fishes, indicating challenges in estimating efficiency and power output and suggesting several routes for further experiments. Because these estimates will be complicated, a much simpler method for comparing performance is outlined, using as a point of comparison the power lost producing the wake. This wake power, a component of the efficiency and total power, can be estimated in a straightforward way from the flow fields. Although it does not provide complete information about the performance, it can be used to place constraints on the relative efficiency and cost of transport for the fishes.
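The wake-power idea can be sketched compactly: integrate the kinetic energy left in the wake along a transverse PIV transect and multiply by the swimming speed. The following is a rough illustration of that estimate under stated simplifications, not the review's exact formulation.

```python
import numpy as np

def wake_power_per_span(u, v, y, swim_speed, rho=1000.0):
    """Rough wake-power estimate (W per metre of span) from a 2-D PIV
    transect behind a swimmer, with u and v in the still-water frame.

    The swimmer deposits kinetic energy into the wake at roughly
    swim_speed times the kinetic energy per unit wake length:
    P ~ U * integral over y of 0.5 * rho * (u**2 + v**2).
    """
    ke_per_length = np.trapz(0.5 * rho * (u ** 2 + v ** 2), y)  # J/m^2
    return swim_speed * ke_per_length
```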
Cratering time scales for the Galilean satellites
NASA Technical Reports Server (NTRS)
Shoemaker, E. M.; Wolfe, R. F.
1982-01-01
An attempt is made to estimate the present cratering rate for each Galilean satellite within the correct order of magnitude and to extend the cratering rates back into the geologic past on the basis of evidence from the earth-moon system. For collisions with long and short period comets, the magnitudes and size distributions of the comet nuclei, the distribution of their perihelion distances, and the completeness of discovery are addressed. The diameters and masses of cometary nuclei are assessed, as are crater diameters and cratering rates. The dynamical relations between long period and short period comets are discussed, and the population of Jupiter-crossing asteroids is assessed. Estimated present cratering rates on the Galilean satellites are compared and variations of cratering rate with time are considered. Finally, the consistency of derived cratering time scales with the cratering record of the icy Galilean satellites is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, X. G.; Ning, C. G.; Zhang, S. F.
The measurements of electron density distributions and the binding-energy spectrum of the complete valence shell of cyclopentene (C5H8) using a binary (e,2e) electron momentum spectrometer are reported. The experimental momentum profiles of the valence orbitals are compared with the theoretical distributions calculated using Hartree-Fock and density-functional-theory (DFT) methods with various basis sets. The agreement between theory and experiment for the shape and intensity of the orbital electron momentum distributions is generally good. The DFT calculations employing the B3LYP hybrid functional with a saturated and diffuse AUG-CC-PVTZ basis set provide the better descriptions of the experimental data. Some "turn up" effects in the low momentum region of the measured (e,2e) cross sections compared with the calculations for the 3a″, 2a″, and 3a′ orbitals could be mainly attributed to distorted-wave effects. The pole strengths of the main ionization peaks from the orbitals in the inner valence region are estimated.
Computed potential energy surfaces for chemical reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.
1988-01-01
The minimum energy path for the addition of a hydrogen atom to N2 is characterized in CASSCF/CCI calculations using the (4s3p2d1f/3s2p1d) basis set, with additional single-point calculations at the stationary points of the potential energy surface using the (5s4p3d2f/4s3p2d) basis set. These calculations represent the most extensive ab initio characterization of this system completed to date, yielding a zero-point-corrected barrier for HN2 dissociation of approximately 8.5 kcal mol-1. The lifetime of the HN2 species is estimated from the calculated geometries and energetics using both conventional transition state theory and a method which utilizes an Eckart barrier to compute one-dimensional quantum mechanical tunneling effects. It is concluded that the lifetime of the HN2 species is very short, greatly limiting its role in both termolecular recombination reactions and combustion processes.
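The paper uses an Eckart barrier for the tunneling estimate; as a simpler stand-in that conveys the idea, the Wigner correction to a TST rate needs only the saddle-point imaginary frequency. The sketch below is illustrative and is not the authors' method.

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
C_CM = 2.99792458e10  # speed of light, cm/s

def wigner_correction(imag_freq_cm, temperature):
    """Wigner tunneling correction to a TST rate coefficient:

        kappa = 1 + (1/24) * (h*c*nu / (kB*T))**2

    imag_freq_cm is the magnitude of the saddle-point imaginary
    frequency in cm^-1. This low-order formula is valid only for modest
    tunneling; an Eckart treatment is more reliable for light atoms.
    """
    x = H * C_CM * imag_freq_cm / (KB * temperature)
    return 1.0 + x * x / 24.0
```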
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rossi, Tuomas P., E-mail: tuomas.rossi@alumni.aalto.fi; Sakko, Arto; Puska, Martti J.
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmonics simulations and beyond.
Exploring the floristic diversity of tropical Africa.
Sosef, Marc S M; Dauby, Gilles; Blach-Overgaard, Anne; van der Burgt, Xander; Catarino, Luís; Damen, Theo; Deblauwe, Vincent; Dessein, Steven; Dransfield, John; Droissart, Vincent; Duarte, Maria Cristina; Engledow, Henry; Fadeur, Geoffrey; Figueira, Rui; Gereau, Roy E; Hardy, Olivier J; Harris, David J; de Heij, Janneke; Janssens, Steven; Klomberg, Yannick; Ley, Alexandra C; Mackinder, Barbara A; Meerts, Pierre; van de Poel, Jeike L; Sonké, Bonaventure; Stévart, Tariq; Stoffelen, Piet; Svenning, Jens-Christian; Sepulchre, Pierre; Zaiss, Rainer; Wieringa, Jan J; Couvreur, Thomas L P
2017-03-07
Understanding the patterns of biodiversity distribution and what influences them is a fundamental prerequisite for effective conservation and sustainable utilisation of biodiversity. Such knowledge is increasingly urgent as biodiversity responds to the ongoing effects of global climate change. Nowhere is this more acute than in species-rich tropical Africa, where so little is known about plant diversity and its distribution. In this paper, we use RAINBIO - one of the largest mega-databases of tropical African vascular plant species distributions ever compiled - to address questions about plant and growth form diversity across tropical Africa. The filtered RAINBIO dataset contains 609,776 georeferenced records representing 22,577 species. Growth form data are recorded for 97% of all species. Records are well distributed, but heterogeneous across the continent. Overall, tropical Africa remains poorly sampled. When using sampling units (SU) of 0.5°, just 21 reach appropriate collection density and sampling completeness, and the average number of records per species per SU is only 1.84. Species richness (observed and estimated) and endemism figures per country are provided. Benin, Cameroon, Gabon, Ivory Coast and Liberia appear as the botanically best-explored countries, but none are optimally explored. Forests in the region contain 15,387 vascular plant species, of which 3013 are trees, representing 5-7% of the estimated world's tropical tree flora. The central African forests have the highest endemism rate across Africa, with approximately 30% of species being endemic. The botanical exploration of tropical Africa is far from complete, underlining the need for intensified inventories and digitization. We propose priority target areas for future sampling efforts, mainly focused on Tanzania, Atlantic Central Africa and West Africa. The observed number of tree species for African forests is smaller than estimates from global tree data, suggesting that a significant number of species are yet to be discovered. RAINBIO thus provides a solid basis for more sustainable management and improved conservation of tropical Africa's unique flora, and is important for achieving Objective 1 of the Global Strategy for Plant Conservation 2011-2020.
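Sampling-completeness assessments of this kind often rest on nonparametric richness estimators; a minimal sketch of the bias-corrected Chao1 estimator is shown below as one standard choice, not necessarily the estimator behind the figures quoted above.

```python
def chao1(s_obs, f1, f2):
    """Bias-corrected Chao1 lower-bound estimate of total species richness:

        S_est = S_obs + f1*(f1 - 1) / (2*(f2 + 1))

    s_obs: number of species observed; f1: species recorded exactly once
    (singletons); f2: species recorded exactly twice (doubletons).
    """
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))
```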
Goldstein, Lizabeth A; Connolly Gibbons, Mary Beth; Thompson, Sarah M; Scott, Kelli; Heintz, Laura; Green, Patricia; Thompson, Donald; Crits-Christoph, Paul
2011-07-01
Computerized administration of mental health-related questionnaires has become relatively common, but little research has explored this mode of assessment in "real-world" settings. In the current study, 200 consumers at a community mental health center completed the BASIS-24 via handheld computer as well as paper and pen. Scores on the computerized BASIS-24 were compared with scores on the paper BASIS-24. Consumers also completed a questionnaire which assessed their level of satisfaction with the computerized BASIS-24. Results indicated that the BASIS-24 administered via handheld computer was highly correlated with pen and paper administration of the measure and was generally acceptable to consumers. Administration of the BASIS-24 via handheld computer may allow for efficient and sustainable outcomes assessment, adaptable research infrastructure, and maximization of clinical impact in community mental health agencies.
Izbicki, John A.; Groover, Krishangi D.
2018-03-22
This report describes (1) work done between January 2015 and May 2017 as part of the U.S. Geological Survey (USGS) hexavalent chromium, Cr(VI), background study and (2) the summative-scale approach to be used to estimate the extent of anthropogenic (man-made) Cr(VI) and background Cr(VI) concentrations near the Pacific Gas and Electric Company (PG&E) natural gas compressor station in Hinkley, California. Most of the field work for the study was completed by May 2017. The summative-scale approach and calculation of Cr(VI) background were not well-defined at the time the USGS proposal for the background Cr(VI) study was prepared but have since been refined as a result of data collected as part of this study. The proposed summative scale consists of multiple items, formulated as questions to be answered at each sampled well. Questions that compose the summative scale were developed to address geologic, hydrologic, and geochemical constraints on Cr(VI) within the study area. Each question requires a binary (yes or no) answer. A score of 1 will be assigned for an answer that represents data consistent with anthropogenic Cr(VI); a score of –1 will be assigned for an answer that represents data inconsistent with anthropogenic Cr(VI). The areal extent of anthropogenic Cr(VI) estimated from the summative-scale analyses will be compared with the areal extent of anthropogenic Cr(VI) estimated on the basis of numerical groundwater flow model results, along with particle-tracking analyses. On the basis of these combined results, background Cr(VI) values will be estimated for “Mojave-type” deposits, and other deposits, in different parts of the study area outside the summative-scale mapped extent of anthropogenic Cr(VI).
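Scoring under the proposed summative scale is simple arithmetic; a minimal sketch follows (question wording and any classification thresholds are outside the scope of this illustration).

```python
def summative_score(answers):
    """Summative-scale score for one well: each binary answer contributes
    +1 if it is consistent with anthropogenic Cr(VI) and -1 otherwise.

    answers: iterable of booleans (True = consistent with anthropogenic Cr(VI)).
    """
    return sum(1 if a else -1 for a in answers)

# e.g. a well answering 7 of 9 questions "consistent" scores 7 - 2 = +5
```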
NASA Astrophysics Data System (ADS)
Petersson, George A.; Malick, David K.; Frisch, Michael J.; Braunstein, Matthew
2006-07-01
Examination of the convergence of full valence complete active space self-consistent-field configuration interaction including all single and double excitations (CASSCF-CISD) energies with expansion of the one-electron basis set reveals a pattern very similar to the convergence of single-determinant energies. Calculations on the lowest four singlet states and the lowest four triplet states of N2 with the sequence of n-tuple-ζ augmented polarized (nZaP) basis sets (n = 2, 3, 4, 5, and 6) are used to establish the complete basis set limits. Full configuration-interaction (CI) and core electron contributions must be included for very accurate potential energy surfaces. However, a simple extrapolation scheme that has no adjustable parameters and requires nothing more demanding than CAS(10e-,8orb)-CISD/3ZaP calculations gives the Re, ωe, ωexe, Te, and De for these eight states with rms errors of 0.0006 Å, 4.43 cm-1, 0.35 cm-1, 0.063 eV, and 0.018 eV, respectively.
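For context, two-point complete-basis-set extrapolations commonly assume an inverse-cubic dependence of the correlation energy on the cardinal number n. The sketch below implements that generic form; it is an illustration, not necessarily the parameter-free scheme used in the paper:

    # Two-point CBS extrapolation assuming E(n) = E_CBS + A / n**3.
    def cbs_two_point(e_n, e_m, n, m):
        """Extrapolate correlation energies e_n, e_m from cardinal numbers n < m."""
        return (m**3 * e_m - n**3 * e_n) / (m**3 - n**3)

    # Hypothetical correlation energies (hartree) at n = 3 (3ZaP) and n = 4 (4ZaP)
    print(cbs_two_point(-0.3401, -0.3562, 3, 4))  # ~ -0.368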
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on sticky boards is spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000) which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement of varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
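A sketch of the grid-based estimate implied above; the board geometry and the counts are invented for illustration:

    import math

    # Count mites inside circles centred on each element of a regular grid and
    # scale the total by the fraction of board area that was actually sampled.
    def estimate_total(circle_counts, circle_radius, cell_w, cell_h):
        sampled_fraction = math.pi * circle_radius**2 / (cell_w * cell_h)
        return sum(circle_counts) / sampled_fraction

    # Hypothetical board: 40 grid cells of 5 x 5 cm, circles of 1.5 cm radius
    counts = [3, 7, 2, 0, 5] * 8
    print(round(estimate_total(counts, 1.5, 5.0, 5.0)))  # ~ 481 mites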
NASA Astrophysics Data System (ADS)
van Mourik, Tanja
1999-02-01
The potential energy curves of the rare gas dimers He2, Ne2, and Ar2 have been computed using correlation consistent basis sets ranging from singly augmented aug-cc-pVDZ sets through triply augmented t-aug-cc-pV6Z sets, with the augmented sextuple basis sets being reported herein. Several methods for including electron correlation were investigated, namely Møller-Plesset perturbation theory (MP2, MP3 and MP4) and coupled cluster theory [CCSD and CCSD(T)]. For He2, CCSD(T)/d-aug-cc-pV6Z calculations yield a well depth of 7.35 cm-1 (10.58 K), with an estimated complete basis set (CBS) limit of 7.40 cm-1 (10.65 K). The latter is smaller than the 'exact' well depth (Aziz, R. A., Janzen, A. R., and Moldover, M. R., 1995, Phys. Rev. Lett., 74, 1586) by about 0.2 cm-1 (0.35 K). The Ne2 well depth, computed with the CCSD(T)/d-aug-cc-pV6Z method, is 28.31 cm-1 and the estimated CBS limit is 28.4 cm-1, approximately 1 cm-1 smaller than the empirical potential of Aziz, R. A., and Slaman, M. J., 1989, Chem. Phys., 130, 187. Inclusion of core and core-valence correlation effects has a negligible effect on the Ne2 well depth, decreasing it by only 0.04 cm-1. For Ar2, CCSD(T)/d-aug-cc-pV6Z calculations yield a well depth of 96.2 cm-1. The corresponding HFDID potential of Aziz, R. A., 1993, J. chem. Phys., 99, 4518 predicts a well depth of 99.7 cm-1. Inclusion of core and core-valence effects in Ar2 increases the well depth and decreases the discrepancy by approximately 1 cm-1.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jäger, Benjamin; Hellmann, Robert; Bich, Eckard; Vogel, Eckhard
2016-03-21
A new reference krypton-krypton interatomic potential energy curve was developed by means of quantum-chemical ab initio calculations for 36 interatomic separations. Highly accurate values for the interaction energies at the complete basis set limit were obtained using the coupled-cluster method with single, double, and perturbative triple excitations as well as t-aug-cc-pV5Z and t-aug-cc-pV6Z basis sets including mid-bond functions, with the 6Z basis set being newly constructed for this study. Higher orders of coupled-cluster terms were considered in a successive scheme up to full quadruple excitations. Core-core and core-valence correlation effects were included. Furthermore, relativistic effects were studied not only at a scalar relativistic level using second-order direct perturbation theory, but also utilizing full four-component and Gaunt-effect computations. An analytical pair potential function was fitted to the interaction energies, which is characterized by a depth of 200.88 K with an estimated standard uncertainty of 0.51 K. Thermophysical properties of low-density krypton were calculated for temperatures up to 5000 K. Second and third virial coefficients were obtained from statistical thermodynamics. Viscosity and thermal conductivity as well as the self-diffusion coefficient were computed using the kinetic theory of gases. The theoretical results are compared with experimental data and with results for other pair potential functions from the literature, especially with those calculated from the recently developed ab initio potential of Waldrop et al. [J. Chem. Phys. 142, 204307 (2015)]. Highly accurate experimental viscosity data indicate that both the present ab initio pair potential and the one of Waldrop et al. can be regarded as reference potentials, even though the quantum-chemical methods and basis sets differ. However, the uncertainties of the present potential and of the derived properties are estimated to be considerably lower.
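The second virial coefficients mentioned above follow from the pair potential through the standard statistical-mechanics integral. The sketch below evaluates the classical integral for a Lennard-Jones stand-in whose well depth matches the 200.9 K quoted above; sigma is an assumed value, and the paper itself uses its ab initio potential with quantum corrections:

    import numpy as np

    K_B = 1.380649e-23   # J/K
    N_A = 6.02214076e23  # 1/mol

    def u_lj(r, eps, sigma):
        """Lennard-Jones stand-in for the pair potential, r in metres."""
        return 4.0 * eps * ((sigma / r)**12 - (sigma / r)**6)

    def b2_classical(T, eps, sigma):
        """B2(T) = -2*pi*N_A * integral[(exp(-u/kT) - 1) r^2 dr], in m^3/mol."""
        r = np.linspace(1e-10, 5e-9, 200_000)
        f = (np.exp(-u_lj(r, eps, sigma) / (K_B * T)) - 1.0) * r**2
        dr = r[1] - r[0]
        return -2.0 * np.pi * N_A * np.sum(0.5 * (f[1:] + f[:-1])) * dr

    # Krypton-like parameters: eps/kB = 200.9 K, sigma ~ 3.57 Angstrom (assumed)
    print(b2_classical(300.0, 200.9 * K_B, 3.57e-10) * 1e6)  # cm^3/mol, ~ -70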
2008-01-01
Objective To provide an accurate estimate of violent war deaths. Design Analysis of survey data on mortality, adjusted for sampling bias and censoring, from nationally representative surveys designed to measure population health. Estimated deaths compared with estimates in a database of passive reports. Setting 2002-03 World Health Surveys, in which information was collected from one respondent per household about sibling deaths, including whether such deaths resulted from war injuries. Main outcome measure Estimated deaths from war injuries in 13 countries over 50 years. Results From 1955 to 2002, data from the surveys indicated an estimated 5.4 million violent war deaths (95% confidence interval 3.0 to 8.7 million) in 13 countries, ranging from 7000 in the Democratic Republic of Congo to 3.8 million in Vietnam. From 1995 to 2002 survey data indicate 36 000 war deaths annually (16 000 to 71 000) in the 13 countries studied. Data from passive surveillance, however, indicated a figure of only a third of this. On the basis of the relation between world health survey data and passive reports, we estimate 378 000 global war deaths annually from 1985-94, the last years for which complete passive surveillance data were available. Conclusions The use of data on sibling history from peacetime population surveys can retrospectively estimate mortality from war. War causes more deaths than previously estimated, and there is no evidence to support a recent decline in war deaths. PMID:18566045
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So; Yanai, Takeshi; De Jong, Wibe A.
Coupled-cluster methods including through and up to the connected single, double, triple, and quadruple substitutions (CCSD, CCSDT, and CCSDTQ) have been automatically derived and implemented for sequential and parallel executions for use in conjunction with a one-component third-order Douglas-Kroll (DK3) approximation for relativistic corrections. A combination of the converging electron-correlation methods, the accurate relativistic reference wave functions, and the use of systematic basis sets tailored to the relativistic approximation has been shown to predict the experimental singlet-triplet separations within 0.02 eV (0.5 kcal/mol) for five triatomic hydrides (CH2, NH2+, SiH2, PH2+, and AsH2+), the experimental bond lengths within 0.002 angstroms, rotational constants within 0.02 cm-1, vibration-rotation constants within 0.01 cm-1, centrifugal distortion constants within 2%, harmonic vibration frequencies within 9 cm-1 (0.4%), anharmonic vibrational constants within 2 cm-1, and dissociation energies within 0.03 eV (0.8 kcal/mol) for twenty diatomic hydrides (BH, CH, NH, OH, FH, AlH, SiH, PH, SH, ClH, GaH, GeH, AsH, SeH, BrH, InH, SnH, SbH, TeH, and IH) containing main-group elements across the second through fifth periods of the periodic table. In these calculations, spin-orbit effects on dissociation energies, which were assumed to be additive, were estimated from the measured spin-orbit coupling constants of atoms and diatomic molecules, and an electronic energy in the complete-basis-set, complete-electron-correlation limit has been extrapolated by a formula based, in turn, on the exponential-Gaussian extrapolation formula for the basis set dependence.
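The exponential-Gaussian extrapolation alluded to in the closing sentence is conventionally written in the Petersson-style form below, where fitting energies at three consecutive cardinal numbers n determines the three unknowns; the paper's exact variant may differ:

\[ E(n) = E_{\mathrm{CBS}} + B\, e^{-(n-1)} + C\, e^{-(n-1)^{2}} \]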
Explicit hydration of ammonium ion by correlated methods employing molecular tailoring approach
NASA Astrophysics Data System (ADS)
Singh, Gurmeet; Verma, Rahul; Wagle, Swapnil; Gadre, Shridhar R.
2017-11-01
Explicit hydration studies of ions require accurate estimation of interaction energies. This work explores the explicit hydration of the ammonium ion (NH4+) employing Møller-Plesset second order (MP2) perturbation theory, an accurate yet relatively less expensive correlated method. Several initial geometries of NH4+(H2O)n (n = 4 to 13) clusters are subjected to MP2 level geometry optimisation with the correlation consistent aug-cc-pVDZ (aVDZ) basis set. For large clusters (viz. n > 8), the molecular tailoring approach (MTA) is used for single point energy evaluation at the MP2/aVTZ level for the estimation of MP2 level binding energies (BEs) at the complete basis set (CBS) limit. The minimal nature of the clusters up to n ≤ 8 is confirmed by performing vibrational frequency calculations at the MP2/aVDZ level of theory, whereas for larger clusters (9 ≤ n ≤ 13) such calculations are effected via the grafted MTA (GMTA) method. The zero point energy (ZPE) corrections are done for all the isomers lying within 1 kcal/mol of the lowest energy one. The resulting frequencies in the N-H region (2900-3500 cm-1) and in the O-H stretching region (3300-3900 cm-1) are found to be in excellent agreement with the available experimental findings for 4 ≤ n ≤ 13. Furthermore, GMTA is also applied for calculating the BEs of these clusters at the coupled cluster singles and doubles with perturbative triples (CCSD(T)) level of theory with the aVDZ basis set. This work thus represents an art of the possible on contemporary multi-core computers for studying explicit molecular hydration at correlated levels of theory.
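The composite strategy sketched above rests on a standard additivity assumption: a large-basis MP2 extrapolation supplies the basis-set limit and a small-basis coupled-cluster calculation supplies the correlation correction,

\[ E_b^{\mathrm{CCSD(T)/CBS}} \;\approx\; E_b^{\mathrm{MP2/CBS}} + \big( E_b^{\mathrm{CCSD(T)/aVDZ}} - E_b^{\mathrm{MP2/aVDZ}} \big), \]

with the MTA fragmentation making the individual terms tractable for the larger clusters. This is the generic form of such composite schemes, not a formula quoted from the paper.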
Dixit, Anant; Claudot, Julien; Lebègue, Sébastien; Rocca, Dario
2017-06-07
By using a formulation based on the dynamical polarizability, we propose a novel implementation of second-order Møller-Plesset perturbation (MP2) theory within a plane wave (PW) basis set. Because of the intrinsic properties of PWs, this method is not affected by basis set superposition errors. Additionally, results are converged without relying on complete basis set extrapolation techniques; this is achieved by using the eigenvectors of the static polarizability as an auxiliary basis set to compactly and accurately represent the response functions involved in the MP2 equations. Summations over the large number of virtual states are avoided by using a formalism inspired by density functional perturbation theory, and the Lanczos algorithm is used to include dynamical effects. To demonstrate this method, applications to three weakly interacting dimers are presented.
Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B
2017-01-07
Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
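In symbols (notation assumed here, not taken verbatim from the paper): with effective bin spectra S_i(E), basis-material attenuation functions mu_j(E) and line integrals A_j, the expected counts in energy bin i take a polychromatic Beer-Lambert form, and the decomposition maximizes the Poisson log-likelihood of the measured counts N_i:

\[ \bar{N}_i(\mathbf{A}) = \int S_i(E)\, \exp\!\Big(-\sum_j A_j\, \mu_j(E)\Big)\, dE, \qquad \hat{\mathbf{A}} = \arg\max_{\mathbf{A}} \sum_i \big( N_i \ln \bar{N}_i(\mathbf{A}) - \bar{N}_i(\mathbf{A}) \big). \]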
Racimo, Allison R; Talathi, Nakul S; Zelenski, Nicole A; Wells, Lawrence; Shah, Apurva S
2018-05-02
Price transparency allows patients to make value-based health care decisions and is particularly important for individuals who are uninsured or enrolled in high-deductible health care plans. The availability of consumer prices for children undergoing orthopaedic surgery has not been previously investigated. We aimed to determine the availability of price estimates from hospitals in the United States for an archetypal pediatric orthopaedic surgical procedure (closed reduction and percutaneous pinning of a distal radius fracture) and identify variations in price estimates across hospitals. This prospective investigation utilized a scripted telephone call to obtain price estimates from 50 "top-ranked hospitals" for pediatric orthopaedics and 1 "non-top-ranked hospital" from each state and the District of Columbia. Price estimates were requested using a standardized script, in which an investigator posed as the mother of a child with a displaced distal radius fracture that needed closed reduction and pinning. Price estimates (complete or partial) were recorded for each hospital. The number of calls and the duration of time required to obtain the pricing information were also recorded. Variation was assessed, and hospitals were compared on the basis of ranking, teaching status, and region. Less than half (44%) of the 101 hospitals provided a complete price estimate. The mean price estimate for top-ranked hospitals ($17,813; range, $2742 to $49,063) was 50% higher than the price estimate for non-top-ranked hospitals ($11,866; range, $3623 to $22,967) (P=0.020). Differences in price estimates were attributable to differences in hospital fees (P=0.003), not surgeon fees. Top-ranked hospitals required more calls than non-top-ranked hospitals (4.4±2.9 vs. 2.8±2.3 calls, P=0.003). A longer duration of time was required to obtain price estimates from top-ranked hospitals than from non-top-ranked hospitals (8.2±9.4 vs. 4.1±5.1 d, P=0.024). Price estimates for pediatric orthopaedic procedures are difficult to obtain. Top-ranked hospitals are more expensive and less likely to provide price information than non-top-ranked hospitals, with price differences primarily caused by variation in hospital fees, not surgeon fees. Level II-economic and decision analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-06-21
We report MP2 and CCSD(T) binding energies with basis sets up to pentuple zeta quality for the (H2O)m, m = 2-6, 8 clusters. Our best CCSD(T)/CBS estimates are: -4.99 kcal/mol (dimer), -15.77 kcal/mol (trimer), -27.39 kcal/mol (tetramer), -35.9 ± 0.3 kcal/mol (pentamer), -46.2 ± 0.3 kcal/mol (prism hexamer), -45.9 ± 0.3 kcal/mol (cage hexamer), -45.4 ± 0.3 kcal/mol (book hexamer), -44.3 ± 0.3 kcal/mol (ring hexamer), -73.0 ± 0.5 kcal/mol (D2d octamer) and -72.9 ± 0.5 kcal/mol (S4 octamer). We have found that the percentage of both the uncorrected and BSSE-corrected (counterpoise, CP) binding energies recovered with respect to the CBS limit falls into a narrow range for each basis set for all clusters, and in addition this range was found to decrease upon increasing the basis set. Relatively accurate estimates (within < 0.5%) of the CBS limits can be obtained when using the "2/3, 1/3" (for the AVDZ set) or the "1/2, 1/2" (for the AVTZ, AVQZ and AV5Z sets) mixing ratio between the uncorrected and CP-corrected binding energies. Based on those findings we propose an accurate and efficient computational protocol that can be used to estimate accurate binding energies of clusters at the MP2 (for up to 100 molecules) and CCSD(T) (for up to 30 molecules) levels of theory. This work was supported by the US Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multi program national laboratory operated for DOE by Battelle. This research also used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. AC02-05CH11231.
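A sketch of the mixing recipe quoted above; the dimer energies are hypothetical stand-ins:

    # Estimate a CBS-limit binding energy by mixing the uncorrected (e) and
    # counterpoise-corrected (e_cp) values with the basis-dependent weights
    # quoted in the abstract ("2/3, 1/3" for AVDZ, "1/2, 1/2" otherwise).
    WEIGHTS = {"AVDZ": (2 / 3, 1 / 3), "AVTZ": (0.5, 0.5),
               "AVQZ": (0.5, 0.5), "AV5Z": (0.5, 0.5)}

    def mixed_binding_energy(e, e_cp, basis):
        w_e, w_cp = WEIGHTS[basis]
        return w_e * e + w_cp * e_cp

    # Hypothetical water-dimer values in kcal/mol
    print(mixed_binding_energy(-5.21, -4.61, "AVDZ"))  # -> -5.01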
Estimates of ground-water recharge based on streamflow-hydrograph methods: Pennsylvania
Risser, Dennis W.; Conger, Randall W.; Ulrich, James E.; Asmussen, Michael P.
2005-01-01
This study, completed by the U.S. Geological Survey (USGS) in cooperation with the Pennsylvania Department of Conservation and Natural Resources, Bureau of Topographic and Geologic Survey (T&GS), provides estimates of ground-water recharge for watersheds throughout Pennsylvania computed by use of two automated streamflow-hydrograph-analysis methods--PART and RORA. The PART computer program uses a hydrograph-separation technique to divide the streamflow hydrograph into components of direct runoff and base flow. Base flow can be a useful approximation of recharge if losses and interbasin transfers of ground water are minimal. The RORA computer program uses a recession-curve displacement technique to estimate ground-water recharge from each storm period indicated on the streamflow hydrograph. Recharge estimates were made using streamflow records collected during 1885-2001 from 197 active and inactive streamflow-gaging stations in Pennsylvania where streamflow is relatively unaffected by regulation. Estimates of mean-annual recharge in Pennsylvania computed by the use of PART ranged from 5.8 to 26.6 inches; estimates from RORA ranged from 7.7 to 29.3 inches. Estimates from the RORA program were about 2 inches greater than those derived from the PART program. Mean-monthly recharge was computed from the RORA program and was reported as a percentage of mean-annual recharge. On the basis of this analysis, the major ground-water recharge period in Pennsylvania typically is November through May; the greatest monthly recharge typically occurs in March.
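For orientation, hydrograph separation can be illustrated with a crude fixed-window local-minimum scheme. This is only a toy analogue, not the PART or RORA algorithms, which apply antecedent-recession tests and recession-curve displacement, respectively:

    import numpy as np

    def baseflow_local_minimum(q, window=5):
        """Crude base-flow trace: connect local minima of the daily hydrograph."""
        q = np.asarray(q, dtype=float)
        idx = [i for i in range(len(q))
               if q[i] == q[max(0, i - window):i + window + 1].min()]
        return np.interp(np.arange(len(q)), idx, q[idx])

    # Hypothetical daily streamflow (cubic feet per second)
    flow = [10, 9, 8, 30, 22, 15, 12, 10, 9, 8, 8, 25, 18, 13, 11, 10]
    print(baseflow_local_minimum(flow).round(1))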
Weight and cost forecasting for advanced manned space vehicles
NASA Technical Reports Server (NTRS)
Williams, Raymond
1989-01-01
A mass and cost estimating computerized methodology for predicting advanced manned space vehicle weights and costs was developed. The user-friendly methodology, designated MERCER (Mass Estimating Relationship/Cost Estimating Relationship), organizes the predictive process according to major vehicle subsystem levels. Design, development, test, evaluation, and flight hardware cost forecasting is treated by the study. This methodology consists of a complete set of mass estimating relationships (MERs) which serve as the control components for the model and cost estimating relationships (CERs) which use MER output as input. To develop this model, numerous MER and CER studies were surveyed and modified where required. Additionally, relationships were regressed from raw data to accommodate the methodology. The models and formulations which estimated the cost of historical vehicles to within 20 percent of the actual cost were selected. The results of the research, along with components of the MERCER Program, are reported. On the basis of the analysis, the following conclusions were established: (1) The cost of a spacecraft is best estimated by summing the cost of individual subsystems; (2) No one cost equation can be used for forecasting the cost of all spacecraft; (3) Spacecraft cost is highly correlated with its mass; (4) No study surveyed contained sufficient formulations to autonomously forecast the cost and weight of the entire advanced manned vehicle spacecraft program; (5) No user-friendly program was found that linked MERs with CERs to produce spacecraft cost; and (6) The group accumulation weight estimation method (summing the estimated weights of the various subsystems) proved to be a useful method for finding total weight and cost of a spacecraft.
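The MER-to-CER chaining can be illustrated with a toy power-law cost model; the coefficients below are invented for illustration and are not MERCER's actual regressions:

    # Toy chain: subsystem masses (from MERs) feed power-law CERs of the form
    # cost($M) = a * mass(kg)**b, and vehicle cost is the sum over subsystems.
    CERS = {"structure": (0.08, 0.85), "power": (0.25, 0.70),
            "avionics": (1.10, 0.55)}  # hypothetical (a, b) pairs

    def vehicle_cost(masses_kg):
        return sum(a * masses_kg[s]**b for s, (a, b) in CERS.items())

    print(round(vehicle_cost({"structure": 4200, "power": 900, "avionics": 350}), 1))
    # ~ 153 ($M, toy numbers)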
NASA Astrophysics Data System (ADS)
Stanton, R. W.; Burruss, R. C.; Flores, R. M.; Warwick, P. D.
2001-05-01
Subsurface environments for geologic storage of CO2 from combustion of fossil fuel include saline formations, depleted oil and gas reservoirs, and unmineable coalbeds. Of these environments, storage in petroleum reservoirs and coal beds offers a potential economic benefit of enhanced recovery of energy resources. Meaningful assessment of the volume and geographic distribution of storage sites requires quantitative estimates of geologic factors that control storage capacity. The factors that control the storage capacity of unmineable coalbeds are poorly understood. In preparation for a USGS assessment of CO2 storage capacity we have begun new measurements of CO2 and CH4 adsorption isotherms of low-rank coal samples from 4 basins. Initial results for 13 samples of low-rank coal beds from the Powder River Basin (9 subbituminous coals), Greater Green River Basin (1 subbituminous coal), Williston Basin (2 lignites) and the Gulf Coast (1 lignite) indicate that their CO2 adsorption capacity is up to 10 times higher than their CH4 adsorption capacity. These values contrast with published measurements of the CO2 adsorption capacity of bituminous coals from the Fruitland Formation, San Juan basin, and Gates Formation, British Columbia, which indicate that about twice as much carbon dioxide as methane can be adsorbed on coals. Because CH4 adsorption isotherms are commonly measured on coals, CO2 adsorption capacity can be estimated if the correct relationship between the gases is known. However, using a factor for CO2 adsorption of twice the CH4 adsorption, as is common in the published literature, grossly underestimates the storage capacity of widely distributed, thick low-rank coal beds. Complete petrographic and chemical characterization of these low-rank coal samples is in progress. Significant variations in adsorption measurements among samples are observed depending on the reporting basis used. Properties were measured on an "as received" (moist) basis but can be converted to a dry basis, ash-free basis (moist), or dry ash-free basis to emphasize the property having the greatest effect on the adsorption isotherm. Initial results show that moisture content has a strong effect on CO2 adsorption. Our current sample base covers a limited range of coal rank and composition. Full characterization of the storage capacity of coalbeds in the US will require additional samples that cover a broader range of coal compositions, ranks, and depositional environments. Even at this preliminary stage, we can use results from the recent USGS assessment of the Powder River Basin (Wyoming and Montana) to examine the impact of these new measurements on estimates of storage capacity. At depths greater than 500 feet, the Wyodak-Anderson coal zone contains 360 billion metric tons of coal. Using the new measurements of CO2 storage capacity, this coal zone could, theoretically, sequester about 290 trillion cubic feet (TCF) of CO2. This estimate contrasts sharply with an estimated capacity of 70 TCF based on the published values for bituminous coals.
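The closing estimate can be sanity-checked with unit arithmetic; the per-tonne capacity below is back-calculated from the quoted totals rather than a measured isotherm value:

    # 360 billion tonnes of coal storing ~290 TCF of CO2 implies the average
    # per-tonne adsorption capacity at surface conditions.
    coal_tonnes = 360e9
    co2_m3 = 290e12 * 0.0283168      # TCF -> cubic metres
    print(co2_m3 / coal_tonnes)      # ~23 m^3 of CO2 per tonne of coal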
NASA Astrophysics Data System (ADS)
Di Giacomo, Domenico; Bondár, István; Storchak, Dmitry A.; Engdahl, E. Robert; Bormann, Peter; Harris, James
2015-02-01
This paper outlines the re-computation and compilation of the magnitudes now contained in the final ISC-GEM Reference Global Instrumental Earthquake Catalogue (1900-2009). The catalogue is available via the ISC website (http://www.isc.ac.uk/iscgem/). The available re-computed MS and mb provided an ideal basis for deriving new conversion relationships to moment magnitude MW. Therefore, rather than using previously published regression models, we derived new empirical relationships using both generalized orthogonal linear and exponential non-linear models to obtain MW proxies from MS and mb. The new models were tested against true values of MW, and the newly derived exponential models were then preferred to the linear ones in computing MW proxies. For the final magnitude composition of the ISC-GEM catalogue, we preferred directly measured MW values as published by the Global CMT project for the period 1976-2009 (plus intermediate-depth earthquakes between 1962 and 1975). In addition, over 1000 publications have been examined to obtain direct seismic moment M0 and, therefore, also MW estimates for 967 large earthquakes during 1900-1978 (Lee and Engdahl, 2015) by various alternative methods to the current GCMT procedure. In all other instances we computed MW proxy values by converting our re-computed MS and mb values into MW, using the newly derived non-linear regression models. The final magnitude composition is an improvement in terms of magnitude homogeneity compared to previous catalogues. The magnitude completeness is not homogeneous over the 110 years covered by the ISC-GEM catalogue. Therefore, seismicity rate estimates may be strongly affected without a careful time window selection. In particular, the ISC-GEM catalogue appears to be complete down to MW 5.6 starting from 1964, whereas for the early instrumental period the completeness varies from ∼7.5 to 6.2. Further time and resources would be necessary to homogenize the magnitude of completeness over the entire catalogue length.
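The exponential proxy models take the general form below; the fitted coefficients are given in the paper and are not reproduced here:

\[ M_W = \exp(a + b\, M_S) + c, \qquad M_W = \exp(a' + b'\, m_b) + c', \]

a form that accommodates the curvature of the MS-MW and mb-MW relations better than a single linear regression over the full magnitude range.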
Highly correlated configuration interaction calculations on water with large orbital bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almora-Díaz, César X.
2014-05-14
A priori selected configuration interaction (SCI) with truncation energy error [C. F. Bunge, J. Chem. Phys. 125, 014107 (2006)] and CI by parts [C. F. Bunge and R. Carbó-Dorca, J. Chem. Phys. 125, 014108 (2006)] are used to approximate the total nonrelativistic electronic ground state energy of water at fixed experimental geometry with CI up to sextuple excitations. Correlation-consistent polarized core-valence basis sets (cc-pCVnZ) up to sextuple zeta and augmented correlation-consistent polarized core-valence basis sets (aug-cc-pCVnZ) up to quintuple zeta quality are employed. Truncation energy errors range between less than 1 μhartree and 100 μhartree for the largest orbital set. Coupled cluster CCSD and CCSD(T) calculations are also obtained for comparison. Our best upper bound, −76.4343 hartree, obtained by SCI with up to sextuple excitations with a cc-pCV6Z basis, recovers more than 98.8% of the correlation energy of the system, and it is only about 3 kcal/mol above the "experimental" value. Although the present energy upper bounds are far below all previous ones, comparatively large dispersion errors in the determination of the energies extrapolated to the complete basis set do not allow a reliable estimate of the full CI energy with an accuracy better than 0.6 mhartree (0.4 kcal/mol).
Optical Studies of Orbital Debris at GEO Using Two Telescopes
NASA Technical Reports Server (NTRS)
Seitzer, P.; Abercromby, K. J.; Rodriquez,H. M.; Barker, E.
2008-01-01
Beginning in March 2007, optical observations of debris at geosynchronous orbit (GEO) were commenced using two telescopes simultaneously at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. The University of Michigan's 0.6/0.9-m Schmidt telescope MODEST (for Michigan Orbital DEbris Survey Telescope) was used in survey mode to find objects that potentially could be at GEO. Because GEO objects only appear in this telescope's field of view for an average of 5 minutes, a full six-parameter orbit cannot be determined. Interrupting the survey for follow-up observations leads to incompleteness in the survey results. Instead, as objects are detected on MODEST, initial predictions assuming a circular orbit are done for where the object will be for the next hour, and the objects are reacquired as quickly as possible on the CTIO 0.9-m telescope. This second telescope then follows up during the first night and, if possible, over several more nights to obtain the maximum time arc possible, and the best six-parameter orbit. Our goal is to obtain an initial orbit for all detected objects fainter than R = 15th magnitude in order to estimate the orbital distribution of objects selected on the basis of two observational criteria: magnitude and angular rate. Objects fainter than 15th magnitude are largely uncataloged and have a completely different angular rate distribution than brighter objects. Combining the information obtained for both faint and bright objects yields a more complete picture of the debris environment rather than just concentrating on the faint debris. One objective is to estimate what fraction of objects selected on the basis of angular rate are not at GEO. A second objective is to obtain magnitudes and colors in standard astronomical filters (BVRI) for comparison with reflectance spectra of likely spacecraft materials. This paper reports on results from two 14-night runs with both telescopes, in March and November 2007: (1) A significant fraction of objects fainter than R = 15th magnitude have eccentric orbits (e > 0.1). (2) Virtually all objects selected on the basis of angular rate are in the GEO and GTO regimes. (3) Calibrated magnitudes and colors in BVRI were obtained for many objects fainter than R = 15th magnitude. This work is supported by NASA's Orbital Debris Program Office, Johnson Space Center, Houston, Texas, USA.
A Twenty-Year Survey of Novae in M31
NASA Astrophysics Data System (ADS)
Crayton, Hannah; Rector, Travis A.; Walentosky, Matthew J.; Shafter, Allen W.; Lauber, Stephanie; Pilachowski, Catherine A.; RBSE Nova Search Team
2018-06-01
Numerous surveys of M31 in search of extragalactic novae have been completed over the last century, with a total of more than 1000 having been discovered during this time. From these surveys it has been estimated that the number of novae that occur in M31 is approximately 65 yr-1 (Darnley et al. 2006). A fraction of these are recurrent novae that recur on timescales of years to decades (Shafter et al. 2015). From 1997 to 2017 we completed observations of M31 with the KPNO/WIYN 0.9-meter telescope, which offers a wide field of view suitable for surveying nearly all of the bulge and much of the disk of M31. Observations were completed in Hα so as to better detect novae in the bulge of the galaxy, where most novae reside. Our survey achieves a limiting absolute magnitude per epoch of MHα ∼ 7.5 mag, which prior M31 nova surveys in Hα (e.g., Ciardullo et al. 1987; Shafter & Irby 2001) have shown to be sufficiently deep to detect a typical nova several months after eruption. By completing nearly all of the observations with the same telescope, cameras, and filters we were able to obtain a remarkably consistent dataset. Our survey offers several benefits as compared to prior surveys. Nearly 200 epochs of observations were completed during the survey period. Observations were typically completed on a monthly basis, although on several occasions we completed weekly and nightly observations to search for novae with faster decay rates. Thus we were sensitive to most of the novae that erupted in M31 during the survey period. Over twenty years we detected 316 novae. Our survey found 85% of the novae in M31 that were reported by other surveys completed during the same time range and in the same survey area as ours (Pietsch et al. 2007). We also discovered 39 novae that were not found by other surveys. We present the complete catalog of novae from our survey, along with example light curves. Among other uses, our catalog will be useful for improving estimates of the nova rate in M31. We also identify 72 standard stars within the survey area that will be useful for future surveys.
Earthquake rupture below the brittle-ductile transition in continental lithospheric mantle
Prieto, Germán A.; Froment, Bérénice; Yu, Chunquan; Poli, Piero; Abercrombie, Rachel
2017-01-01
Earthquakes deep in the continental lithosphere are rare and hard to interpret in our current understanding of temperature control on brittle failure. The recent lithospheric mantle earthquake with a moment magnitude of 4.8 at a depth of ~75 km in the Wyoming Craton was exceptionally well recorded and thus enabled us to probe the cause of these unusual earthquakes. On the basis of complete earthquake energy balance estimates using broadband waveforms and temperature estimates using surface heat flow and shear wave velocities, we argue that this earthquake occurred in response to ductile deformation at temperatures above 750°C. The high stress drop, low rupture velocity, and low radiation efficiency are all consistent with a dissipative mechanism. Our results imply that earthquake nucleation in the lithospheric mantle is not exclusively limited to the brittle regime; weakening mechanisms in the ductile regime can allow earthquakes to initiate and propagate. This finding has significant implications for understanding deep earthquake rupture mechanics and rheology of the continental lithosphere. PMID:28345055
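The energy-balance argument rests on the conventional definition of radiation efficiency, which compares the radiated energy E_R with the total available energy set by the stress drop Δσ and seismic moment M0 (μ is the rigidity, E_G the dissipated fracture energy); this is the standard relation from earthquake source studies, not one restated in the abstract:

\[ \eta_R \;=\; \frac{E_R}{E_R + E_G} \;=\; \frac{2\,\mu\, E_R}{\Delta\sigma\, M_0}, \]

so a high stress drop combined with modest radiated energy yields the low radiation efficiency that signals a dissipative rupture.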
Estimation of population trajectories from count data
Link, W.A.; Sauer, J.R.
1997-01-01
Monitoring of changes in animal population size is rarely possible through complete censuses; frequently, the only feasible means of monitoring changes in population size is to use counts of animals obtained by skilled observers as indices to abundance. Analysis of changes in population size can be severely biased if factors related to the acquisition of data are not adequately controlled for. In particular we identify two types of observer effects: these correspond to baseline differences in observer competence, and to changes through time in the ability of individual observers. We present a family of models for count data in which the first of these observer effects is treated as a nuisance parameter. Conditioning on totals of negative binomial counts yields a Dirichlet compound multinomial vector for each observer. Quasi-likelihood is used to estimate parameters related to population trajectory and other parameters of interest; model selection is carried out on the basis of Akaike's information criterion. An example is presented using data on Wood thrush from the North American Breeding Bird Survey.
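A simplified analogue of the model class described, treating baseline observer competence as nuisance intercepts in a count regression. The paper's actual model is a Dirichlet compound multinomial fitted by quasi-likelihood; the Poisson sketch below only illustrates the nuisance-parameter idea:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    years = np.tile(np.arange(10.0), 3)            # 10 survey years, 3 observers
    observer = np.repeat([0, 1, 2], 10)
    skill = np.array([0.2, 0.0, -0.3])[observer]   # baseline observer competence
    counts = rng.poisson(np.exp(2.0 + skill - 0.05 * years))

    # Design matrix: one intercept per observer (nuisance) plus a common trend
    X = np.column_stack([(observer == k).astype(float) for k in (0, 1, 2)] + [years])
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    print(fit.params[-1])   # estimated log-linear trend, near the true -0.05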
The Installation Restoration Program Toxicology Guide. Volume 3
1989-07-01
[Front-matter residue; list of tables, Volume 4 (cont.): 69-3 Chemical Additives, p. 69-15; 69-4 Acute Toxicity of Components of Mineral Base ...] Footnote: values were estimated by Arthur D. Little, Inc. using Kow as the basis of estimation (see Introduction, Vol. 1).
Estimating propagation velocity through a surface acoustic wave sensor
Xu, Wenyuan; Huizinga, John S.
2010-03-16
Techniques are described for estimating the propagation velocity through a surface acoustic wave sensor. In particular, techniques which measure and exploit a proper segment of phase frequency response of the surface acoustic wave sensor are described for use as a basis of bacterial detection by the sensor. As described, use of velocity estimation based on a proper segment of phase frequency response has advantages over conventional techniques that use phase shift as the basis for detection.
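Under the delay-line idealization, the phase response of an acoustic path of length L is φ(f) = 2πfL/v, so the velocity follows from the slope of the chosen phase-frequency segment. A sketch with illustrative parameters:

    import numpy as np

    def velocity_from_phase(freq_hz, phase_rad, path_len_m):
        """Fit phi(f) = 2*pi*f*L/v over the segment; return v = 2*pi*L/slope."""
        slope = np.polyfit(freq_hz, phase_rad, 1)[0]   # d(phi)/df in rad/Hz
        return 2.0 * np.pi * path_len_m / slope

    # Synthetic segment near 100 MHz for a 4 mm path at v = 3158 m/s
    f = np.linspace(99e6, 101e6, 50)
    phi = 2.0 * np.pi * f * 4e-3 / 3158.0
    print(velocity_from_phase(f, phi, 4e-3))   # ~ 3158.0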
Lunar Architecture Team - Phase 2 Habitat Volume Estimation: "Caution When Using Analogs"
NASA Technical Reports Server (NTRS)
Rudisill, Marianne; Howard, Robert; Griffin, Brand; Green, Jennifer; Toups, Larry; Kennedy, Kriss
2008-01-01
The lunar surface habitat will serve as the astronauts' home on the moon, providing a pressurized facility for all crew living functions and serving as the primary location for a number of crew work functions. Adequate volume is required for each of these functions in addition to that devoted to housing the habitat systems and crew consumables. The time constraints of the LAT-2 schedule precluded the Habitation Team from conducting a complete "bottom-up" design of a lunar surface habitation system from which to derive true volumetric requirements. The objective of this analysis was to quickly derive an estimated total pressurized volume and pressurized net habitable volume per crewmember for a lunar surface habitat, using a principled, methodical approach in the absence of a detailed design. Five "heuristic methods" were used: historical spacecraft volumes, human/spacecraft integration standards and design guidance, Earth-based analogs, parametric "sizing" tools, and conceptual point designs. Estimates for total pressurized volume, total habitable volume, and volume per crewmember were derived using these methods. All methods were found to provide some basis for volume estimates, but values were highly variable across a wide range, with no obvious convergence of values. Best current assumptions for required crew volume were provided as a range. Results of these analyses and future work are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fanourgakis, Georgios S.; Apra, Edoardo; Xantheas, Sotiris S.
2004-08-08
We report estimates of complete basis set (CBS) limits at the second-order Møller-Plesset perturbation level of theory (MP2) for the binding energies of the lowest lying isomers within each of the four major families of minima of (H2O)20. These were obtained by performing MP2 calculations with the family of correlation-consistent basis sets up to quadruple zeta quality, augmented with additional diffuse functions (aug-cc-pVnZ, n=D, T, Q). The MP2/CBS estimates are: -200.1 kcal/mol (dodecahedron, 30 hydrogen bonds), -212.6 kcal/mol (fused cubes, 36 hydrogen bonds), -215.0 kcal/mol (face-sharing pentagonal prisms, 35 hydrogen bonds) and -217.9 kcal/mol (edge-sharing pentagonal prisms, 34 hydrogen bonds). The energetic ordering of the various (H2O)20 isomers does not follow monotonically the number of hydrogen bonds as in the case of smaller clusters such as the different isomers of the water hexamer. The dodecahedron lies ca. 18 kcal/mol higher in energy than the most stable edge-sharing pentagonal prism isomer. The TIP4P, ASP-W4, TTM2-R, AMOEBA and TTM2-F empirical potentials also predict the energetic stabilization of the edge-sharing pentagonal prisms with respect to the dodecahedron, albeit they universally underestimate the cluster binding energies with respect to the MP2/CBS result. Among them, the TTM2-F potential was found to predict the absolute cluster binding energies to within < 1% of the corresponding MP2/CBS values, whereas the error for the rest of the potentials considered in this study ranges from 3-5%.
On nonstationarity-related errors in modal combination rules of the response spectrum method
NASA Astrophysics Data System (ADS)
Pathak, Shashank; Gupta, Vinay K.
2017-10-01
Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for the practicing engineers, modal combination rules play central role in the peak response estimation. Most of the available modal combination rules are however based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for the estimation of the extents to which nonstationarity affects the modal and total system responses, when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant, when strong-motion duration of the excitation is too small compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
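For reference, the CQC estimate combines modal peak responses r_i through modal correlation coefficients ρ_ij:

\[ r_{\max} \;\approx\; \sqrt{\sum_i \sum_j \rho_{ij}\, r_i\, r_j\,}, \]

and the study's point is that the peak-factor assumptions implicit in this combination treat nonstationarity identically at the modal and total-response levels, which breaks down for short strong-motion durations or for responses spread comparably over several modes.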
Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2017-05-01
The detection and pose estimation of vehicles play an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle that problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: the vehicle detection and the modelling step. For the detection, we make use of the 3D stereo information and incorporate geometric assumptions on vehicle-inherent properties in a generic 3D object detection applied first. By combining our generic detection approach with a state-of-the-art vehicle detector, we are able to achieve satisfying detection results, with values for completeness and correctness of more than 86%. By fitting an object-specific vehicle model into the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, we make use of a deformable 3D active shape model learned from 3D CAD vehicle data in our model fitting approach. While we achieve encouraging values of up to 67.2% for correct position estimates, we face larger problems concerning the orientation estimation. The evaluation is done by using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).
2005-07-26
Audit Report: Cost-to-Complete Estimates and Financial Reporting for the Management of the Iraq Relief and Reconstruction Fund. Memorandum for the Director, Iraq Reconstruction Management Office, and the Director, Project and Contracting Office.
Modern psychometrics for assessing achievement goal orientation: a Rasch analysis.
Muis, Krista R; Winne, Philip H; Edwards, Ordene V
2009-09-01
A program of research is needed that assesses the psychometric properties of instruments designed to quantify students' achievement goal orientations to clarify inconsistencies across previous studies and to provide a stronger basis for future research. We conducted traditional psychometric and modern Rasch-model analyses of the Achievement Goals Questionnaire (AGQ, Elliot & McGregor, 2001) and the Patterns of Adaptive Learning Scale (PALS, Midgley et al., 2000) to provide an in-depth analysis of the two most popular instruments in educational psychology. For Study 1, 217 undergraduate students enrolled in educational psychology courses participated. Thirty-four were male and 181 were female (two did not respond). Participants completed the AGQ in the context of their educational psychology class. For Study 2, 126 undergraduate students enrolled in educational psychology courses participated. Thirty were male and 95 were female (one did not respond). Participants completed the PALS in the context of their educational psychology class. Traditional psychometric assessments of the AGQ and PALS replicated previous studies. For both, reliability estimates ranged from good to very good for raw subscale scores and fit for the models of goal orientations were good. Based on traditional psychometrics, the AGQ and PALS are valid and reliable indicators of achievement goals. Rasch analyses revealed that estimates of reliability for items were very good but respondent ability estimates varied from poor to good for both the AGQ and PALS. These findings indicate that items validly and reliably reflect a group's aggregate goal orientation, but using either instrument to characterize an individual's goal orientation is hazardous.
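For reference, the dichotomous Rasch model expresses the probability that person p endorses item i in terms of person ability θp and item difficulty bi; Likert-type goal items such as those on the AGQ and PALS require a polytomous extension (e.g., the rating scale model) of the same form:

\[ P(X_{pi} = 1 \mid \theta_p, b_i) \;=\; \frac{e^{\,\theta_p - b_i}}{1 + e^{\,\theta_p - b_i}}. \]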
CO2 and CO emission rates from three forest fire controlled experiments in Western Amazonia
NASA Astrophysics Data System (ADS)
Carvalho, J. A., Jr.; Amaral, S. S.; Costa, M. A. M.; Soares Neto, T. G.; Veras, C. A. G.; Costa, F. S.; van Leeuwen, T. T.; Krieger Filho, G. C.; Tourigny, E.; Forti, M. C.; Fostier, A. H.; Siqueira, M. B.; Santos, J. C.; Lima, B. A.; Cascão, P.; Ortega, G.; Frade, E. F., Jr.
2016-06-01
Forests play an important role in the control of atmospheric emissions through carbon capture. In forest fires, however, the carbon stored during photosynthesis is released into the atmosphere. Quantifying the carbon released by forest burning is important for developing measures for its control. The aim of this study was to quantify CO2 and CO emissions of forest fires in Western Amazonia. In this paper, results are described of forest fire experiments conducted in Cruzeiro do Sul and Rio Branco, state of Acre, and Candeias do Jamari, state of Rondônia, Brazil. These cities are located in the Western portion of the Brazilian Amazon region. The biomass content per hectare in the virgin forest was measured by indirect methods, using formulas with parameters from forest inventories of the central hectare of the test site. The combustion completeness was estimated by randomly selecting 10% of the total logs and twelve 2 × 2 m2 areas along three transects and examining their consumption rates by the fire. The logs were used to determine the combustion completeness of the larger materials (characteristic diameters larger than 10 cm) and the 2 × 2 m2 areas to determine the combustion completeness of small-size materials (those with characteristic diameters lower than 10 cm). The overall biomass consumption by fire was estimated to be 40.0%, 41.2% and 26.2% in Cruzeiro do Sul, Rio Branco and Candeias do Jamari, respectively. Considering that the combustion gases of carbon in open fires contain approximately 90.0% CO2 and 10.0% CO on a volumetric basis, the average emission rates of these gases from the burning process at the three sites were estimated as 191 ± 46.7 t ha-1 and 13.5 ± 3.3 t ha-1, respectively.
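A worked check of how such emission rates follow from biomass consumption; the biomass loading and carbon fraction below are assumed round numbers, not the paper's field measurements:

    # From consumed biomass to CO2 and CO emission rates (t/ha), assuming the
    # 90%/10% volumetric (molar) CO2/CO split quoted above.
    biomass_t_ha = 300.0                 # assumed above-ground biomass
    consumed_t = 0.40 * biomass_t_ha     # ~40% combustion completeness
    carbon_t = 0.50 * consumed_t         # assumed carbon fraction of biomass
    mol_c = carbon_t * 1e6 / 12.0        # tonnes C -> moles of C
    co2_t_ha = 0.90 * mol_c * 44.0 / 1e6
    co_t_ha = 0.10 * mol_c * 28.0 / 1e6
    print(round(co2_t_ha), round(co_t_ha))   # ~198 and ~14, near the reported rates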
Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process
Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.
2013-01-01
Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531
Improvements in geothermometry. Final technical report. Rev
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, J.; Dibble, W.; Parks, G.
1982-08-01
Alkali and alkaline earth geothermometers are useful for estimating geothermal reservoir temperatures, though a general theoretical basis has yet to be established and experimental calibration needs improvement. Equilibrium cation exchange between feldspars provided the original basis for the Na-K and Na-K-Ca geothermometers (Fournier and Truesdell, 1973), but theoretical, field and experimental evidence prove that neither equilibrium nor feldspars are necessary. Here, evidence is summarized in support of these observations, concluding that these geothermometers can be expected to have a surprisingly wide range of applicability, but that the reasons behind such broad applicability are not yet understood. Early experimental work proved that water-rock interactions are slow at low temperatures, so experimental calibration at temperatures below 150 °C is impractical. Theoretical methods and field data were used instead for all work at low temperatures. Experimental methods were emphasized for temperatures above 150 °C, and the simplest possible solid and solution compositions were used to permit investigation of one process or question at a time. Unexpected results in experimental work prevented complete integration of the various portions of the investigation.
An ab initio study of the C3(+) cation using multireference methods
NASA Technical Reports Server (NTRS)
Taylor, Peter R.; Martin, J. M. L.; Francois, J. P.; Gijbels, R.
1991-01-01
The energy difference between the linear ²Σu⁺ and cyclic ²B₂ structures of C3(+) has been investigated using large (5s3p2d1f) basis sets and multireference electron correlation treatments, including complete active space self-consistent field (CASSCF), multireference configuration interaction (MRCI), and averaged coupled-pair functional (ACPF) methods, as well as the single-reference quadratic configuration interaction (QCISD(T)) method. Our best estimate, including a correction for basis set incompleteness, is that the linear form lies above the cyclic form by 5.2 (+1.5/-1.0) kcal/mol. The ²Σu⁺ state is probably not a transition state, but a local minimum. Reliable computation of the cyclic/linear energy difference in C3(+) is extremely demanding of the electron correlation treatment used: of the single-reference methods previously considered, CCSD(T) and QCISD(T) perform best. The MRCI + Q(0.01)/(4s2p1d) energy separation of 1.68 kcal/mol should provide a comparison standard for other electron correlation methods applied to this system.
Life cycle assessment of metals: a scientific synthesis.
Nuss, Philip; Eckelman, Matthew J
2014-01-01
We have assembled extensive information on the cradle-to-gate environmental burdens of 63 metals in their major use forms, and illustrated the interconnectedness of metal production systems. Related cumulative energy use, global warming potential, human health implications and ecosystem damage are estimated by metal life cycle stage (i.e., mining, purification, and refining). For some elements, these are the first life cycle estimates of environmental impacts reported in the literature. We show that, if compared on a per kilogram basis, the platinum group metals and gold display the highest environmental burdens, while many of the major industrial metals (e.g., iron, manganese, titanium) are found at the lower end of the environmental impacts scale. If compared on the basis of their global annual production in 2008, iron and aluminum display the largest impacts, and thallium and tellurium the lowest. With the exception of a few metals, environmental impacts of the majority of elements are dominated by the purification and refining stages, in which metals are transformed from a concentrate into their metallic form. Of the 63 metals investigated, 42 are obtained as co-products in multi-output processes. We test the sensitivity of the overall results to varying allocation rationales, in which the environmental burdens are allocated to the various metal and mineral products. Monte Carlo simulation is applied to further investigate the stability of our results. This analysis is the most comprehensive life cycle comparison of metals to date and allows for the first time a complete bottom-up estimate of life cycle impacts of the metals and mining sector globally. We estimate global direct and indirect greenhouse gas emissions in 2008 at 3.4 Gt CO2-eq per year and primary energy use at 49 EJ per year (9.5% of global use), and report the shares of all metals in both impact categories. PMID:24999810
Predicting emissions from oil and gas operations in the Uinta Basin, Utah.
Wilkey, Jonathan; Kelly, Kerry; Jaramillo, Isabel Cristina; Spinti, Jennifer; Ring, Terry; Hogue, Michael; Pasqualini, Donatella
2016-05-01
In this study, emissions of ozone precursors from oil and gas operations in Utah's Uinta Basin are predicted (with uncertainty estimates) for 2015-2019 using a Monte Carlo model of (a) drilling and production activity and (b) emission factors. Cross-validation tests against actual drilling and production data from 2010-2014 show that the model can accurately predict both types of activity, returning median results that are within 5% of actual values for drilling, 0.1% for oil production, and 4% for gas production. A variety of one-time (drilling) and ongoing (oil and gas production) emission factors for greenhouse gases, methane, and volatile organic compounds (VOCs) are applied to the predicted oil and gas operations. Based on the range of emission factor values reported in the literature, emissions from well completions are the most significant source, followed by gas transmission and production. We estimate that the annual average VOC emission rate for the oil and gas industry over the 2010-2015 period was 44.2 × 10⁶ (mean) ± 12.8 × 10⁶ (standard deviation) kg VOC per year (with all applicable emissions reductions). On the same basis, over the 2015-2019 period annual average VOC emissions from oil and gas operations are expected to drop 45% to 24.2 × 10⁶ ± 3.43 × 10⁶ kg VOC per year, due to decreases in drilling activity and tighter emission standards. This study improves upon previous methods for estimating emissions of ozone precursors from oil and gas operations in Utah's Uinta Basin by tracking one-time and ongoing emission events on a well-by-well basis. The proposed method has proven highly accurate at predicting drilling and production activity and includes uncertainty estimates to describe the range of potential emissions inventory outcomes. If similar input data are available in other oil and gas producing regions, the method developed here could be applied to those regions as well.
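A minimal sketch of the Monte Carlo logic described above, with per-trial activity draws multiplied by emission-factor draws; all distributions and numbers below are placeholder assumptions, not the study's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 10_000

# Placeholder activity forecasts (NOT the study's calibrated inputs)
wells_drilled = rng.poisson(lam=300, size=n_trials)                # new wells per year
gas_produced = rng.lognormal(mean=16.0, sigma=0.3, size=n_trials)  # Mcf per year, basin total

# Emission factors sampled from hypothetical literature-style ranges
ef_completion = rng.triangular(50, 200, 600, size=n_trials)        # kg VOC per completion
ef_production = rng.triangular(1e-4, 5e-4, 2e-3, size=n_trials)    # kg VOC per Mcf

voc = wells_drilled * ef_completion + gas_produced * ef_production
print(f"median {np.median(voc):.3e}, mean {voc.mean():.3e} "
      f"+/- {voc.std():.3e} kg VOC per year")
```

Repeating such draws well-by-well, with separate one-time and ongoing event streams, yields the inventory distribution whose mean and standard deviation are quoted above.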
Photometry of occultation candidate stars. I - Uranus 1985 and Saturn 1985-1991
NASA Technical Reports Server (NTRS)
French, L. M.; Morales, G.; Dalton, A. S.; Klavetter, J. J.; Conner, S. R.
1985-01-01
Photometric observations of five stars to be occulted by the rings around Uranus are presented. The four stars to be occulted by Saturn or its rings during the period 1985-1991 were also observed. The observations were carried out with a CCD detector attached to the Kitt Peak McGraw-Hill 1.30-m telescope. Landolt standards of widely ranging V-I color indices were used to determine the extinction coefficients, transformation coefficients, and zero points of the stars. Mean extinction coefficients are given for each night of observation. K magnitudes for each star were estimated on the basis of the results of Johnson (1967). The complete photometric data set is given in a series of tables.
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.
1993-01-01
The protonation of N2O and the intramolecular proton transfer in N2OH(+) are studied using various basis sets and a variety of methods, including second-order many-body perturbation theory (MP2), singles and doubles coupled cluster (CCSD), the augmented coupled cluster (CCSD(T)), and complete active space self-consistent field (CASSCF) methods. For geometries, MP2 leads to serious errors even for HNNO(+); for the transition state, only CCSD(T) produces a reliable geometry due to serious nondynamical correlation effects. The proton affinity at 298.15 K is estimated at 137.6 kcal/mol, in close agreement with recent experimental determinations of 137.3 +/- 1 kcal/mol.
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In this article we present an analytic treatment of TLS multi-station least-squares adjustment, focusing on the datum problem. In contrast to previously published research, the datum problem is theoretically analyzed and solved, with the solution based on deriving the nullspace of the mathematical model. Solving the datum problem permits a complete description of TLS multi-station adjustment solutions as a set of minimally constrained least-squares solutions. On the basis of the known nullspace, the estimable parameters are characterized and a geometric interpretation of all minimally constrained least-squares solutions is given. Finally, a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner-constrained least-squares adjustment solutions.
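As an illustration of the nullspace derivation, the datum defect of a rank-deficient design matrix can be obtained numerically from its singular value decomposition; a minimal numpy sketch (generic, not the authors' TLS-specific model):

```python
import numpy as np

def nullspace(A, tol=1e-10):
    """Return an orthonormal basis of the nullspace of A via SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol * s.max()))
    return vt[rank:].T  # columns span the nullspace

# Toy rank-deficient design matrix: its nullspace dimension is the datum defect
A = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [1.0,  0.0, -1.0]])  # row 3 = row 1 + row 2, so rank 2
N = nullspace(A)
print(N.shape)  # (3, 1): one datum parameter (here, a common shift)
```

Minimally constrained solutions then differ only by vectors in the span of these nullspace columns, which is the set the article characterizes.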
Olsen, Jerry S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Watts, Julia A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Allison, Linda J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2001-01-01
In 1980, this data base and the corresponding map were completed after more than 20 years of field investigations, consultations, and analyses of published literature. They characterize the use and vegetative cover of the Earth's land surface with a 0.5° × 0.5° grid. This world-ecosystem-complex data set and the accompanying map provide a current reference base for interpreting the role of vegetation in the global cycling of CO2 and other gases and a basis for improved estimates of vegetation and soil carbon, of natural exchanges of CO2, and of net historic shifts of carbon between the biosphere and the atmosphere.
Improved Feature Matching for Mobile Devices with IMU.
Masiero, Andrea; Vettore, Antonio
2016-08-05
Thanks to the recent diffusion of low-cost, high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environmental surveys has increased steadily in recent years. Automatic feature matching is an important step for successfully completing the photogrammetric 3D reconstruction: it is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
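To illustrate why a known rotation helps: with the relative rotation R supplied by the INS, the epipolar constraint x2^T [t]_x R x1 = 0 becomes linear in the translation t, so t can be recovered directly by SVD. A minimal numpy sketch under these assumptions (not the paper's full two-step procedure):

```python
import numpy as np

def translation_from_rotation(x1, x2, R):
    """Given matched normalized image points x1, x2 (N x 3, homogeneous)
    and the relative rotation R (e.g., from the INS), recover the
    translation direction t. Each match gives one linear equation,
    since x2^T [t]_x (R x1) = t . ((R x1) x x2) = 0."""
    rows = np.cross(x1 @ R.T, x2)   # row i: (R x1_i) x x2_i
    _, _, vt = np.linalg.svd(rows)
    return vt[-1]                   # t, up to sign and overall scale
```

The essential matrix then follows as E = [t]_x R, and inlier checking against E can be used to reject wrong matches.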
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Vio, Gareth A.; Andrianne, Thomas; Razak, Norizham Abdul; Dimitriadis, Grigorios
2012-01-01
The stall flutter response of a rectangular wing in a low speed wind tunnel is modelled using a nonlinear difference equation description. Static and dynamic tests are used to select a suitable model structure and basis function. Bifurcation criteria such as the Hopf condition and the variation of vibration amplitude with airspeed were used to ensure the model was representative of experimentally measured stall flutter phenomena. Dynamic test data were then used to estimate the model parameters and an approximate basis function.
Aspartic acid racemisation in purified elastin from arteries as basis for age estimation.
Dobberstein, R C; Tung, S-M; Ritz-Timme, S
2010-07-01
Aspartic acid racemisation (AAR) results in an age-dependent accumulation of D-aspartic acid in durable human proteins and can be used as a basis for age estimation. Routinely, age estimation based on AAR is performed by analysis of dentine. However, in forensic practice, teeth are not always available. Non-dental tissues may be suitable for age estimation based on AAR if they contain durable proteins that can be purified and analysed. Elastin is such a durable protein. To clarify whether purified elastin from arteries is a suitable sample for biochemical age estimation, AAR was determined in purified elastin from the arteries of individuals of known age (n = 68 individuals, including n = 15 putrefied corpses), considering the influence of different stages of atherosclerosis and putrefaction on the AAR values. AAR was found to increase with age. The relationship between AAR and age was good enough to serve as a basis for age estimation, but weaker than that known from dentinal proteins. Intravital and post-mortem degradation of elastin may have a moderate effect on the AAR values. Age estimation based on AAR in purified elastin from arteries may be a valuable additional tool in the identification of unidentified cadavers, especially in cases where other methods cannot be applied (e.g., no available teeth and body parts).
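For reference, AAR-based age estimation commonly rests on reversible first-order racemisation kinetics (a standard formulation, not specific to this study):

$$\ln\!\left(\frac{1 + D/L}{1 - D/L}\right)_{t} \;-\; \ln\!\left(\frac{1 + D/L}{1 - D/L}\right)_{t_0} \;=\; 2kt,$$

where D/L is the measured aspartic acid enantiomer ratio and k is a protein- and temperature-specific rate constant calibrated on reference samples of known age.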
Oyeyemi, Victor B; Pavone, Michele; Carter, Emily A
2011-12-09
Quantum chemistry has become one of the most reliable tools for characterizing the thermochemical underpinnings of reactions, such as bond dissociation energies (BDEs). The accurate prediction of these properties is challenging for ab initio methods based on perturbative corrections or coupled cluster expansions of the single-determinant Hartree-Fock wave function: the processes of bond breaking and forming are inherently multi-configurational and require an accurate description of non-dynamical electron correlation. To this end, we present a systematic ab initio approach for computing BDEs that is based on three components: (1) multi-reference single and double excitation configuration interaction (MRSDCI) for the electronic energies; (2) a two-parameter scheme for extrapolating MRSDCI energies to the complete basis set limit; and (3) DFT-B3LYP calculations of minimum-energy structures and vibrational frequencies to account for zero-point energy and thermal corrections. We validated our methodology against a set of reliable experimental BDE values for CC and CH bonds of hydrocarbons. The goal of chemical accuracy is achieved, on average, without applying any empirical corrections to the MRSDCI electronic energies. We then use this composite scheme to predict BDEs in a large number of hydrocarbon molecules for which there are no experimental data, so as to provide needed thermochemical estimates for fuel molecules. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
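The two-parameter extrapolation step can be illustrated with the common inverse-cubic two-point form (shown as an assumption; the authors' exact scheme is not reproduced here):

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A / X**3 (Helgaker-style inverse-cubic form),
    where X, Y are the cardinal numbers of the two basis sets."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# e.g., hypothetical triple-zeta (X=3) and quadruple-zeta (X=4)
# correlation energies in hartree
print(cbs_two_point(-0.310512, -0.320114, 3, 4))  # -> approx -0.3271
```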
Varandas, A J C
2009-02-01
The potential energy surface for the C(20)-He interaction is extrapolated for three representative cuts to the complete basis set limit using second-order Møller-Plesset perturbation calculations with correlation consistent basis sets up to the doubly augmented variety. The results both with and without counterpoise correction are consistent with each other, supporting the view that extrapolation without such a correction provides a reliable scheme for avoiding the basis-set superposition error. Converged attributes are obtained for the C(20)-He interaction and used to predict those of the fullerene dimer. Timing data show that the method can be drastically more economical than the counterpoise procedure and even competitive with Kohn-Sham density functional theory for the title system.
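For reference, the counterpoise (Boys-Bernardi) procedure that the extrapolation is shown to sidestep evaluates every fragment in the full dimer basis:

$$\Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{AB} - E_{A}^{AB} - E_{B}^{AB},$$

where subscripts denote the system and superscripts the basis in which it is computed; the cost saving quoted above comes from skipping the two monomer-in-dimer-basis calculations at every geometry.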
A Fine Physical Map of the Rice Chromosome 4
Zhao, Qiang; Zhang, Yu; Cheng, Zhukuan; Chen, Mingsheng; Wang, Shengyue; Feng, Qi; Huang, Yucheng; Li, Ying; Tang, Yesheng; Zhou, Bo; Chen, Zhehua; Yu, Shuliang; Zhu, Jingjie; Hu, Xin; Mu, Jie; Ying, Kai; Hao, Pei; Zhang, Lei; Lu, Yiqi; Zhang, Lei S.; Liu, Yilei; Yu, Zhen; Fan, Danlin; Weng, Qijun; Chen, Ling; Lu, Tingting; Liu, Xiaohui; Jia, Peixin; Sun, Tongguo; Wu, Yongrui; Zhang, Yujun; Lu, Ying; Li, Can; Wang, Rong; Lei, Haiyan; Li, Tao; Hu, Hao; Wu, Mei; Zhang, Runquan; Guan, Jianping; Zhu, Jia; Fu, Gang; Gu, Minghong; Hong, Guofan; Xue, Yongbiao; Wing, Rod; Jiang, Jiming; Han, Bin
2002-01-01
As part of an international effort to completely sequence the rice genome, we have produced a fine bacterial artificial chromosome (BAC)-based physical map of the Oryza sativa japonica Nipponbare chromosome 4 through an integration of 114 sequenced BAC clones from a taxonomically related subspecies O. sativa indica Guangluai 4 and 182 RFLP and 407 expressed sequence tag (EST) markers with the fingerprinted data of the Nipponbare genome. The map consists of 11 contigs with a total length of 34.5 Mb covering 94% of the estimated chromosome size (36.8 Mb). BAC clones corresponding to telomeres, as well as to the centromere position, were determined by BAC-pachytene chromosome fluorescence in situ hybridization (FISH). This gave rise to an estimated length ratio of 5.13 for the long arm and 2.9 for the short arm (on the basis of the physical map), which indicates that the short arm is a highly condensed one. The FISH analysis and physical mapping also showed that the short arm and the pericentromeric region of the long arm are rich in heterochromatin, which occupied 45% of the chromosome, indicating that this chromosome is likely very difficult to sequence. To our knowledge, this map provides the first example of a rapid and reliable physical mapping on the basis of the integration of the data from two taxonomically related subspecies. [The following individuals and institutions kindly provided reagents, samples, or unpublished information as indicated in the paper: S. McCouch, T. Sasaki, and Monsanto.] PMID:11997348
Comparison of local- to regional-scale estimates of ground-water recharge in Minnesota, USA
Delin, G.N.; Healy, R.W.; Lorenz, D.L.; Nimmo, J.R.
2007-01-01
Regional ground-water recharge estimates for Minnesota were compared to estimates made on the basis of four local- and basin-scale methods. Three local-scale methods (unsaturated-zone water balance (UZWB), water-table fluctuations (WTF) using three approaches, and age dating of ground water) yielded point estimates of recharge that represent spatial scales from about 1 to about 1000 m2. A fourth method (RORA, a basin-scale analysis of streamflow records using a recession-curve-displacement technique) yielded recharge estimates at a scale of 10-1000s of km2. The RORA basin-scale recharge estimates were regionalized to estimate recharge for the entire State of Minnesota on the basis of a regional regression recharge (RRR) model that also incorporated soil and climate data. Recharge rates estimated by the RRR model compared favorably to the local- and basin-scale recharge estimates. RRR estimates at study locations were about 41% less on average than the unsaturated-zone water-balance estimates, ranged from 44% greater to 12% less than estimates based on the three WTF approaches, were about 4% less than the ground-water age-dating estimates, and were about 5% greater than the RORA estimates. Of the methods used in this study, the WTF method is the simplest and easiest to apply. Recharge estimates made on the basis of the UZWB method were inconsistent with the results from the other methods. Recharge estimates from the RRR model could be a good source of input for regional ground-water flow models; RRR model results currently are being applied for this purpose in USGS studies elsewhere.
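For reference, the WTF method estimates recharge from the product of specific yield and the recharge-attributed water-table rise (standard formulation):

$$R = S_y \,\frac{\Delta H}{\Delta t},$$

where $S_y$ is the specific yield of the aquifer material and $\Delta H$ is the water-table rise over the interval $\Delta t$.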
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook
2015-03-07
We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
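A minimal sketch of one-dimensional sinc-type basis functions of the general form underlying the Lagrange-sinc basis (illustrative only; the production method involves 3D grids and pseudopotentials):

```python
import numpy as np

def sinc_basis(x, centers, h):
    """phi_j(x) = sinc((x - x_j)/h): each function equals 1 at its own
    grid point and 0 at all other grid points (cardinality property)."""
    return np.sinc((x[:, None] - centers[None, :]) / h)

h = 0.2                                  # scaling factor (grid spacing)
centers = np.arange(-2.0, 2.0 + h, h)    # uniform grid
x = np.linspace(-2.0, 2.0, 401)
phi = sinc_basis(x, centers, h)          # (401, n_basis) design matrix

f = np.exp(-x**2)                        # sample function
coeffs = np.exp(-centers**2)             # cardinality: coefficients = grid values
print(np.abs(phi @ coeffs - f).max())    # residual, limited by grid truncation
```

Decreasing the scaling factor h enlarges the basis and systematically tightens the residual, which is the convergence knob referred to above.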
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, various methods of fuzzy set theory (fuzzy mathematics) have been applied to the quantitative estimation of time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. Using methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively; highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized; similarities in the temporal variation of seismic activity and seismic gaps can be examined; and the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalence relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards over different terms can be estimated. This paper mainly deals with medium- and short-term precursors observed in Japan and China.
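As an illustration of the fuzzy-equivalence ingredient, a minimal numpy sketch of the max-min transitive closure that turns a fuzzy similarity relation into a fuzzy equivalence relation for clustering (generic, not the authors' implementation):

```python
import numpy as np

def transitive_closure(R, max_iter=100):
    """Max-min transitive closure of a fuzzy similarity relation R."""
    for _ in range(max_iter):
        # max-min composition: (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        R2 = np.maximum(R, np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1))
        if np.array_equal(R2, R):
            return R
        R = R2
    return R

R = np.array([[1.0, 0.8, 0.3],
              [0.8, 1.0, 0.4],
              [0.3, 0.4, 1.0]])
C = transitive_closure(R)
print((C >= 0.4).astype(int))  # lambda-cut at 0.4: all three items cluster together
```

Clusters at confidence level lambda are then the connected components of the Boolean cut C >= lambda.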
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2016-05-01
With the aim of systematically characterizing the convergence of common families of basis sets such that general recommendations for basis sets can be made, we have tested a wide variety of basis sets against complete-basis binding energies across the S22 set of intermolecular interactions—noncovalent interactions of small and medium-sized molecules consisting of first- and second-row atoms—with three distinct density functional approximations: SPW92, a form of local-density approximation; B3LYP, a global hybrid generalized gradient approximation; and B97M-V, a meta-generalized gradient approximation with nonlocal correlation. We have found that it is remarkably difficult to reach the basis set limit; for the methods and systems examined, the most complete basis is Jensen's pc-4. The Dunning correlation-consistent sequence of basis sets converges slowly relative to the Jensen sequence. The Karlsruhe basis sets are quite cost effective, particularly when a correction for basis set superposition error is applied: counterpoise-corrected def2-SVPD binding energies are better than corresponding energies computed in comparably sized Dunning and Jensen bases, and on par with uncorrected results in basis sets 3-4 times larger. These trends are exhibited regardless of the level of density functional approximation employed. A sense of the magnitude of the intrinsic incompleteness error of each basis set not only provides a foundation for guiding basis set choice in future studies but also facilitates quantitative comparison of existing studies on similar types of systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1994-11-30
Universal Oil Products, Inc. (UOP) of Des Plaines, Illinois has contracted A.E. Roberts & Associates, Inc. (AERA) of Atlanta, Georgia to prepare a sensitivity analysis for the development of the Fluidized-bed Copper Oxide (FBCO) process. As proposed by AERA in September 1991, development of the FBCO process design for a 500 megawatt (MW) unit was divided into three tasks: (1) establishment of a design basis, (2) conceptual design, and (3) cost analysis. Task 1 determined the basis for a conceptual design of the 500 MW FBCO process. It was completed by AERA in September of 1992, and a report, "Establishment of the Design Basis for Application to a 500 MW Coal-fired Facility," was submitted at that time. Task 2 gathered all pertinent data available to date and reviewed its applicability to the 500 MW FBCO process. Work on this task was carried out jointly by the AERA team members: Roberts & Schaefers worked on the dense-phase transport aspect of the design; Cornell and Carnegie Mellon Universities worked on the design kinetics and modeling; and AERA contributed commercial power and combustion experience. Task 3 provides budgetary cost estimates for the FBCO process and competing alternative technologies for sulfur dioxide and nitrogen oxide removal.
Composite vibrational spectroscopy of the group 12 difluorides: ZnF2, CdF2, and HgF2.
Solomonik, Victor G; Smirnov, Alexander N; Navarkin, Ilya S
2016-04-14
The vibrational spectra of group 12 difluorides, MF2 (M = Zn, Cd, Hg), were investigated via coupled cluster singles, doubles, and perturbative triples, CCSD(T), including core correlation, with a series of correlation consistent basis sets ranging in size from triple-zeta through quintuple-zeta quality, which were then extrapolated to the complete basis set (CBS) limit using a variety of extrapolation procedures. The explicitly correlated coupled cluster method, CCSD(T)-F12b, was employed as well. Although exhibiting quite different convergence behavior, the F12b method yielded the CBS limit estimates closely matching more computationally expensive conventional CBS extrapolations. The convergence with respect to basis set size was examined for the contributions entering into composite vibrational spectroscopy, including those from higher-order correlation accounted for through the CCSDT(Q) level of theory, second-order spin-orbit coupling effects assessed within four-component and two-component relativistic formalisms, and vibrational anharmonicity evaluated via a perturbative treatment. Overall, the composite results are in excellent agreement with available experimental values, except for the CdF2 bond-stretching frequencies compared to spectral assignments proposed in a matrix isolation infrared and Raman study of cadmium difluoride vapor species [Loewenschuss et al., J. Chem. Phys. 50, 2502 (1969); Givan and Loewenschuss, J. Chem. Phys. 72, 3809 (1980)]. These assignments are called into question in the light of the composite results.
Tug fleet and ground operations schedules and controls. Volume 3: Program cost estimates
NASA Technical Reports Server (NTRS)
1975-01-01
Cost data for the tug DDT&E and operations phases are presented. Option 6, the recommended option selected from the seven options considered, was used as the basis for ground processing estimates. Option 6 provides for processing the tug in a factory-clean environment in the low bay area of the VAB, with subsequent cleaning to visibly clean. The basis and results of the trade study used to select the Option 6 processing plan are included. Cost estimating methodology, a work breakdown structure (WBS), and a dictionary of WBS definitions are also provided.
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
This report presents a collection of practical techniques to address modeling issues for a particular methodology, Radial Basis Function networks. These techniques include efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the data collection process. Modeling then amounts to a careful consideration of the interplay between model complexity and reliability; these are recurrent themes.
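A minimal sketch of the core construct, a linear-in-weights Gaussian RBF fit to a noisy series; the report's parameter-estimation, pruning, and error-bar machinery is omitted and all values are illustrative:

```python
import numpy as np

def rbf_design(x, centers, width):
    """Gaussian RBF design matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / (2 w^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width**2))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)  # noisy observations

centers = np.linspace(0, 1, 12)              # fixed RBF centers
Phi = rbf_design(x, centers, width=0.08)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # weights by linear least squares
print(np.abs(Phi @ w - y).mean())            # in-sample fit error
```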
Bryantsev, Vyacheslav S; Diallo, Mamadou S; van Duin, Adri C T; Goddard, William A
2009-04-14
In this paper we assess the accuracy of the B3LYP, X3LYP, and newly developed M06-L, M06-2X, and M06 functionals to predict the binding energies of neutral and charged water clusters, including (H2O)n (n = 2-8, 20), H3O(+)(H2O)n (n = 1-6), and OH(-)(H2O)n (n = 1-6). We also compare the predicted energies of two ion hydration and neutralization reactions on the basis of the calculated binding energies. In all cases, we use as benchmarks calculated binding energies of water clusters extrapolated to the complete basis set limit of second-order Møller-Plesset perturbation theory, with the effects of higher-order correlation estimated at the coupled-cluster theory with single, double, and perturbative triple excitations in the aug-cc-pVDZ basis set. We rank the accuracy of the functionals on the basis of the mean unsigned error (MUE) between calculated benchmark and density functional theory energies. The corresponding MUE (kcal/mol) for each functional is listed in parentheses. We find that M06-L (0.73) and M06 (0.84) give the most accurate binding energies using very extended basis sets such as aug-cc-pV5Z. For more affordable basis sets, the best methods for predicting the binding energies of water clusters are M06-L/aug-cc-pVTZ (1.24), B3LYP/6-311++G(2d,2p) (1.29), and M06/aug-cc-pVTZ (1.33). M06-L/aug-cc-pVTZ also gives more accurate energies for the neutralization reactions (1.38), whereas B3LYP/6-311++G(2d,2p) gives more accurate energies for the ion hydration reactions (1.69).
Porter, Kimberly R; McCarthy, Bridget J; Freels, Sally; Kim, Yoonsang; Davis, Faith G
2010-06-01
Prevalence is the best indicator of cancer survivorship in the population, but few studies have focused on brain tumor prevalence because of previous data limitations. Hence, the full impact of primary brain tumors on the healthcare system in the United States is not completely described. The present study provides an estimate of the prevalence of disease in the United States, updating an earlier prevalence study. Incidence data for 2004 and survival data for 1985-2005 were obtained by the Central Brain Tumor Registry of the United States from selected regions, modeled under 2 different survival assumptions, to estimate prevalence rates for the year 2004 and projected estimates for 2010. The overall incidence rate for primary brain tumors was 18.1 per 100 000 person-years with 2-, 5-, 10-, and 20-year observed survival rates of 62%, 54%, 45%, and 30%, respectively. On the basis of the sum of nonmalignant and averaged malignant estimates, the overall prevalence rate of individuals with a brain tumor was estimated to be 209.0 per 100 000 in 2004 and 221.8 per 100 000 in 2010. The female prevalence rate (264.8 per 100 000) was higher than that in males (158.7 per 100 000). The averaged prevalence rate for malignant tumors (42.5 per 100 000) was lower than the prevalence for nonmalignant tumors (166.5 per 100 000). This study provides estimates of the 2004 (n = 612 770) and 2010 (n = 688 096) expected number of individuals living with primary brain tumor diagnoses in the United States, providing more current and robust estimates for aiding healthcare planning and patient advocacy for an aging US population.
On the optimization of Gaussian basis sets
NASA Astrophysics Data System (ADS)
Petersson, George A.; Zhong, Shijun; Montgomery, John A.; Frisch, Michael J.
2003-01-01
A new procedure for the optimization of the exponents, $\alpha_j$, of Gaussian basis functions, $Y_{lm}(\vartheta,\varphi)\,r^{l}e^{-\alpha_j r^{2}}$, is proposed and evaluated. The direct optimization of the exponents is hindered by the very strong coupling between these nonlinear variational parameters. However, expansion of the logarithms of the exponents in the orthonormal Legendre polynomials, $P_k$, of the index, $j$: $\ln \alpha_j = \sum_{k=0}^{k_{\max}} A_k P_k\!\left(\frac{2j-2}{N_{\mathrm{prim}}-1} - 1\right)$, yields a new set of well-conditioned parameters, $A_k$, and a complete sequence of well-conditioned exponent optimizations proceeding from the even-tempered basis set ($k_{\max}=1$) to a fully optimized basis set ($k_{\max}=N_{\mathrm{prim}}-1$). The error relative to the exact numerical self-consistent-field limit for a six-term expansion is consistently no more than 25% larger than the error for the completely optimized basis set. Thus, there is no need to optimize more than six well-conditioned variational parameters, even for the largest sets of Gaussian primitives.
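The expansion is directly implementable; a minimal numpy sketch with hypothetical coefficient values $A_k$:

```python
import numpy as np
from numpy.polynomial import legendre

def exponents(A, n_prim):
    """ln(alpha_j) = sum_k A_k P_k(u_j), with u_j = (2j - 2)/(n_prim - 1) - 1
    mapping the exponent index j = 1..n_prim onto [-1, 1]."""
    j = np.arange(1, n_prim + 1)
    u = (2 * j - 2) / (n_prim - 1) - 1
    return np.exp(legendre.legval(u, A))

# kmax = 1 (two coefficients) reproduces an even-tempered series
# alpha_j = a * b**j; the A_k values here are hypothetical
print(exponents([2.0, -3.0], 6))
```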
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
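Schematically, the two mixture components described above are (notation illustrative):

$$\pi_i(x) = \frac{\exp(\beta_i^{\top} x)}{\sum_{l} \exp(\beta_l^{\top} x)}, \qquad h_i(t \mid x) = h_{0i}(t)\,\exp(\gamma_i^{\top} x),$$

where $\pi_i(x)$ is the probability of failure type $i$ given covariates $x$ and $h_i$ its conditional hazard; in the semi-parametric approach the component-baseline hazards $h_{0i}(t)$ are left unspecified and the $\beta$, $\gamma$ coefficients are estimated jointly via the ECM algorithm.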
Physicochemical basis for water-actuated movement and stress generation in nonliving plant tissues.
Bertinetti, L; Fischer, F D; Fratzl, P
2013-12-06
Generating stresses and strains through water uptake from atmospheric humidity is a common process in nature, e.g., in seed dispersal. Actuation depends on a balance between chemical interactions and the elastic energy required to accomplish the volume change. In order to study the poorly understood chemical interactions, we combine mechanosorption experiments with theoretical calculations of the swelling behavior to estimate the mechanical energy and extract the contribution of the chemical energy per absorbed water molecule. The latter is highest in the completely dry state and stays almost constant at about 1.2 kT for higher hydrations. This suggests that water bound to the macromolecular components of the wood tissues acquires one additional hydrogen bond per eight water molecules, thus providing energy for actuation.
Identification of the odour and chemical composition of alumina refinery air emissions.
Coffey, P S; Ioppolo-Armanios, M
2004-01-01
Alcoa World Alumina Australia has undertaken comprehensive air emissions monitoring aimed at characterising and quantifying the complete range of emissions to the atmosphere from Bayer refining of alumina at its Western Australian refineries. To the best of our knowledge, this project represents the most complete air emissions inventory of a Bayer refinery conducted in the worldwide alumina industry. It adds considerably to knowledge of air emission factors available for use in emissions estimation required under national pollutant release and transfer registers (NPRTs), such as the Toxics Release Inventory (USA) and the National Pollutant Inventory (Australia). It also allows the preliminary identification of the key chemical components responsible for characteristic alumina refinery odours and the contribution of these components to the quality, or hedonic tone, of the odours. The strength and acceptability of refinery odours to employees and neighbours appears to be dependent upon where and in what proportion the odorous gases have been emitted from the refineries. This paper presents the results of the programme and develops a basis for classifying the odour properties of the key emission sources in the alumina-refining process.
24 CFR 200.222 - Certification of previous record on basis of a master list.
Code of Federal Regulations, 2010 CFR
2010-04-01
... basis of a master list. 200.222 Section 200.222 Housing and Urban Development Regulations Relating to... Certification of previous record on basis of a master list. A principal may avoid repetitious listings by providing HUD with a complete master list, acceptable to the Participation Control Officer, of all projects...
24 CFR 200.222 - Certification of previous record on basis of a master list.
Code of Federal Regulations, 2014 CFR
2014-04-01
... basis of a master list. 200.222 Section 200.222 Housing and Urban Development Regulations Relating to... Certification of previous record on basis of a master list. A principal may avoid repetitious listings by providing HUD with a complete master list, acceptable to the Participation Control Officer, of all projects...
24 CFR 200.222 - Certification of previous record on basis of a master list.
Code of Federal Regulations, 2013 CFR
2013-04-01
... basis of a master list. 200.222 Section 200.222 Housing and Urban Development Regulations Relating to... Certification of previous record on basis of a master list. A principal may avoid repetitious listings by providing HUD with a complete master list, acceptable to the Participation Control Officer, of all projects...
24 CFR 200.222 - Certification of previous record on basis of a master list.
Code of Federal Regulations, 2012 CFR
2012-04-01
... basis of a master list. 200.222 Section 200.222 Housing and Urban Development Regulations Relating to... Certification of previous record on basis of a master list. A principal may avoid repetitious listings by providing HUD with a complete master list, acceptable to the Participation Control Officer, of all projects...
24 CFR 200.222 - Certification of previous record on basis of a master list.
Code of Federal Regulations, 2011 CFR
2011-04-01
... basis of a master list. 200.222 Section 200.222 Housing and Urban Development Regulations Relating to... Certification of previous record on basis of a master list. A principal may avoid repetitious listings by providing HUD with a complete master list, acceptable to the Participation Control Officer, of all projects...
14 CFR 152.315 - Reporting on accrual basis.
Code of Federal Regulations, 2010 CFR
2010-01-01
...) Except as provided in paragraph (b) of this section each sponsor or planning agency shall submit all financial reports on an accrual basis. (b) If records are not maintained on an accrual basis by a sponsor or planning agency, reports may be based on an analysis of records or best estimates. ...
24 CFR Appendix C to Part 3500 - Instructions for Completing Good Faith Estimate (GFE) Form
Code of Federal Regulations, 2012 CFR
2012-04-01
... 24 Housing and Urban Development 5 2012-04-01 2012-04-01 false Instructions for Completing Good Faith Estimate (GFE) Form C Appendix C to Part 3500 Housing and Urban Development Regulations Relating.... 3500, App. C Appendix C to Part 3500—Instructions for Completing Good Faith Estimate (GFE) Form The...
Padoan, Andrea; Antonelli, Giorgia; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-10-26
The present study was prompted by the ISO 15189 requirement that medical laboratories estimate measurement uncertainty (MU). The method used to estimate MU included: (a) the identification of quantitative tests, (b) the classification of tests in relation to their clinical purpose, and (c) the identification of criteria to estimate the different MU components. Imprecision was estimated using long-term internal quality control (IQC) results of the year 2016, while external quality assessment scheme (EQA) results obtained in the period 2015-2016 were used to estimate bias and bias uncertainty. A total of 263 measurement procedures (MPs) were analyzed. On the basis of test purpose, for 51 MPs imprecision alone was used to estimate MU; among the remaining MPs, the bias component was not estimable for 22 MPs because EQA results did not provide reliable statistics. For a total of 28 MPs, two or more MU values were calculated on the basis of analyte concentration levels. Overall, results showed that the uncertainty of bias is a minor contributor to MU, the bias component itself being the most relevant contributor for all the studied sample matrices. The model chosen for MU estimation allowed us to derive a standardized approach for bias calculation, with respect to the fitness for purpose of test results. Measurement uncertainty estimation could readily be implemented in medical laboratories as a useful tool for monitoring the analytical quality of test results, since MU values are calculated using a combination of long-term IQC imprecision results and bias based on EQA results.
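A common top-down formulation consistent with this design, shown as an illustration rather than the authors' exact equations, combines within-laboratory reproducibility from IQC with a bias term from EQA:

$$u_c = \sqrt{u_{R_w}^2 + u_{\mathrm{bias}}^2}, \qquad u_{\mathrm{bias}} = \sqrt{\mathrm{RMS}_{\mathrm{bias}}^2 + u_{C_{\mathrm{ref}}}^2}, \qquad U = 2\,u_c,$$

where $u_{R_w}$ is the long-term IQC imprecision, $\mathrm{RMS}_{\mathrm{bias}}$ the root-mean-square bias against EQA targets, $u_{C_{\mathrm{ref}}}$ the uncertainty of those targets, and $U$ the expanded uncertainty at coverage factor 2.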
The role of remotely-sensed evapotranspiration data in watershed water resources management
NASA Astrophysics Data System (ADS)
Shuster, W.; Carroll, M.; Zhang, Y.
2006-12-01
Evapotranspiration (ET) is an important component of the watershed hydrologic cycle and a key factor to consider in water resource planning. Partly due to the loss of evaporation pans from the national network in the 1980s because of budget cuts, ET values are not available for many locations in the US, and practitioners often have to rely on climatically averaged regional estimates instead. Several new approaches have been developed for estimating ET via remote sensing. In this study we employ one established approach that allows us to derive ET estimates at 1-km² resolution on the basis of AVHRR brightness temperature. By applying this method to southwestern Ohio we obtain ET estimates for a 2-km², partially suburban watershed near Cincinnati, OH. Along with precipitation and surface discharge measurements, these remotely sensed ET estimates form the basis for determining both long- and short-term water budgets for this watershed. These ET estimates are then compared with regional climatic values on a seasonal basis to examine the differences that area-specific ET values can introduce into our conceptualization of watershed processes. We then discuss implications of this work for more widespread application to watershed management imperatives (e.g., stream ecological health).
Problematic video game use: estimated prevalence and associations with mental and physical health.
Mentzoni, Rune Aune; Brunborg, Geir Scott; Molde, Helge; Myrseth, Helga; Skouverøe, Knut Joachim Mår; Hetland, Jørn; Pallesen, Ståle
2011-10-01
A nationwide survey was conducted to investigate the prevalence of video game addiction and problematic video game use and their association with physical and mental health. An initial sample comprising 2,500 individuals was randomly selected from the Norwegian National Registry. A total of 816 (34.0 percent) individuals completed and returned the questionnaire. The majority (56.3 percent) of respondents used video games on a regular basis. The prevalence of video game addiction was estimated to be 0.6 percent, with problematic use of video games reported by 4.1 percent of the sample. Gender (male) and age group (young) were strong predictors for problematic use of video games. A higher proportion of high frequency compared with low frequency players preferred massively multiplayer online role-playing games, although the majority of high frequency players preferred other game types. Problematic use of video games was associated with lower scores on life satisfaction and with elevated levels of anxiety and depression. Video game use was not associated with reported amount of physical exercise.
Gait recognition based on Gabor wavelets and modified gait energy image for human identification
NASA Astrophysics Data System (ADS)
Huang, Deng-Yuan; Lin, Ta-Wei; Hu, Wu-Chih; Cheng, Chih-Hsiang
2013-10-01
This paper proposes a method for recognizing human identity using gait features based on Gabor wavelets and modified gait energy images (GEIs). Identity recognition by gait generally involves gait representation, extraction, and classification. In this work, a modified GEI convolved with an ensemble of Gabor wavelets is proposed as a gait feature. Principal component analysis is then used to project the Gabor-wavelet-based gait features into a lower-dimension feature space for subsequent classification. Finally, support vector machine classifiers based on a radial basis function kernel are trained and utilized to recognize human identity. The major contributions of this paper are as follows: (1) the consideration of the shadow effect to yield a more complete segmentation of gait silhouettes; (2) the utilization of motion estimation to track people when walkers overlap; and (3) the derivation of modified GEIs to extract more useful gait information. Extensive performance evaluation shows a great improvement of recognition accuracy due to the use of shadow removal, motion estimation, and gait representation using the modified GEIs and Gabor wavelets.
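A minimal sketch of the GEI-PCA-SVM backbone of such a pipeline on synthetic silhouettes; the Gabor filtering, shadow removal, and motion estimation steps are omitted, and all data here are random placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def gait_energy_image(silhouettes):
    """GEI: pixel-wise mean of aligned binary silhouettes over one
    gait cycle (T x H x W -> H x W)."""
    return np.mean(silhouettes, axis=0)

rng = np.random.default_rng(1)
seqs = rng.integers(0, 2, size=(40, 20, 64, 44)).astype(float)  # placeholder cycles
labels = np.repeat(np.arange(8), 5)                             # 8 subjects x 5 cycles

feats = np.stack([gait_energy_image(s).ravel() for s in seqs])
feats = PCA(n_components=20).fit_transform(feats)  # lower-dimensional projection
clf = SVC(kernel="rbf").fit(feats, labels)         # RBF-kernel classifier
print(clf.score(feats, labels))                    # training accuracy
```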
Minimum prevalence of chromosome 22q11 deletions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, D.I.; Cross, I.E.; Burn, J.
1994-09-01
Submicroscopic deletions from within chromosome 22q11 are associated with DiGeorge (DGS), velocardiofacial (VCFS) and conotruncal anomaly syndromes and isolated congenital heart defects. In 1993 our pediatric cardiologists clinically referred all children in whom a chromosome 22q11 deletion was suspected for fluorescent in situ hybridization studies using probes from the DGS critical region. 10 affected individuals have been identified to date from the children born in 1993 in the Northern Region served exclusively by our center. A further case, the subsequent pregnancy in one of these families, was affected and terminated on the basis of a major heart malformation. In the years 1988-92, for which we have complete ascertainment, there were 1009 heart defects among 191,700 births (mean 202 per annum). Thus we estimate that chromosome 22q11 deletions were the cause of at least 5% of congenital heart disease. As not all children with chromosome 22q11 deletions have a heart defect, this gives an estimated minimum prevalence of 1/4000 live births.
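The arithmetic behind those figures, using only the numbers given: 1009 heart defects over 191,700 births (1988-92) is about 202 per year; roughly 10 deletion-positive children among the approximately 38,340 births of 1993 gives 10/202, or about 5%, of heart defects, and 10/38,340, or about 1 in 3,800, hence the quoted minimum prevalence of roughly 1/4000 live births.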
Individual-Based Completion Rates for Apprentices. Technical Paper
ERIC Educational Resources Information Center
Karmel, Tom
2011-01-01
Low completion rates for apprentices and trainees have received considerable attention recently and it has been argued that NCVER seriously understates completion rates. In this paper Tom Karmel uses NCVER data on recommencements to estimate individual-based completion rates. It is estimated that around one-quarter of trade apprentices swap…
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
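Schematically (our paraphrase, with $\sigma^2$ the noise variance): each image is modeled as $y_i = P_i x_i + \varepsilon_i$, and with $\hat{\mu}$ an estimate of the mean volume, a least-squares covariance estimator of the kind described solves

$$\hat{\Sigma} = \arg\min_{\Sigma}\; \sum_i \left\| (y_i - P_i\hat{\mu})(y_i - P_i\hat{\mu})^{\top} - P_i \Sigma P_i^{\top} - \sigma^2 I \right\|_F^2,$$

whose normal equations form the linear system, involving the projection covariance transform, referred to above.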
Inference for lidar-assisted estimation of forest growing stock volume
Ronald E. McRoberts; Erik Næsset; Terje Gobakken
2013-01-01
Estimates of growing stock volume are reported by the national forest inventories (NFI) of most countries and may serve as the basis for aboveground biomass and carbon estimates as required by an increasing number of international agreements. The probability-based (design-based) statistical estimators traditionally used by NFIs to calculate estimates are generally...
NASA Astrophysics Data System (ADS)
Lambrecht, Daniel S.; McCaslin, Laura; Xantheas, Sotiris S.; Epifanovsky, Evgeny; Head-Gordon, Martin
2012-10-01
This work reports refinements of the energetic ordering of the known low-energy structures of sulphate-water clusters SO4(2-)(H2O)n (n = 3-6) using high-level electronic structure methods. Coupled cluster singles and doubles with perturbative triples (CCSD(T)) is used in combination with an estimate of basis set effects up to the complete basis set limit using second-order Møller-Plesset theory. Harmonic zero-point energy (ZPE), included at the B3LYP/6-311++G(3df,3pd) level, was found to have a significant effect on the energetic ordering. In fact, we show that the energetic ordering is a result of a delicate balance between the electronic and vibrational energies. Limitations of the ZPE calculations, both due to electronic structure errors and due to use of the harmonic approximation, probably constitute the largest remaining errors. Due to the often small energy differences between cluster isomers, and the significant role of ZPE, deuteration can alter the relative energies of low-lying structures and, when it is applied in conjunction with calculated harmonic ZPEs, even alters the global minimum for n = 5. Experiments on deuterated clusters, as well as more sophisticated vibrational calculations, may therefore be quite interesting.
Computer mapping of LANDSAT data for environmental applications
NASA Technical Reports Server (NTRS)
Rogers, R. H. (Principal Investigator); Mckeon, J. B.; Reed, L. E.; Schmidt, N. F.; Schecter, R. N.
1975-01-01
The author has identified the following significant results. Land cover overlays and maps produced from LANDSAT are providing information on existing land use and resources throughout the 208 study area. The overlays are being used to delineate drainage areas of a predominant land cover type. Information on cover type is also being combined with other pertinent data to develop estimates of sediment and nutrient flows from the drainage area. The LANDSAT inventory of present land cover, together with population projections, is providing a basis for developing maps of anticipated land use patterns required to evaluate the impact on water quality which may result from these patterns. Overlays of forest types were useful for defining wildlife habitat and vegetational resources in the region. LANDSAT data and computer-assisted interpretation were found to be a rapid, cost-effective procedure for inventorying land cover on a regional basis. The entire 208 inventory, which included acquisition of ground truth, LANDSAT tapes, computer processing, and production of overlays and coded tapes, was completed within a period of 2 months at a cost of about 0.6 cents per acre, a significant improvement in time and cost over conventional photointerpretation and mapping techniques.
Bayesian Retrieval of Complete Posterior PDFs of Oceanic Rain Rate From Microwave Observations
NASA Technical Reports Server (NTRS)
Chiu, J. Christine; Petty, Grant W.
2005-01-01
This paper presents a new Bayesian algorithm for retrieving surface rain rate from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes' theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance our understanding of the theoretical benefits of the Bayesian approach, we have conducted sensitivity analyses based on two synthetic datasets for which the true conditional and prior distributions are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak, due to saturation effects. It is also suggested that the choice of the estimators and the prior information are both crucial to the retrieval. In addition, the performance of our Bayesian algorithm is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
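A schematic of the full-posterior idea follows, using an invented single-channel forward model whose brightness temperature saturates at high rain rates; the operational algorithm uses multichannel radiative-transfer likelihoods, so every number here is illustrative only. The saturation reproduces the weak-constraint bias discussed above: at high rain rates the likelihood flattens and the prior dominates.

    import numpy as np

    r = np.linspace(0.01, 30.0, 600)       # candidate rain rates (mm/h)
    dr = r[1] - r[0]
    prior = np.exp(-0.5 * ((np.log(r) - np.log(2.0)) / 1.0) ** 2) / r
    prior /= prior.sum() * dr              # lognormal-ish prior PDF

    def tb(rain):                          # hypothetical forward model (K)
        return 180.0 + 100.0 * (1.0 - np.exp(-rain / 8.0))

    obs, sigma = 255.0, 3.0                # observed Tb and channel noise
    like = np.exp(-0.5 * ((obs - tb(r)) / sigma) ** 2)

    post = prior * like
    post /= post.sum() * dr                # continuous posterior PDF of rain rate
    mean_r = (r * post).sum() * dr         # posterior-mean estimator
    map_r = r[np.argmax(post)]             # posterior-mode estimator
    print(mean_r, map_r)                   # estimator choice matters, as noted above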
Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data
NASA Astrophysics Data System (ADS)
Sampson, Paul D.; Szpiro, Adam A.; Sheppard, Lianne; Lindström, Johan; Kaufman, Joel D.
2011-11-01
Statistical analyses of health effects of air pollution have increasingly used GIS-based covariates for prediction of ambient air quality in "land use" regression models. More recently these spatial regression models have accounted for spatial correlation structure in combining monitoring data with land use covariates. We present a flexible spatio-temporal modeling framework and pragmatic, multi-step estimation procedure that accommodates essentially arbitrary patterns of missing data with respect to an ideally complete space by time matrix of observations on a network of monitoring sites. The methodology incorporates a model for smooth temporal trends with coefficients varying in space according to Partial Least Squares regressions on a large set of geographic covariates, and nonstationary modeling of spatio-temporal residuals from these regressions. This work was developed to provide spatial point predictions of PM2.5 concentrations for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) using irregular monitoring data derived from the AQS regulatory monitoring network and supplemental short-time scale monitoring campaigns conducted to better predict intra-urban variation in air quality. We demonstrate the interpretation and accuracy of this methodology in modeling data from 2000 through 2006 in six U.S. metropolitan areas and establish a basis for likelihood-based estimation.
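A compressed sketch of the two-stage idea (all data and dimensions invented): fit a smooth temporal basis per site using whatever observations that site happens to have, then regress the site-specific trend coefficients on the geographic covariates with Partial Least Squares so the coefficients, and hence concentrations, can be predicted at unmonitored locations. Requires scikit-learn.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n_sites, n_times, n_geo = 40, 104, 30
    t = np.linspace(0, 1, n_times)
    B = np.column_stack([np.ones_like(t), np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

    geo = rng.normal(size=(n_sites, n_geo))              # GIS covariates per site
    true_coef = geo[:, :3] @ rng.normal(size=(3, B.shape[1]))
    pm25 = true_coef @ B.T + 0.3 * rng.normal(size=(n_sites, n_times))
    pm25[rng.random(pm25.shape) < 0.4] = np.nan          # irregular monitoring

    # Stage 1: per-site least-squares fit of the smooth temporal basis
    coef = np.empty((n_sites, B.shape[1]))
    for i in range(n_sites):
        ok = ~np.isnan(pm25[i])
        coef[i], *_ = np.linalg.lstsq(B[ok], pm25[i, ok], rcond=None)

    # Stage 2: PLS of trend coefficients on geographic covariates
    pls = PLSRegression(n_components=3).fit(geo, coef)
    print(np.round(pls.predict(geo[:2]), 2))             # predicted trend coefficients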
Lu, Qiongshi; Li, Boyang; Ou, Derek; Erlendsdottir, Margret; Powles, Ryan L; Jiang, Tony; Hu, Yiming; Chang, David; Jin, Chentian; Dai, Wei; He, Qidu; Liu, Zefeng; Mukherjee, Shubhabrata; Crane, Paul K; Zhao, Hongyu
2017-12-07
Despite the success of large-scale genome-wide association studies (GWASs) on complex traits, our understanding of their genetic architecture is far from complete. Jointly modeling multiple traits' genetic profiles has provided insights into the shared genetic basis of many complex traits. However, large-scale inference sets a high bar for both statistical power and biological interpretability. Here we introduce a principled framework to estimate annotation-stratified genetic covariance between traits using GWAS summary statistics. Through theoretical and numerical analyses, we demonstrate that our method provides accurate covariance estimates, thereby enabling researchers to dissect both the shared and distinct genetic architecture across traits to better understand their etiologies. Among 50 complex traits with publicly accessible GWAS summary statistics (N total ≈ 4.5 million), we identified more than 170 pairs with statistically significant genetic covariance. In particular, we found strong genetic covariance between late-onset Alzheimer disease (LOAD) and amyotrophic lateral sclerosis (ALS), two major neurodegenerative diseases, in single-nucleotide polymorphisms (SNPs) with high minor allele frequencies and in SNPs located in the predicted functional genome. Joint analysis of LOAD, ALS, and other traits highlights LOAD's correlation with cognitive traits and hints at an autoimmune component for ALS. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Feller, David; Peterson, Kirk A
2013-08-28
The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies <0.5 E(h)) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
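For readers unfamiliar with the extrapolation step, the workhorse two-point formula assumes the correlation energy converges as the inverse cube of the basis-set cardinal number n; the paper itself uses a Schwenke-style parameterized variant, so the sketch below shows the generic form, not their exact expression, and the input energies are illustrative.

    def cbs_two_point(e_small, e_large, n_small, n_large):
        """Two-point 1/n**3 extrapolation of correlation energies from
        basis sets with cardinal numbers n_small < n_large (e.g. 3, 4)."""
        w_small, w_large = n_small ** 3, n_large ** 3
        return (e_large * w_large - e_small * w_small) / (w_large - w_small)

    # e.g. illustrative aug-cc-pVTZ (n=3) and aug-cc-pVQZ (n=4) correlation energies (Eh)
    print(cbs_two_point(-0.30210, -0.31145, 3, 4))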
Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking
ERIC Educational Resources Information Center
Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas
2012-01-01
This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, M. T.; Corradini, M.; Rempe, J.
2016-11-02
The U.S. Department of Energy (DOE) has played a major role in the U.S. response to the events at Fukushima Daiichi. During the first several weeks following the accident, U.S. assistance efforts were guided by results from a significant and diverse set of analyses. In the months that followed, a coordinated analysis activity aimed at gaining a more thorough understanding of the accident sequence was completed using laboratory-developed, system-level best-estimate accident analysis codes, while a parallel analysis was conducted by U.S. industry. A comparison of predictions for Unit 1 from these two studies indicated significant differences between MAAP and MELCOR results for key plant parameters, such as in-core hydrogen production. On that basis, a crosswalk was completed to determine the key modeling variations that led to these differences. In parallel with these activities, it became clear that there was a need to perform a technology gap evaluation on accident-tolerant components and severe accident analysis methodologies with the goal of identifying any data and/or knowledge gaps that may exist given the current state of light water reactor (LWR) severe accident research and augmented by insights from Fukushima. In addition, there is growing international recognition that data from Fukushima could significantly reduce uncertainties related to severe accident progression, particularly for boiling water reactors. On these bases, a group of U.S. experts in LWR safety and plant operations was convened by the DOE Office of Nuclear Energy (DOE-NE) to complete technology gap analysis and Fukushima forensics data needs identification activities. The results from these activities were used as the basis for refining DOE-NE's severe accident research and development (R&D) plan. Finally, this paper provides a high-level review of DOE-sponsored R&D efforts in these areas, including planned activities on accident-tolerant components and accident analysis methods.
Computational Challenges in Processing the Q1-Q16 Kepler Data Set
NASA Astrophysics Data System (ADS)
Klaus, Todd C.; Henze, C.; Twicken, J. D.; Hall, J.; McCauliff, S. D.; Girouard, F.; Cote, M.; Morris, R. L.; Clarke, B.; Jenkins, J. M.; Caldwell, D.; Kepler Science Operations Center
2013-10-01
Since launch on March 6th, 2009, NASA's Kepler Space Telescope has collected 48 months of data on over 195,000 targets. The raw data are rife with instrumental and astrophysical noise that must be removed in order to detect and model the transit-like signals present in the data. Calibrating the raw pixels, generating and correcting the flux light curves, and detecting and characterizing the signals require significant computational power. In addition, the algorithms that make up the Kepler Science Pipeline and their parameters are still undergoing changes (most of which increase the computational cost), creating the need to reprocess the entire data set on a regular basis. We discuss how we have ported all of the core elements of the pipeline to the Pleiades cluster at the NASA Advanced Supercomputing (NAS) Division, the needs driving the port, and the technical challenges we faced. In 2011 we ported the Transiting Planet Search (TPS) and Data Validation (DV) modules to Pleiades. These pipeline modules operate on the full data set, and their computational complexity increases roughly as the square of the number of data points. At the time of the port it had become infeasible to run these modules on our local hardware, necessitating the move to Pleiades. In 2012 and 2013 we turned our attention to the front end of the pipeline: Pixel-level Calibration (CAL), Photometric Analysis (PA), and Pre-Search Data Conditioning (PDC). Porting these modules to Pleiades will allow us to reprocess the complete data set on a more frequent basis. The last time we reprocessed all data for the front end we only had 24 months of data. We estimate that the full 48-month data set would take over 200 days to complete on local hardware. When the port is complete we expect to reprocess this data set on Pleiades in about a month. The NASA Science Mission Directorate provided funding for the Kepler Mission.
Helleringer, Stephane; Arhinful, Daniel; Abuaku, Benjamin; Humes, Michael; Wilson, Emily; Marsh, Andrew; Clermont, Adrienne; Black, Robert E; Bryce, Jennifer; Amouzou, Agbessi
2018-01-01
Reducing neonatal and child mortality is a key component of the health-related sustainable development goal (SDG), but most low and middle income countries lack data to monitor child mortality on an annual basis. We tested a mortality monitoring system based on the continuous recording of pregnancies, births and deaths by trained community-based volunteers (CBV). This project was implemented in 96 clusters located in three districts of the Northern Region of Ghana. Community-based volunteers (CBVs) were selected from these clusters and were trained in recording all pregnancies, births, and deaths among children under 5 in their catchment areas. Data collection lasted from January 2012 through September 2013. All CBVs transmitted tallies of recorded births and deaths to the Ghana Births and Deaths Registry each month, except in one of the study districts (approximately 80% reporting). Some events were reported only several months after they had occurred. We assessed the completeness and accuracy of CBV data by comparing them to retrospective full pregnancy histories (FPH) collected during a census of the same clusters conducted in October-December 2013. We conducted all analyses separately by district, as well as for the combined sample of all districts. During the 21-month implementation period, the CBVs reported a total of 2,819 births and 137 under-five deaths. Among the latter, there were 84 infant deaths (55 neonatal deaths and 29 post-neonatal deaths). Comparison of the CBV data with FPH data suggested that CBVs significantly under-estimated child mortality: the estimated under-5 mortality rate according to CBV data was only 2/3 of the rate estimated from FPH data (95% Confidence Interval for the ratio of the two rates = 51.7 to 81.4). The discrepancies between the CBV and FPH estimates of infant and neonatal mortality were more limited, but varied significantly across districts. In northern Ghana, a community-based data collection system relying on volunteers did not yield accurate estimates of child mortality rates. Additional implementation research is needed to improve the timeliness, completeness and accuracy of such systems. Enhancing pregnancy monitoring, in particular, may be an essential step to improve the measurement of neonatal mortality.
Melching, Charles S.; Meno, Michael W.
1998-01-01
As part of the World Meteorological Organization (WMO) project Intercomparison of Principal Hydrometric Instruments, Third Phase, a questionnaire was prepared by the U.S. Geological Survey (USGS) on the application of Ultrasonic Velocity Meters (UVMs) for flow measurement in streams, canals, and estuaries. In 1996, this questionnaire was distributed internationally by the WMO and USGS, and distributed within the United States by the USGS. Completed questionnaires were returned by 26 agencies in 7 countries (Canada, France, Germany, The Netherlands, Switzerland, the United Kingdom, and the United States). The completed questionnaires described geometric and streamflow conditions, system configurations, and reasons for applying UVM systems for 260 sites, thus providing information on the applicability of UVM systems throughout the world. The completed questionnaires also provided information on operational issues such as (1) methods used to determine and verify UVM ratings, (2) methods used to determine the mean flow velocity for UVM systems, (3) operational reliability of UVM systems, (4) methods to estimate missing data, (5) common problems with UVM systems and guidelines to mitigate these problems, and (6) personnel training issues. The completed questionnaires also described a few unique or novel applications of UVM systems. In addition to summarizing the completed questionnaires, this report includes a brief overview of UVM application and operation, and a short summary of current (1998) information from UVM system manufacturers regarding system cost and capabilities. On the basis of the information from the completed questionnaires and provided by the manufacturers, the general applicability of UVM systems is discussed. In the finalisation of this report the financial support provided by the US National Committee for Scientific Hydrology is gratefully acknowledged.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Kensaku, E-mail: moriken@md.tsukuba.ac.jp; Saida, Tsukasa; Shibuya, Yoko
Purpose: To compare the status of uterine and ovarian arteries after uterine artery embolization (UAE) in patients with incomplete and complete fibroid infarction via unenhanced 3D time-of-flight magnetic resonance (MR) angiography. Materials and Methods: Thirty-five consecutive women (mean age 43 years; range 26-52 years) with symptomatic uterine fibroids underwent UAE and MR imaging before and within 2 months after UAE. The patients were divided into incomplete and complete fibroid infarction groups on the basis of the postprocedural gadolinium-enhanced MR imaging findings. Two independent observers reviewed unenhanced MR angiography before and after UAE to determine bilateral uterine and ovarian arterial flow scores. The total arterial flow scores were calculated by summing the scores of the 4 arteries. All scores were compared with the Mann-Whitney test. Results: Fourteen and 21 patients were assigned to the incomplete and complete fibroid infarction groups, respectively. The total arterial flow score in the incomplete fibroid infarction group was significantly greater than that in the complete fibroid infarction group (P = 0.019 and P = 0.038 for observers 1 and 2, respectively). In 3 patients, additional therapy was recommended for insufficient fibroid infarction. In 1 of the 3 patients, bilateral ovarian arteries were invisible before UAE but seemed enlarged after UAE. Conclusion: The total arterial flow from bilateral uterine and ovarian arteries in patients with incomplete fibroid infarction is less well reduced than in those with complete fibroid infarction. Postprocedural MR angiography provides useful information to estimate the cause of insufficient fibroid infarction in individual cases.
Synthetic Aperture Sonar Processing with MMSE Estimation of Image Sample Values
2016-12-01
Minimum mean-square error (MMSE) estimation is applied to target imaging with synthetic aperture sonar. Although the multi-aspect basis functions used to model the object echo are non-orthogonal, they can still be used in an MMSE estimator that models the echo as a weighted sum of the multi-aspect basis functions.
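The estimator named above has a standard closed form: with a Gaussian prior on the weights and white observation noise, the linear MMSE solution is a regularized least-squares solve, which remains well posed even when the basis columns are strongly correlated. A generic sketch with toy dimensions, not the report's sonar model:

    import numpy as np

    rng = np.random.default_rng(2)
    m, k = 64, 12                        # echo samples, multi-aspect basis functions
    A = rng.normal(size=(m, k))          # non-orthogonal basis as columns
    A[:, 1] = 0.9 * A[:, 0] + 0.1 * A[:, 1]   # make two columns nearly collinear

    w_true = rng.normal(size=k)
    sigma2, tau2 = 0.1, 1.0              # noise variance, prior weight variance
    y = A @ w_true + np.sqrt(sigma2) * rng.normal(size=m)

    # Linear MMSE: w_hat = (A^T A + (sigma2/tau2) I)^-1 A^T y
    w_hat = np.linalg.solve(A.T @ A + (sigma2 / tau2) * np.eye(k), A.T @ y)
    print(np.round(w_hat - w_true, 2))   # estimation error per weight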
Growth and Yield Estimation for Loblolly Pine in the West Gulf
Paul A. Murphy; Herbert S. Sternitzke
1979-01-01
An equation system is developed to estimate current yield, projected basal area, and projected volume for merchantable natural stands on a per-acre basis. These estimates indicate yields that can be expected from woods-run conditions.
Involution and Difference Schemes for the Navier-Stokes Equations
NASA Astrophysics Data System (ADS)
Gerdt, Vladimir P.; Blinkov, Yuri A.
In the present paper we consider the Navier-Stokes equations for two-dimensional viscous incompressible fluid flows and apply to these equations our earlier designed general algorithmic approach to the generation of finite-difference schemes. In doing so, we first complete the Navier-Stokes equations to involution by computing their Janet basis and discretize this basis by converting it into the integral conservation law form. Then we again complete the obtained difference system to involution, eliminating the partial derivatives and extracting the minimal Gröbner basis from the Janet basis. The elements in the obtained difference Gröbner basis that do not contain partial derivatives of the dependent variables compose a conservative difference scheme. By exploiting arbitrariness in the numerical integration approximation we derive two finite-difference schemes that are similar to the classical scheme by Harlow and Welch. Each of the two schemes is characterized by a 5×5 stencil on an orthogonal and uniform grid. We also demonstrate how an inconsistent difference scheme with a 3×3 stencil is generated by an inappropriate numerical approximation of the underlying integrals.
Thompson, R.S.; Anderson, K.H.; Bartlein, P.J.
2008-01-01
The method of modern analogs is widely used to obtain estimates of past climatic conditions from paleobiological assemblages, but despite its frequent use, this method involves so-far untested assumptions. We applied four analog approaches to a continental-scale set of bioclimatic and plant-distribution presence/absence data for North America to assess how well this method works under near-optimal modern conditions. For each point on the grid, we calculated the similarity between its vegetation assemblage and those of all other points on the grid (excluding nearby points). The climate of the points with the most similar vegetation was used to estimate the climate at the target grid point. Estimates based on the use of the Jaccard similarity coefficient had smaller errors than those based on the use of a new similarity coefficient, although the latter may be more robust because it does not assume that the "fossil" assemblage is complete. The results of these analyses indicate that presence/absence vegetation assemblages provide a valid basis for estimating bioclimates on the continental scale. However, the accuracy of the estimates is strongly tied to the number of species in the target assemblage, and the analog method is necessarily constrained to produce estimates that fall within the range of observed values. We applied the four modern analog approaches and the mutual overlap (or "mutual climatic range") method to estimate bioclimatic conditions represented by the plant macrofossil assemblage from a packrat midden of Last Glacial Maximum age from southern Nevada. In general, the estimation approaches produced similar results in regard to moisture conditions, but there was a greater range of estimates for growing-degree days. Despite its limitations, the modern analog technique can provide paleoclimatic reconstructions that serve as the starting point for the interpretation of past climatic conditions.
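The analog step described above reduces to a similarity search; a minimal sketch with invented presence/absence data, scoring every grid cell against the target assemblage with the Jaccard coefficient and averaging the climate of the best analogs:

    import numpy as np

    rng = np.random.default_rng(3)
    n_cells, n_taxa = 500, 60
    veg = rng.random((n_cells, n_taxa)) < 0.2     # presence/absence grid
    climate = rng.normal(20, 5, size=n_cells)     # invented bioclimate variable

    target = veg[0]                               # "fossil" assemblage
    inter = (veg[1:] & target).sum(axis=1)        # all cells except the target itself
    union = (veg[1:] | target).sum(axis=1)
    jaccard = inter / np.maximum(union, 1)

    best = np.argsort(jaccard)[-10:]              # ten closest modern analogs
    print(climate[1:][best].mean(), climate[0])   # analog estimate vs "true" value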
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
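A sketch of approach (ii) for blood alcohol: the laboratory's relative uncertainty is taken from its deviations, across many proficiency rounds, from the participant-mean quantitations. All numbers are invented.

    import numpy as np

    # (laboratory result, all-participant mean) per proficiency test, g/100 mL
    rounds = np.array([
        (0.081, 0.080), (0.152, 0.149), (0.103, 0.101),
        (0.249, 0.252), (0.119, 0.120), (0.178, 0.174),
    ])
    rel_diff = (rounds[:, 0] - rounds[:, 1]) / rounds[:, 1]
    u = rel_diff.std(ddof=1)                # relative standard uncertainty
    print(f"expanded uncertainty (k=2): +/- {2 * u:.1%}")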
Brambilla, Donald J; O'Donnell, Amy B; Matsumoto, Alvin M; McKinlay, John B
2007-12-01
Estimates of intraindividual variation in hormone levels provide the basis for interpreting hormone measurements clinically and for developing eligibility criteria for trials of hormone replacement therapy. However, reliable systematic estimates of such variation are lacking. To estimate intraindividual variation of serum total, free and bioavailable testosterone (T), dihydrotestosterone (DHT), SHBG, LH, dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulphate (DHEAS), oestrone, oestradiol and cortisol, and the contributions of biological and assay variation to the total. Paired blood samples were obtained 1-3 days apart at entry and again 3 months and 6 months later (maximum six samples per subject). Each sample consisted of a pool of equal aliquots of two blood draws 20 min apart. Men aged 30-79 years were randomly selected from the respondents to the Boston Area Community Health Survey, a study of the health of the general population of Boston, MA, USA. Analysis was based on 132 men, including 121 who completed all six visits, 8 who completed the first two visits and 3 who completed the first four visits. Day-to-day and 3-month (long-term) intraindividual standard deviations, after transforming measurements to logarithms to eliminate the contribution of hormone level to intraindividual variation. Biological variation generally accounted for more of total intraindividual variation than did assay variation. Day-to-day biological variation accounted for more of the total than did long-term biological variation. Short-term variability was greater in hormones with pulsatile secretion (e.g. LH) than those that exhibit less ultradian variation. Depending on the hormone, the intraindividual standard deviations imply that a clinician can expect to see a difference exceeding 18-28% about half the time when two measurements are made on a subject. The difference will exceed 27-54% about a quarter of the time. Given the level of intraindividual variability in hormone levels found in this study, one sample is generally not sufficient to characterize an individual's hormone levels but collecting more than three is probably not warranted. This is true for clinical measurements and for hormone measurements used to determine eligibility for a clinical trial of hormone replacement therapy.
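The "difference exceeding 18-28% about half the time" statement follows directly from the log-scale intraindividual standard deviation; a quick simulation of paired measurements makes the arithmetic concrete (the sigma value is illustrative, not one of the study's estimates):

    import numpy as np

    rng = np.random.default_rng(4)
    sigma = 0.15                          # illustrative within-person SD of log(hormone)
    a = rng.normal(0.0, sigma, 100_000)   # first log-scale measurement per subject
    b = rng.normal(0.0, sigma, 100_000)   # second measurement
    pct = np.abs(np.exp(a - b) - 1.0)     # |percent difference| within the pair
    print(np.percentile(pct, [50, 75]))   # compare with the quoted 18-28% and 27-54% ranges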
A partial list of southern clusters of galaxies
NASA Technical Reports Server (NTRS)
Quintana, H.; White, R. A.
1990-01-01
An inspection of 34 SRC/ESO J southern sky fields is the basis of the present list of clusters of galaxies and their approximate classifications in terms of cluster concentration, defined independently of richness and shape-symmetry. Where possible, an estimate of the cluster morphological population is provided. The Bautz-Morgan classification was applied using a strict comparison with clusters on the Palomar Sky Survey. Magnitudes were estimated on the basis of galaxies with photoelectric or photographic magnitudes.
NASA Astrophysics Data System (ADS)
Bacskay, George B.
2015-07-01
The equilibrium energies of the iodocarbenes CXI (X = Br, Cl, F) in their X̃(1A'), ã(3A'') and Ã(1A'') states and their atomisation and dissociation energies in the complete basis limit were determined by extrapolating valence correlated (R/U)CCSD(T) and Davidson corrected multi-reference configuration interaction (MRCI) energies calculated with the aug-cc-pVxZ (x = T, Q, 5) basis sets and the ECP28MDF pseudopotential of iodine plus corrections for core and core-valence correlation, scalar relativity, spin-orbit coupling and zero-point energies. Spin-orbit energies were computed in a large basis of configurations chosen so as to accurately describe dissociation to the 3P and 2P states of C and of the halogens X and I, respectively. The computed singlet-triplet splittings are 13.6, 14.4 and 27.3 kcal mol-1 for X = Br, Cl and F, respectively. The enthalpies of formation at 0 K are predicted to be 97.4, 82.6 and 38.1 kcal mol-1 with estimated errors of ±1.0 kcal mol-1. The Ã ← X̃ excitation energies (T00) in CBrI and CClI are calculated to be 41.1 and 41.7 kcal mol-1, respectively. The Renner-Teller intersections in both molecules are predicted to be substantially higher than the dissociation barriers on the Ã surfaces. By contrast, in CFI the Ã state is found to be unbound with respect to dissociation.
Optimization of replacement and inspection decisions for multiple components on a power system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauney, D.A.
1994-12-31
The use of optimization on the rescheduling of replacement dates provided a very proactive approach to deciding when components on individual units need to be addressed with a run/repair/replace decision. Including the effects of the time value of money and taxes and unit need inside the spreadsheet model allowed the decision maker to concentrate on the effects of engineering input and replacement date decisions on the final net present value (NPV). The personal computer (PC)-based model was applied to a group of 140 forced-outage-critical fossil plant tube components across a power system. The estimated resulting NPV of the optimization was in the tens of millions of dollars. This PC spreadsheet model allows the interaction of inputs from structural reliability risk assessment models, plant foreman interviews, and actual failure history on a by-component, by-unit basis across a complete power production system. This model includes not only the forced outage performance of these components caused by tube failures but, in addition, the forecasted need of the individual units on the power system and the expected cost of their replacement power if forced off line. The use of cash flow analysis techniques in the spreadsheet model results in the calculation of an NPV for a whole combination of replacement dates. This allows rapid assessments of "what if" scenarios of major maintenance projects on a systemwide basis and not just on a unit-by-unit basis.
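A toy of the spreadsheet logic (all figures invented): for each candidate replacement year, discount the expected forced-outage costs incurred while the component stays in service and the capital outlay at replacement, then pick the year with the best NPV.

    import numpy as np

    years = np.arange(1, 16)              # candidate replacement years
    rate = 0.08                           # after-tax discount rate
    capital = 2.0e6                       # replacement cost ($)
    fail_p = 0.02 * 1.35 ** years         # rising annual forced-outage probability
    outage_cost = 4.0e6                   # replacement power plus repair per event

    def npv(replace_year):
        cost = 0.0
        for y in years[years <= replace_year]:
            if y < replace_year:          # risk exposure while still in service
                cost += fail_p[y - 1] * outage_cost / (1 + rate) ** y
            else:                         # planned replacement outlay
                cost += capital / (1 + rate) ** y
        return -cost

    best = max(years, key=npv)
    print(best, npv(best))                # most favorable replacement year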
Valuation of pollinator forage services provided by Eucalyptus cladocalyx.
de Lange, Willem J; Veldtman, Ruan; Allsopp, Mike H
2013-08-15
We assess the monetary value of forage provisioning services for honeybees as provided by an alien tree species in the Western Cape province of South Africa. Although Eucalyptus cladocalyx is not an officially declared invader, it is cleared on a regular basis along with other invasive Eucalyptus species such as Eucalyptus camaldulensis and Eucalyptus conferruminata (which have been prioritised for eradication in South Africa). We present some of the trade-offs associated with the clearing of E. cladocalyx by means of a practical example that illustrates a situation where the benefits of the species to certain stakeholders could support the containment of the species in demarcated areas, while allowing clearing outside such areas. Given the absence of market prices for such forage provisioning services, the replacement cost is used to represent the value of the loss in forage provided by E. cladocalyx if the alien tree species is cleared along with invasive alien tree species. Two replacement scenarios formed the basis for our calculations. The first scenario was an artificial diet as replacement for the forage provisioning service, which yielded a direct cost estimate of US$7.5 m per year. The second was based on a Fynbos cultivation/restoration initiative aimed at substituting the forage provisioning service of E. cladocalyx, which yielded a direct cost of US$20.2 m per year. These figures provide estimates of the potential additional cost burden on the beekeeping industry if E. cladocalyx is completely eradicated from the Western Cape. The cost estimates should be balanced against the negative impacts of E. cladocalyx on ecosystem services in order to make an informed decision with regard to appropriate management strategies for this species. The findings therefore serve as useful inputs to balance trade-offs for alien species that are considered beneficial to some, but harmful to others. Copyright © 2013 Elsevier Ltd. All rights reserved.
Audio-visual speech cue combination.
Arnold, Derek H; Tear, Morgan; Schindel, Ryan; Roseboom, Warrick
2010-04-16
Different sources of sensory information can interact, often shaping what we think we have seen or heard. This can enhance the precision of perceptual decisions relative to those made on the basis of a single source of information. From a computational perspective, there are multiple reasons why this might happen, and each predicts a different degree of enhanced precision. Relatively slight improvements can arise when perceptual decisions are made on the basis of multiple independent sensory estimates, as opposed to just one. These improvements can arise as a consequence of probability summation. Greater improvements can occur if two initially independent estimates are summated to form a single integrated code, especially if the summation is weighted in accordance with the variance associated with each independent estimate. This form of combination is often described as a Bayesian maximum likelihood estimate. Still greater improvements are possible if the two sources of information are encoded via a common physiological process. Here we show that the provision of simultaneous audio and visual speech cues can result in substantial sensitivity improvements, relative to single sensory modality based decisions. The magnitude of the improvements is greater than can be predicted on the basis of either a Bayesian maximum likelihood estimate or a probability summation. Our data suggest that primary estimates of speech content are determined by a physiological process that takes input from both visual and auditory processing, resulting in greater sensitivity than would be possible if initially independent audio and visual estimates were formed and then subsequently combined.
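The reliability-weighted summation mentioned above has a simple closed form: the combined variance is 1/(1/var_a + 1/var_v), so sensitivity can rise at most by a bounded factor over the best single cue. A sketch with invented variances; gains beyond this bound are what the authors take as evidence for a common encoding stage:

    import numpy as np

    var_a, var_v = 4.0, 9.0                        # auditory, visual noise variances
    d_single = 1.0 / np.sqrt(min(var_a, var_v))    # best single-cue sensitivity (arbitrary units)

    var_mle = 1.0 / (1.0 / var_a + 1.0 / var_v)    # inverse-variance weighting
    d_mle = 1.0 / np.sqrt(var_mle)                 # Bayesian MLE prediction
    print(d_single, d_mle)                         # observed gains above d_mle exceed the model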
[Violent computer games: distribution via and discussion on the Internet].
Nagenborg, Michael
2005-11-01
The spread and use of computer games including (interactive) depictions of violence are considered a moral problem, particularly when they are played by children and youths. This essay responds to H. Volper's (2004) demand that media ethics condemn certain contents. At the same time, an overview of the spread and use of "violent games" by children and youths is offered. In fact, the share of these titles in the complete range of games should not be overestimated, although certain titles are extremely widespread. Finally, Fritz and Fehr's (2004) thesis of the cultural conflict over computer games is discussed, illustrated by the example of the discussion on the Internet, and on the basis of this thesis a mediating position between the two cultures, including audience ethics (Funiok 1999), is presented.
Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E.; Harper, Martin
2015-01-01
This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. Prior distribution using expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. Applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods could not be completely independent of expert judgments, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering determinants of worker exposure. PMID:23252451
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delatorre, Plínio; Nascimento, Kyria Santiago
2006-02-01
D. rostrata lectin was crystallized by hanging-drop vapor diffusion. The crystal belongs to the orthorhombic space group I222 and diffracted to 1.87 Å resolution. Lectins from the Diocleinae subtribe (Leguminosae) are highly similar proteins that promote various biological activities with distinctly differing potencies. The structural basis for this experimental data is not yet fully understood. Dioclea rostrata lectin was purified and crystallized by hanging-drop vapour diffusion at 293 K. The crystal belongs to the orthorhombic space group I222, with unit-cell parameters a = 61.51, b = 88.22, c = 87.76 Å. Assuming the presence of one monomer per asymmetric unit, the solvent content was estimated to be about 47.9%. A complete data set was collected at 1.87 Å resolution.
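The quoted solvent content is a Matthews-coefficient calculation; a sketch assuming a monomer mass of about 25.5 kDa (a typical Diocleinae lectin value assumed here, not stated in the abstract):

    # Matthews coefficient for space group I222 (8 symmetry copies per cell)
    a, b, c = 61.51, 88.22, 87.76        # unit-cell edges, Angstroms (orthorhombic)
    z_asym, mw = 8, 25_500               # monomers per cell, assumed mass (Da)

    v_cell = a * b * c                   # orthorhombic cell volume, A^3
    vm = v_cell / (z_asym * mw)          # Matthews coefficient, A^3/Da
    solvent = 1.0 - 1.23 / vm            # standard solvent-fraction estimate
    print(f"Vm = {vm:.2f} A^3/Da, solvent ~ {solvent:.1%}")   # ~47%, near the reported 47.9%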
NASA Astrophysics Data System (ADS)
Filipenkov, V. V.; Rupeks, L. E.; Vitins, V. M.; Knets, I. V.; Kasyanov, V. A.
2017-07-01
New biocomposites and cattle bone tissue were investigated. The composites were made from an endodontic cement (EC) and natural hydroxyapatite (NHAp). Infrared spectroscopy showed that protein was removed practically completely from the heat-treated specimens of bone tissue. The structure of bone tissue before and after deproteinization and the structure of the composite materials based on NHAp and EC (with different percentages) were investigated by optical microscopy. The characteristics of mechanical properties (the initial elastic modulus, breaking tensile and compressive stresses, and breaking strain) and the density and porosity of these materials were determined. The new composite materials were implanted in live rat tissue, and biocompatibility between the live tissue and the new biocomposites was assessed.
Global surface displacement data for assessing variability of displacement at a point on a fault
Hecker, Suzanne; Sickler, Robert; Feigelson, Leah; Abrahamson, Norman; Hassett, Will; Rosa, Carla; Sanquini, Ann
2014-01-01
This report presents a global dataset of site-specific surface-displacement data on faults. We have compiled estimates of successive displacements attributed to individual earthquakes, mainly paleoearthquakes, at sites where two or more events have been documented, as a basis for analyzing inter-event variability in surface displacement on continental faults. An earlier version of this composite dataset was used in a recent study relating the variability of surface displacement at a point to the magnitude-frequency distribution of earthquakes on faults, and to hazard from fault rupture (Hecker and others, 2013). The purpose of this follow-on report is to provide potential data users with an updated comprehensive dataset, largely complete through 2010 for studies in English-language publications, as well as in some unpublished reports and abstract volumes.
Correlation consistent basis sets for lanthanides: The atoms La–Lu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits is observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (−36.6) for GdF, and 447.1 (−295.2) for GdF3.
Structure and binding energy of the H2S dimer at the CCSD(T) complete basis set limit.
Lemke, Kono H
2017-06-21
This study presents results for the binding energy and geometry of the H2S dimer which have been computed using Møller-Plesset perturbation theory (MP2, MP4) and coupled cluster (CCSD, CCSD(T)) calculations with basis sets up to aug-cc-pV5Z. Estimates of De, EZPE, Do, and dimer geometry have been obtained at each level of theory by taking advantage of the systematic convergence behavior toward the complete basis set (CBS) limit. The CBS limit binding energy values of De are 1.91 (MP2), 1.75 (MP4), 1.41 (CCSD), and 1.69 kcal/mol (CCSD(T)). The most accurate values for the equilibrium S-S distance rSS (without counterpoise correction) are 4.080 (MP2/aug-cc-pV5Z), 4.131 (MP4/aug-cc-pVQZ), 4.225 (CCSD/aug-cc-pVQZ), and 4.146 Å (CCSD(T)/aug-cc-pVQZ). This study also evaluates the effect of counterpoise correction on the H2S dimer geometry and binding energy. As regards the structure of (H2S)2, MPn, CCSD, and CCSD(T) level values of rSS, obtained by performing geometry optimizations on the counterpoise-corrected potential energy surface, converge systematically to CBS limit values of 4.099 (MP2), 4.146 (MP4), 4.233 (CCSD), and 4.167 Å (CCSD(T)). The corresponding CBS limit values of the equilibrium binding energy De are 1.88 (MP2), 1.76 (MP4), 1.41 (CCSD), and 1.69 kcal/mol (CCSD(T)), the latter in excellent agreement with the measured binding energy value of 1.68 ± 0.02 kcal/mol reported by Ciaffoni et al. [Appl. Phys. B 92, 627 (2008)]. Combining CBS electronic binding energies De with EZPE predicted by CCSD(T) vibrational second-order perturbation theory calculations yields Do = 1.08 kcal/mol, which is around 0.6 kcal/mol smaller than the measured value of 1.7 ± 0.3 kcal/mol. Overall, the results presented here demonstrate that the application of high level calculations, in particular CCSD(T), in combination with augmented correlation consistent basis sets provides valuable insight into the structure and energetics of the hydrogen sulfide dimer.
Diminishing Adult Egocentrism when Estimating What Others Know
ERIC Educational Resources Information Center
Thomas, Ruthann C.; Jacoby, Larry L.
2013-01-01
People often use what they know as a basis to estimate what others know. This egocentrism can bias their estimates of others' knowledge. In 2 experiments, we examined whether people can diminish egocentrism when predicting for others. Participants answered general knowledge questions and then estimated how many of their peers would know the…
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.; Aster, R. W.; Firnett, P. J.; Miller, M. A.
1985-01-01
The Improved Price Estimation Guidelines (IPEG4) program provides a comparatively simple, yet relatively accurate, estimate of the price of a manufactured product. IPEG4 processes user-supplied input data to determine an estimate of the price per unit of production. Input data include equipment cost, space required, labor cost, materials and supplies cost, utility expenses, and production volume on an industry-wide or process-wide basis.
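A minimal sketch of that input-to-price flow; every rate constant below is an illustrative placeholder, not one of IPEG4's calibrated coefficients:

    def price_per_unit(equipment, space_m2, labor, materials, utilities, volume,
                       cap_recovery=0.2, space_rate=300.0, overhead=1.4):
        """Crude annualized-cost price estimate per unit of production.
        The default rates are illustrative placeholders only."""
        annual_cost = (equipment * cap_recovery        # capital recovery
                       + space_m2 * space_rate         # facility cost
                       + (labor + materials + utilities) * overhead)
        return annual_cost / volume

    print(price_per_unit(5e6, 2000, 8e5, 1.2e6, 2e5, 1_000_000))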
Point Set Denoising Using Bootstrap-Based Radial Basis Function.
Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad
2016-01-01
This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.
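A compact sketch of the smoothing step using SciPy's thin-plate-spline RBF interpolator, with the smoothing parameter chosen by a simple resampled test-error criterion in the spirit of the bootstrap selection described above; this is an illustration on synthetic points, not the authors' implementation (which also uses a k-nearest-neighbour search):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(5)
    xy = rng.uniform(-1, 1, size=(400, 2))            # scanned point positions
    z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1]) + 0.1 * rng.normal(size=400)

    def resampled_error(smooth, trials=20):
        err = 0.0
        for _ in range(trials):
            idx = np.unique(rng.choice(400, 400, replace=True))   # bootstrap sample
            oob = np.setdiff1d(np.arange(400), idx)               # held-out points
            f = RBFInterpolator(xy[idx], z[idx],
                                kernel="thin_plate_spline", smoothing=smooth)
            err += np.mean((f(xy[oob]) - z[oob]) ** 2)
        return err / trials

    smoothings = [1e-3, 1e-2, 1e-1, 1.0]
    best = min(smoothings, key=resampled_error)
    z_denoised = RBFInterpolator(xy, z, kernel="thin_plate_spline",
                                 smoothing=best)(xy)   # project onto smoothed surface
    print(best)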
ERIC Educational Resources Information Center
Fellini, Laetitia; Florian, Cedrick; Courtey, Julie; Roullet, Pascal
2009-01-01
Pattern completion is the ability to retrieve complete information on the basis of incomplete retrieval cues. Although it has been demonstrated that this cognitive capacity depends on the NMDA receptors (NMDA-Rs) of the hippocampal CA3 region, the role played by these glutamatergic receptors in the pattern completion process has not yet been…
Wiggins, Lisa; Christensen, Deborah L.; Maenner, Matthew J; Daniels, Julie; Warren, Zachary; Kurzius-Spencer, Margaret; Zahorodny, Walter; Robinson Rosenberg, Cordelia; White, Tiffany; Durkin, Maureen S.; Imm, Pamela; Nikolaou, Loizos; Yeargin-Allsopp, Marshalyn; Lee, Li-Ching; Harrington, Rebecca; Lopez, Maya; Fitzgerald, Robert T.; Hewitt, Amy; Pettygrove, Sydney; Constantino, John N.; Vehorn, Alison; Shenouda, Josephine; Hall-Lande, Jennifer; Van Naarden Braun, Kim; Dowling, Nicole F.
2018-01-01
Problem/Condition Autism spectrum disorder (ASD). Period Covered 2014. Description of System The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active surveillance system that provides estimates of the prevalence of autism spectrum disorder (ASD) among children aged 8 years whose parents or guardians reside within 11 ADDM sites in the United States (Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin). ADDM surveillance is conducted in two phases. The first phase involves review and abstraction of comprehensive evaluations that were completed by professional service providers in the community. Staff completing record review and abstraction receive extensive training and supervision and are evaluated according to strict reliability standards to certify effective initial training, identify ongoing training needs, and ensure adherence to the prescribed methodology. Record review and abstraction occurs in a variety of data sources ranging from general pediatric health clinics to specialized programs serving children with developmental disabilities. In addition, most of the ADDM sites also review records for children who have received special education services in public schools. In the second phase of the study, all abstracted information is reviewed systematically by experienced clinicians to determine ASD case status. A child is considered to meet the surveillance case definition for ASD if he or she displays behaviors, as described on one or more comprehensive evaluations completed by community-based professional providers, consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) diagnostic criteria for autistic disorder; pervasive developmental disorder–not otherwise specified (PDD-NOS, including atypical autism); or Asperger disorder. This report provides updated ASD prevalence estimates for children aged 8 years during the 2014 surveillance year, on the basis of DSM-IV-TR criteria, and describes characteristics of the population of children with ASD. In 2013, the American Psychiatric Association published the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), which made considerable changes to ASD diagnostic criteria. The change in ASD diagnostic criteria might influence ADDM ASD prevalence estimates; therefore, most (85%) of the records used to determine prevalence estimates based on DSM-IV-TR criteria underwent additional review under a newly operationalized surveillance case definition for ASD consistent with the DSM-5 diagnostic criteria. Children meeting this new surveillance case definition could qualify on the basis of one or both of the following criteria, as documented in abstracted comprehensive evaluations: 1) behaviors consistent with the DSM-5 diagnostic features; and/or 2) an ASD diagnosis, whether based on DSM-IV-TR or DSM-5 diagnostic criteria. Stratified comparisons of the number of children meeting either of these two case definitions also are reported. Results For 2014, the overall prevalence of ASD among the 11 ADDM sites was 16.8 per 1,000 (one in 59) children aged 8 years. Overall ASD prevalence estimates varied among sites, from 13.1–29.3 per 1,000 children aged 8 years. ASD prevalence estimates also varied by sex and race/ethnicity. Males were four times more likely than females to be identified with ASD. 
Prevalence estimates were higher for non-Hispanic white (henceforth, white) children compared with non-Hispanic black (henceforth, black) children, and both groups were more likely to be identified with ASD compared with Hispanic children. Among the nine sites with sufficient data on intellectual ability, 31% of children with ASD were classified in the range of intellectual disability (intelligence quotient [IQ] <70), 25% were in the borderline range (IQ 71–85), and 44% had IQ scores in the average to above average range (i.e., IQ >85). The distribution of intellectual ability varied by sex and race/ethnicity. Although mention of developmental concerns by age 36 months was documented for 85% of children with ASD, only 42% had a comprehensive evaluation on record by age 36 months. The median age of earliest known ASD diagnosis was 52 months and did not differ significantly by sex or race/ethnicity. For the targeted comparison of DSM-IV-TR and DSM-5 results, the number and characteristics of children meeting the newly operationalized DSM-5 case definition for ASD were similar to those meeting the DSM-IV-TR case definition, with DSM-IV-TR case counts exceeding DSM-5 counts by less than 5% and approximately 86% overlap between the two case definitions (kappa = 0.85). Interpretation Findings from the ADDM Network, on the basis of 2014 data reported from 11 sites, provide updated population-based estimates of the prevalence of ASD among children aged 8 years in multiple communities in the United States. The overall ASD prevalence estimate of 16.8 per 1,000 children aged 8 years in 2014 is higher than previously reported estimates from the ADDM Network. Because the ADDM sites do not provide a representative sample of the entire United States, the combined prevalence estimates presented in this report cannot be generalized to all children aged 8 years in the United States. Consistent with reports from previous ADDM surveillance years, findings from 2014 were marked by variation in ASD prevalence when stratified by geographic area, sex, and level of intellectual ability. Differences in prevalence estimates between black and white children have diminished in most sites, but remained notable for Hispanic children. For 2014, results from application of the DSM-IV-TR and DSM-5 case definitions were similar, overall and when stratified by sex, race/ethnicity, DSM-IV-TR diagnostic subtype, or level of intellectual ability. Public Health Action Beginning with surveillance year 2016, the DSM-5 case definition will serve as the basis for ADDM estimates of ASD prevalence in future surveillance reports. Although the DSM-IV-TR case definition will eventually be phased out, it will be applied in a limited geographic area to offer additional data for comparison. Future analyses will examine trends in the continued use of DSM-IV-TR diagnoses, such as autistic disorder, PDD-NOS, and Asperger disorder in health and education records, documentation of symptoms consistent with DSM-5 terminology, and how these trends might influence estimates of ASD prevalence over time. The latest findings from the ADDM Network provide evidence that the prevalence of ASD is higher than previously reported estimates and continues to vary among certain racial/ethnic groups and communities. 
With prevalence of ASD ranging from 13.1 to 29.3 per 1,000 children aged 8 years in different communities throughout the United States, the need for behavioral, educational, residential, and occupational services remains high, as does the need for increased research on both genetic and nongenetic risk factors for ASD. PMID:29701730
Baio, Jon; Wiggins, Lisa; Christensen, Deborah L; Maenner, Matthew J; Daniels, Julie; Warren, Zachary; Kurzius-Spencer, Margaret; Zahorodny, Walter; Robinson Rosenberg, Cordelia; White, Tiffany; Durkin, Maureen S; Imm, Pamela; Nikolaou, Loizos; Yeargin-Allsopp, Marshalyn; Lee, Li-Ching; Harrington, Rebecca; Lopez, Maya; Fitzgerald, Robert T; Hewitt, Amy; Pettygrove, Sydney; Constantino, John N; Vehorn, Alison; Shenouda, Josephine; Hall-Lande, Jennifer; Van Naarden Braun, Kim; Dowling, Nicole F
2018-04-27
Prevalence of Autism Spectrum Disorder Among Children Aged 8 Years - Autism and Developmental Disabilities Monitoring Network, 11 Sites, United States, 2014. The Autism and Developmental Disabilities Monitoring (ADDM) Network is an active surveillance system that provides estimates of the prevalence of autism spectrum disorder (ASD) among children aged 8 years whose parents or guardians reside within 11 ADDM sites in the United States (Arizona, Arkansas, Colorado, Georgia, Maryland, Minnesota, Missouri, New Jersey, North Carolina, Tennessee, and Wisconsin). ADDM surveillance is conducted in two phases. The first phase involves review and abstraction of comprehensive evaluations that were completed by professional service providers in the community. Staff completing record review and abstraction receive extensive training and supervision and are evaluated according to strict reliability standards to certify effective initial training, identify ongoing training needs, and ensure adherence to the prescribed methodology. Record review and abstraction occur in a variety of data sources ranging from general pediatric health clinics to specialized programs serving children with developmental disabilities. In addition, most of the ADDM sites also review records for children who have received special education services in public schools. In the second phase of the study, all abstracted information is reviewed systematically by experienced clinicians to determine ASD case status. A child is considered to meet the surveillance case definition for ASD if he or she displays behaviors, as described on one or more comprehensive evaluations completed by community-based professional providers, consistent with the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) diagnostic criteria for autistic disorder; pervasive developmental disorder-not otherwise specified (PDD-NOS, including atypical autism); or Asperger disorder. This report provides updated ASD prevalence estimates for children aged 8 years during the 2014 surveillance year, on the basis of DSM-IV-TR criteria, and describes characteristics of the population of children with ASD. In 2013, the American Psychiatric Association published the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), which made considerable changes to ASD diagnostic criteria. The change in ASD diagnostic criteria might influence ADDM ASD prevalence estimates; therefore, most (85%) of the records used to determine prevalence estimates based on DSM-IV-TR criteria underwent additional review under a newly operationalized surveillance case definition for ASD consistent with the DSM-5 diagnostic criteria. Children meeting this new surveillance case definition could qualify on the basis of one or both of the following criteria, as documented in abstracted comprehensive evaluations: 1) behaviors consistent with the DSM-5 diagnostic features; and/or 2) an ASD diagnosis, whether based on DSM-IV-TR or DSM-5 diagnostic criteria. Stratified comparisons of the number of children meeting either of these two case definitions also are reported. For 2014, the overall prevalence of ASD among the 11 ADDM sites was 16.8 per 1,000 (one in 59) children aged 8 years. Overall ASD prevalence estimates varied among sites, from 13.1 to 29.3 per 1,000 children aged 8 years. ASD prevalence estimates also varied by sex and race/ethnicity. Males were four times more likely than females to be identified with ASD. 
Prevalence estimates were higher for non-Hispanic white (henceforth, white) children compared with non-Hispanic black (henceforth, black) children, and both groups were more likely to be identified with ASD compared with Hispanic children. Among the nine sites with sufficient data on intellectual ability, 31% of children with ASD were classified in the range of intellectual disability (intelligence quotient [IQ] <70), 25% were in the borderline range (IQ 71-85), and 44% had IQ scores in the average to above average range (i.e., IQ >85). The distribution of intellectual ability varied by sex and race/ethnicity. Although mention of developmental concerns by age 36 months was documented for 85% of children with ASD, only 42% had a comprehensive evaluation on record by age 36 months. The median age of earliest known ASD diagnosis was 52 months and did not differ significantly by sex or race/ethnicity. For the targeted comparison of DSM-IV-TR and DSM-5 results, the number and characteristics of children meeting the newly operationalized DSM-5 case definition for ASD were similar to those meeting the DSM-IV-TR case definition, with DSM-IV-TR case counts exceeding DSM-5 counts by less than 5% and approximately 86% overlap between the two case definitions (kappa = 0.85). Findings from the ADDM Network, on the basis of 2014 data reported from 11 sites, provide updated population-based estimates of the prevalence of ASD among children aged 8 years in multiple communities in the United States. The overall ASD prevalence estimate of 16.8 per 1,000 children aged 8 years in 2014 is higher than previously reported estimates from the ADDM Network. Because the ADDM sites do not provide a representative sample of the entire United States, the combined prevalence estimates presented in this report cannot be generalized to all children aged 8 years in the United States. Consistent with reports from previous ADDM surveillance years, findings from 2014 were marked by variation in ASD prevalence when stratified by geographic area, sex, and level of intellectual ability. Differences in prevalence estimates between black and white children have diminished in most sites, but remained notable for Hispanic children. For 2014, results from application of the DSM-IV-TR and DSM-5 case definitions were similar, overall and when stratified by sex, race/ethnicity, DSM-IV-TR diagnostic subtype, or level of intellectual ability. Beginning with surveillance year 2016, the DSM-5 case definition will serve as the basis for ADDM estimates of ASD prevalence in future surveillance reports. Although the DSM-IV-TR case definition will eventually be phased out, it will be applied in a limited geographic area to offer additional data for comparison. Future analyses will examine trends in the continued use of DSM-IV-TR diagnoses, such as autistic disorder, PDD-NOS, and Asperger disorder in health and education records, documentation of symptoms consistent with DSM-5 terminology, and how these trends might influence estimates of ASD prevalence over time. The latest findings from the ADDM Network provide evidence that the prevalence of ASD is higher than previously reported estimates and continues to vary among certain racial/ethnic groups and communities. 
With prevalence of ASD ranging from 13.1 to 29.3 per 1,000 children aged 8 years in different communities throughout the United States, the need for behavioral, educational, residential, and occupational services remains high, as does the need for increased research on both genetic and nongenetic risk factors for ASD.
Refusal bias in HIV prevalence estimates from nationally representative seroprevalence surveys.
Reniers, Georges; Eaton, Jeffrey
2009-03-13
To assess the relationship between prior knowledge of one's HIV status and the likelihood of refusing HIV testing in population-based surveys, and to explore its potential for producing bias in HIV prevalence estimates. Using longitudinal survey data from Malawi, we estimate the relationship between prior knowledge of HIV-positive status and subsequent refusal of an HIV test. We use that parameter to develop a heuristic model of refusal bias that is applied to six Demographic and Health Surveys, in which refusal by HIV status is not observed. The model only adjusts for refusal bias conditional on a completed interview. Ecologically, HIV prevalence, prior testing rates, and refusal of HIV testing are highly correlated. Malawian data further suggest that among individuals who know their status, HIV-positive individuals are 4.62 (95% confidence interval, 2.60-8.21) times more likely to refuse testing than HIV-negative ones. On the basis of that parameter and other inputs from the Demographic and Health Surveys, our model predicts downward bias in national HIV prevalence estimates ranging from 1.5% (95% confidence interval, 0.7-2.9) for Senegal to 13.3% (95% confidence interval, 7.2-19.6) for Malawi. In absolute terms, bias in HIV prevalence estimates is negligible for Senegal but 1.6 (95% confidence interval, 0.8-2.3) percentage points for Malawi. Downward bias is more severe in urban populations. Because refusal rates are higher in men, seroprevalence surveys also tend to overestimate the female-to-male ratio of infections. Prior knowledge of HIV status informs decisions to participate in seroprevalence surveys. Informed refusals may produce bias in estimates of HIV prevalence and the sex ratio of infections.
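A minimal sketch of how an informed-refusal odds ratio translates into downward bias in an observed prevalence. This is a simplified two-group model, not the authors' heuristic model; the 4.62 odds ratio is the Malawi estimate quoted above, while the true prevalence, share of HIV-positive people aware of their status, and baseline refusal rate are hypothetical placeholders:

    # Illustrative sketch of refusal bias in a seroprevalence survey.
    def observed_prevalence(true_prev, p_know, refusal_neg, odds_ratio):
        """Prevalence measured among consenting respondents when HIV-positive
        people who know their status refuse testing at higher odds."""
        # Convert the refusal odds ratio into a refusal probability for
        # HIV-positive respondents who know their status.
        odds_pos = odds_ratio * refusal_neg / (1.0 - refusal_neg)
        r_pos_know = odds_pos / (1.0 + odds_pos)
        # HIV-positive people unaware of their status refuse at the baseline rate.
        tested_pos = true_prev * (p_know * (1 - r_pos_know)
                                  + (1 - p_know) * (1 - refusal_neg))
        tested_neg = (1 - true_prev) * (1 - refusal_neg)
        return tested_pos / (tested_pos + tested_neg)

    true_prev = 0.12       # hypothetical true adult prevalence
    obs = observed_prevalence(true_prev, p_know=0.25, refusal_neg=0.10,
                              odds_ratio=4.62)
    print(f"observed {obs:.4f} vs true {true_prev:.4f}, "
          f"relative bias {(true_prev - obs) / true_prev:.1%}")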
NASA Technical Reports Server (NTRS)
Chatterjee, Sharmista; Seagrave, Richard C.
1993-01-01
The objective of this paper is to present an estimate of the second-law thermodynamic efficiency of the various units comprising an Environmental Control and Life Support System (ECLSS). The technique adopted here is based on an evaluation of the 'lost work' within each functional unit of the subsystem. Pertinent information for our analysis is obtained from a user-interactive integrated model of an ECLSS. The model was developed using ASPEN. A potential benefit of this analysis is the identification of subsystems with high entropy generation as the most likely candidates for engineering improvements. This work has been motivated by the fact that the design objective for a long-term mission should be the evaluation of existing ECLSS technologies not only on the basis of the quantity of work needed for or obtained from each subsystem but also on the quality of that work. In a previous study, Brandhorst estimated the power consumption of partially closed and completely closed regenerable life support systems at 3.5 kW per person and 10-12 kW per person, respectively. With the increasing cost and scarcity of energy resources, it is natural to evaluate existing ECLSS technologies on the basis of their energy efficiency. In general, the first-law efficiency of a system is greater than 50 percent, whereas second-law efficiencies reported in the literature are typically only about 10 percent. The second-law efficiency of a system indicates the percentage of the supplied energy that is degraded as irreversibilities within the process, and therefore points to where equipment design has the most room for improvement. From another perspective, our objective is to keep the total entropy production of a life support system as low as possible while still ensuring a positive entropy gradient between the system and the surroundings. The reason is that as the entropy production of the system increases, the entropy gradient between the system and the surroundings decreases, and the system gradually approaches equilibrium with the surroundings until the gradient is zero. At that point no work can be extracted from the system; this is called the 'dead state' of the system.
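A minimal numeric sketch of the 'lost work' bookkeeping (the Gouy-Stodola theorem) and the second-law efficiency it supports. The subsystem names, entropy-generation rates, and work figures below are hypothetical placeholders, not ECLSS model outputs:

    T0 = 293.15  # dead-state (surroundings) temperature, K

    def lost_work(entropy_generation_rate):
        """Gouy-Stodola theorem: lost-work rate = T0 * S_gen (kW if S_gen in kW/K)."""
        return T0 * entropy_generation_rate

    def second_law_efficiency(minimum_work, actual_work):
        """Fraction of the supplied work that is thermodynamically necessary."""
        return minimum_work / actual_work

    # Hypothetical subsystem entropy-generation rates (kW/K):
    subsystems = {"CO2 removal": 0.004, "water recovery": 0.010, "O2 generation": 0.007}
    for name, s_gen in subsystems.items():
        print(f"{name}: lost work = {lost_work(s_gen):.2f} kW")
    print(f"second-law efficiency if 0.5 kW is the minimum for a 5 kW unit: "
          f"{second_law_efficiency(0.5, 5.0):.0%}")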
Phase III : GIS for the Appalachian Development Highway System 2007 cost to complete estimate
DOT National Transportation Integrated Search
2008-02-01
The proposed research will create an ADHS GIS for integrating and disseminating GIS and transportation data that will increase the accuracy and efficiency associated with completing the 2007 ADHS Cost to Complete Estimate. This project will create ap...
Assessing Child Lead Poisoning Case Ascertainment in the US, 1999-2010.
Roberts, Eric M; Madrigal, Daniel; Valle, Jhaqueline; King, Galatea; Kite, Linda
2017-05-01
To compare prevalence estimates for blood lead level ≥10.0 μg/dL (elevated blood lead level [EBLL]) with numbers reported to the Centers for Disease Control and Prevention (CDC) for children 12 months to 5 years of age from 1999 to 2010 on a state-by-state basis. State-specific prevalence estimates were generated from the continuous National Health and Nutrition Examination Survey (NHANES) according to newly available statistical protocols. Counts of case reports were based on the 39 states (including the District of Columbia) reporting to the CDC Childhood Lead Poisoning Prevention Program during the study period. Analyses were conducted both including and excluding states and years of nonreporting to the CDC. Approximately 1.2 million cases of EBLL are believed to have occurred in this period, but only 607,000 (50%) were reported to the CDC. Including only states and years for which reporting was complete, the reporting rate was 64%. Pediatric care providers in 23 of 39 reporting states identified fewer than half of their children with EBLL. Although the greatest numbers of reported cases were from the Northeast and Midwest, the greatest numbers based on prevalence estimates occurred in the South. In southern and western states engaged in reporting, roughly 3 times as many children with EBLL were missed as were diagnosed. Based on the best available estimates, undertesting of blood lead levels by pediatric care providers appears to be endemic in many states. Copyright © 2017 by the American Academy of Pediatrics.
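The headline comparison reduces to a single ratio; a back-of-envelope check using the national totals quoted above:

    # Reporting rate = cases reported to CDC / cases expected from the
    # survey-based prevalence estimates (national totals from the abstract).
    expected_cases = 1_200_000   # NHANES-based estimate of EBLL cases, 1999-2010
    reported_cases = 607_000     # cases reported to CDC
    # Prints 51%; the abstract rounds this to 50%.
    print(f"national reporting rate: {reported_cases / expected_cases:.0%}")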
The energy content of wet corn distillers grains for lactating dairy cows.
Birkelo, C P; Brouk, M J; Schingoethe, D J
2004-06-01
Forty-five energy balances were completed with 12 multiparous, lactating Holstein cows in a study designed to determine the energy content of wet corn distillers grains. Treatments were applied in a repeated switchback design and consisted of total mixed diets containing 31.4% corn silage, 18.4% alfalfa hay, and either 30.7% rolled corn and 16.7% soybean meal or 17.0% rolled corn and 31.2% wet corn distillers grains (dry matter basis). Replacement of corn and soybean meal with wet corn distillers grains reduced dry matter intake 10.9% but did not affect milk production. Neither digestible nor metabolizable energy was affected by diet composition. Heat and milk energy output did not differ by diet, but body energy retained was 2.8 Mcal/d less in cows fed the wet corn distillers grains diet. Multiple regression estimates of the maintenance metabolizable energy requirement and the partial efficiencies of metabolizable energy use for lactation and body energy deposition did not differ by diet. Pooled estimates were 136.2 kcal of metabolizable energy/kg of body weight^0.75 per day, 0.66, and 0.85, respectively. Calculated by difference, wet corn distillers grains was estimated to contain 4.09, 3.36, and 2.27 Mcal/kg of dry matter as digestible, metabolizable, and lactational net energy, respectively. These energy estimates were 7 to 11% and 10 to 15% greater, respectively, than those reported for dried corn distillers grains by the 1989 and 2001 dairy NRC publications.
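The 'calculated by difference' step assumes diet energy densities are additive on a dry-matter basis. A sketch, with a hypothetical basal-mix energy value back-solved so the example reproduces the 2.27 Mcal/kg net-energy figure above:

    def ingredient_energy(e_diet, e_basal, inclusion):
        """Solve e_diet = inclusion * e_ingredient + (1 - inclusion) * e_basal
        for the test ingredient's energy density (Mcal/kg of dry matter)."""
        return (e_diet - (1.0 - inclusion) * e_basal) / inclusion

    # 31.2% wet distillers grains; the diet and basal values are hypothetical.
    print(f"{ingredient_energy(2.60, 2.75, 0.312):.2f} Mcal/kg DM")  # 2.27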
Stine, O C; Smith, K D
1990-01-01
The effects of mutation, migration, random drift, and selection on the change in frequency of the alleles associated with Huntington disease, porphyria variegata, and lipoid proteinosis have been assessed in the Afrikaner population of South Africa. Although admixture cannot be completely discounted, it was possible to exclude migration and new mutation as major sources of changes in the frequency of these alleles by limiting analyses to pedigrees descendant from founding families. Calculations which overestimated the possible effect of random drift demonstrated that drift did not account for the observed changes in gene frequencies. Therefore these changes must have been caused by natural selection, and a coefficient of selection was estimated for each trait. For the rare, dominant, deleterious allele associated with Huntington disease, the coefficient of selection was estimated to be .34, indicating that this allele has a selective disadvantage, contrary to some recent studies. For the presumed dominant and probably deleterious allele associated with porphyria variegata, the coefficient of selection lies between .07 and .02. The coefficient of selection for the rare, clinically recessive allele associated with lipoid proteinosis was estimated to be .07. Calculations based on a model system indicate that the observed decrease in allele frequency cannot be explained solely on the basis of selection against the homozygote. Thus, this may be an example of a pleiotropic gene which has a dominant effect in terms of selection even though its known clinical effect is recessive. PMID:2137963
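For the rare dominant case, the selection coefficient follows in closed form from the standard one-locus selection recursion, in which carriers of the dominant allele have fitness 1 - s. The recursion itself is an assumption (the abstract does not spell out the calculation), and the allele frequencies below are hypothetical, chosen to return s ≈ 0.34:

    def selection_coefficient_dominant(q_parent, q_offspring):
        """Solve q' = q(1 - s) / (1 - s * (1 - p^2)) for s, where q is the
        frequency of a rare dominant deleterious allele and p = 1 - q."""
        p = 1.0 - q_parent
        carriers = 1.0 - p * p      # genotype frequency of AA + Aa carriers
        return (q_parent - q_offspring) / (q_parent - q_offspring * carriers)

    print(f"s = {selection_coefficient_dominant(1.0e-3, 0.66e-3):.2f}")  # s = 0.34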
Sun, Zhichao; Mukherjee, Bhramar; Estes, Jason P; Vokonas, Pantel S; Park, Sung Kyun
2017-08-15
Joint effects of genetic and environmental factors have been increasingly recognized in the development of many complex human diseases. Despite the popularity of case-control and case-only designs, longitudinal cohort studies that can capture time-varying outcome and exposure information have long been recommended for gene-environment (G × E) interactions. To date, literature on sampling designs for longitudinal studies of G × E interaction is quite limited. We therefore consider designs that can prioritize a subsample of the existing cohort for retrospective genotyping on the basis of currently available outcome, exposure, and covariate data. In this work, we propose stratified sampling based on summaries of individual exposures and outcome trajectories and develop a full conditional likelihood approach for estimation that adjusts for the biased sample. We compare the performance of our proposed design and analysis with combinations of different sampling designs and estimation approaches via simulation. We observe that the full conditional likelihood provides improved estimates for the G × E interaction and joint exposure effects over uncorrected complete-case analysis, and the exposure enriched outcome trajectory dependent design outperforms other designs in terms of estimation efficiency and power for detection of the G × E interaction. We also illustrate our design and analysis using data from the Normative Aging Study, an ongoing longitudinal cohort study initiated by the Veterans Administration in 1963. Copyright © 2017 John Wiley & Sons, Ltd.
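A sketch of the sampling idea only (the full conditional likelihood correction is not reproduced here): summarize each cohort member's repeated outcomes by a least-squares slope, cross-classify slopes and exposures, and prioritize the joint extremes for genotyping. All data below are simulated:

    import numpy as np

    rng = np.random.default_rng(0)
    n, t = 1000, 5
    times = np.arange(t)
    exposure = rng.normal(size=n)
    outcomes = rng.normal(size=(n, t)) + 0.2 * exposure[:, None] * times

    # Per-subject outcome slope via least squares on centered time.
    tc = times - times.mean()
    slopes = (outcomes - outcomes.mean(axis=1, keepdims=True)) @ tc / (tc @ tc)

    # Exposure-enriched, trajectory-dependent selection: take the joint
    # extremes (outer quintiles) of slope and exposure.
    lo_s, hi_s = np.quantile(slopes, [0.2, 0.8])
    lo_e, hi_e = np.quantile(exposure, [0.2, 0.8])
    extreme = (((slopes < lo_s) | (slopes > hi_s))
               & ((exposure < lo_e) | (exposure > hi_e)))
    print(f"prioritized for retrospective genotyping: {extreme.sum()} of {n}")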
Contractor Accounting, Reporting and Estimating (CARE).
Contractor Accounting, Reporting and Estimating (CARE) provides checklists that may be used as guides in evaluating the accounting system, financial reporting, and cost estimating capabilities of the contractor. Experience gained from the Management Review Technique was used as a basis for the checklists. (Author)
An estimation of carrying capacity for sea otters along the California coast
Laidre, K.L.; Jameson, R.J.; DeMaster, D.P.
2001-01-01
Eggs of wild birds collected for the purpose of measuring concentrations of pesticides or other pollutants vary from nearly fresh to nearly dry so that objective comparisons cannot be made on the basis of weight of the contents at the time of collection. Residue concentrations in the nearly dry eggs can be greatly exaggerated by this artifact. Valid interpretation of residue data depends upon compensation for these losses. A method is presented for making adjustments on the basis of volume of the egg, and formulas are derived for estimating the volume of eggs of eagles, ospreys, and pelicans from egg measurements. The possibility of adjustments on the basis of percentage of moisture, solids, or fat in fresh eggs is discussed also.
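A sketch of the volume-based adjustment. The taxon-specific formulas derived in the paper are not reproduced here; Hoyt's generic coefficient (V ≈ 0.51·L·B²) stands in for them, and all measurements are hypothetical:

    def egg_volume_cm3(length_cm, breadth_cm, kv=0.51):
        """V ~= Kv * L * B^2 for an egg of length L and maximum breadth B."""
        return kv * length_cm * breadth_cm ** 2

    def adjusted_concentration(residue_ug, length_cm, breadth_cm, density=1.0):
        """Residue per gram of estimated fresh contents (density in g/cm^3)."""
        return residue_ug / (egg_volume_cm3(length_cm, breadth_cm) * density)

    # Hypothetical 7.0 x 5.0 cm egg containing 25 ug of residue:
    print(f"{adjusted_concentration(25.0, 7.0, 5.0):.2f} ug/g, fresh-egg basis")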
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, Renliang; Dogandžić, Aleksandar
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
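The density-map update above is a proximal-gradient step for a sparsity-penalized objective. A generic ISTA iteration for min_x 0.5·||Ax - b||² + λ·||x||₁ illustrates that ingredient in isolation (here in the identity basis rather than the paper's wavelet domain, and without the spectrum/active-set step or Nesterov acceleration):

    import numpy as np

    def soft_threshold(v, tau):
        """Proximal operator of tau * ||.||_1."""
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def ista(A, b, lam, n_iter=200):
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - step * grad, step * lam)
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(80, 200))
    x_true = np.zeros(200)
    x_true[rng.choice(200, 8, replace=False)] = 1.0
    x_hat = ista(A, A @ x_true, lam=0.1)
    print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))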
Model construction of “earning money by taking photos”
NASA Astrophysics Data System (ADS)
Yang, Jingmei
2018-03-01
In the information era, with networks increasingly developed, “earning money by taking photos” is a self-service work model built on the mobile Internet. A user downloads the app, registers as a member, accepts a photography task through the app, and earns the task's reward upon completion. The article uses task data and membership data from a completed project, including each member's location and reputation value. On the basis of reasonable assumptions, the data were processed with MATLAB, SPSS, and Excel. The article mainly studies the functional relationship between task completion, task position (GPS latitude and longitude), and task price; analyzes the project's task-pricing rules and the reasons why tasks went uncompleted; and applies multivariate regression and the GeoQ software to the data, with charts used to present the complex data clearly. A simulation of realistic conditions is also applied to analyze why tasks were not completed. Finally, compared with the previous program, a new task-pricing scheme is designed for the project: confidence levels obtained with SPSS are used to estimate a reasonable range for task pricing, and a new pricing scheme is predicted and designed within that range.
Reduced Order Methods for Prediction of Thermal-Acoustic Fatigue
NASA Technical Reports Server (NTRS)
Przekop, A.; Rizzi, S. A.
2004-01-01
The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing random nonlinear vibrations in the presence of thermal loading. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.
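The last two steps of this pipeline, rainflow-counted stress ranges fed into linear (Palmgren-Miner) damage accumulation, reduce to a few lines. The Basquin S-N constants and the counted ranges below are hypothetical placeholders:

    def cycles_to_failure(stress_range, A=1.0e12, m=3.0):
        """Basquin-type S-N curve: N = A * S^(-m) (S in MPa)."""
        return A * stress_range ** (-m)

    def miner_damage(counted_ranges):
        """Palmgren-Miner sum over (stress_range, n_cycles) pairs."""
        return sum(n / cycles_to_failure(s) for s, n in counted_ranges)

    # Hypothetical rainflow output for one loading block: (range MPa, cycles).
    ranges = [(80.0, 5_000), (120.0, 800), (200.0, 40)]
    d = miner_damage(ranges)
    print(f"damage per block: {d:.3e}; blocks to failure: {1.0 / d:.0f}")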
A Nonlinear Reduced Order Method for Prediction of Acoustic Fatigue
NASA Technical Reports Server (NTRS)
Przekop, Adam; Rizzi, Stephen A.
2006-01-01
The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing geometrically nonlinear random vibrations. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.
ERIC Educational Resources Information Center
Mueller, Christoph Emanuel; Gaus, Hansjoerg
2015-01-01
In this article, we test an alternative approach to creating a counterfactual basis for estimating individual and average treatment effects. Instead of using control/comparison groups or before-measures, the so-called Counterfactual as Self-Estimated by Program Participants (CSEPP) relies on program participants' self-estimations of their own…
The Nexus between the Above-Average Effect and Cooperative Learning in the Classroom
ERIC Educational Resources Information Center
Breneiser, Jennifer E.; Monetti, David M.; Adams, Katharine S.
2012-01-01
The present study examines the above-average effect (Chambers & Windschitl, 2004; Moore & Small, 2007) in assessments of task performance. Participants completed self-estimates of performance and group estimates of performance, before and after completing a task. Participants completed a task individually and in groups. Groups were…
THE SUCCESSIVE LINEAR ESTIMATOR: A REVISIT. (R827114)
This paper examines the theoretical basis of the successive linear estimator (SLE) that has been developed for the inverse problem in subsurface hydrology. We show that the SLE algorithm is a non-linear iterative estimator to the inverse problem. The weights used in the SLE al...
48 CFR 252.215-7009 - Proposal adequacy checklist.
Code of Federal Regulations, 2013 CFR
2013-10-01
... estimating relationships (labor hours or material) proposed on other than a discrete basis? 10. FAR 15.408..., applicable CLIN, Work Breakdown Structure, rationale for estimate, applicable history, and time-phasing)? 25...
Wu, Haipeng; Cao, Wanlin; Qiao, Qiyun; Dong, Hongying
2016-01-01
A method is presented to predict the complete stress-strain curves of concrete subjected to triaxial stresses caused by axial load and lateral force. The stress can be induced by the confinement action inside a special-shaped steel tube having multiple cavities. The existing reinforced confined concrete formulas have been improved to determine the confinement action. The influence of cross-sectional shape, cavity construction, stiffening ribs, and reinforcement in cavities has been considered in the model. The parameters of the model are determined on the basis of experimental results of an axial compression test for two different kinds of special-shaped concrete filled steel tube (CFT) columns with multiple cavities. The complete load-strain curves of the special-shaped CFT columns are estimated. The predicted concrete strength and the post-peak behavior are found to show good agreement within the accepted limits, compared with the experimental results. In addition, because the parameters of the proposed model are taken from two kinds of totally different CFT columns, it can be concluded that this model is also applicable to concrete confined by other special-shaped steel tubes. PMID:28787886
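The model above improves on existing confined-concrete relations that the abstract does not reproduce. As a baseline illustration only (not the authors' improved formulas), Mander's classic (1988) expressions relate an effective lateral confining pressure f_l to the confined peak stress and strain:

    from math import sqrt

    def mander_confined_strength(fco, fl):
        """Mander et al. (1988): confined peak stress f'cc (units of f'co)."""
        return fco * (-1.254 + 2.254 * sqrt(1.0 + 7.94 * fl / fco) - 2.0 * fl / fco)

    def mander_peak_strain(fcc, fco, eps_co=0.002):
        """Strain at the confined peak stress."""
        return eps_co * (1.0 + 5.0 * (fcc / fco - 1.0))

    fco, fl = 40.0, 4.0   # MPa, hypothetical unconfined strength and confinement
    fcc = mander_confined_strength(fco, fl)
    print(f"f'cc = {fcc:.1f} MPa, eps_cc = {mander_peak_strain(fcc, fco):.4f}")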
48 CFR 1852.215-85 - Proposal adequacy checklist.
Code of Federal Regulations, 2014 CFR
2014-10-01
... estimating relationships (labor hours or material) proposed on other than a discrete basis? 10. FAR 15.408... Breakdown Structure, rationale for estimate, applicable history, and time-phasing)? 23. FAR subpart 22.10 If...
Making sense of cancer risk calculators on the web.
Levy, Andrea Gurmankin; Sonnad, Seema S; Kurichi, Jibby E; Sherman, Melani; Armstrong, Katrina
2008-03-01
Cancer risk calculators on the internet have the potential to provide users with valuable information about their individual cancer risk. However, the lack of oversight of these sites raises concerns about low-quality and inconsistent information. These concerns led us to evaluate internet cancer risk calculators. After a systematic search to find all cancer risk calculators on the internet, we reviewed the content of each site for information that users should seek to evaluate the quality of a website. We then examined the consistency of the breast cancer risk calculators by having 27 women complete 10 of the breast cancer risk calculators for themselves. We also completed the breast cancer risk calculators for a hypothetical high- and low-risk woman, and compared the output to Surveillance, Epidemiology, and End Results (SEER) estimates for the average same-age and same-race woman. Nineteen sites were found, 13 of which calculate breast cancer risk. Most sites do not provide the information users need to evaluate the legitimacy of a website. The breast cancer calculator sites vary in the risk factors they assess to calculate breast cancer risk, in how they operationalize each risk factor, and in the risk estimate they provide for the same individual. Internet cancer risk calculators have the potential to provide a public health benefit by educating individuals about their risks and potentially encouraging preventive health behaviors. However, our evaluation of internet calculators revealed several problems that call into question the accuracy of the information that they provide. This may lead the users of these sites to make inappropriate medical decisions on the basis of misinformation.
Martin, Lisa J; Lee, Seung-Yeon; Couch, Sarah C; Morrison, John; Woo, Jessica G
2011-10-01
Obesity has a strong genetic basis, but the identification of genetic variants has not resulted in improved clinical care. However, phenotypes that influence weight, such as diet, may have shared underpinnings with obesity. Interestingly, diet also has a genetic basis. Thus, we hypothesized that the genetic underpinnings of diet may partially overlap with the genetics of obesity. Our objective was to determine whether dietary intake and BMI share heritable components in adulthood. We used a cross-sectional cohort of parents and adult offspring (n = 1410) from the Princeton Follow-up Study. Participants completed Block food-frequency questionnaires 15-27 y after sharing a household. Heritability of dietary intakes was estimated by using variance components analysis. Bivariate genetic analyses were used to estimate the shared effects between BMI and heritable dietary intakes. Fruit, vegetable, and protein consumption exhibited moderate heritability [(mean ± SE) 0.26 ± 0.06, 0.32 ± 0.06, and 0.21 ± 0.06, respectively; P < 0.001], but heritabilities of other dietary intakes were modest (h² < 0.2). Only fruit and vegetable consumption exhibited genetic correlations with BMI (ρg = -0.28 ± 0.13 and -0.30 ± 0.13, respectively; P < 0.05). Phenotypic correlations with BMI were not significant. We showed that fruit, vegetable, and protein intakes are moderately heritable and that fruit and vegetable consumption shares underlying genetic effects with BMI in adulthood, which suggests that individuals genetically predisposed to low fruit and vegetable consumption may be predisposed to higher BMI. Thus, obese individuals who have low fruit and vegetable consumption may require targeted interventions that go beyond low-calorie, plant-based programs for weight management.
Code of Federal Regulations, 2014 CFR
2014-07-01
... otherwise, to establish the validity and competence of his estimates. He must familiarize himself with basic..., 1971, dictates that written statement of, and summary of the basis for, the amount of the estimate of... Appraisers which sets out market value “as the highest price estimated in terms of money which a property...
Code of Federal Regulations, 2010 CFR
2010-07-01
... otherwise, to establish the validity and competence of his estimates. He must familiarize himself with basic..., 1971, dictates that written statement of, and summary of the basis for, the amount of the estimate of... Appraisers which sets out market value “as the highest price estimated in terms of money which a property...
Code of Federal Regulations, 2013 CFR
2013-07-01
... otherwise, to establish the validity and competence of his estimates. He must familiarize himself with basic..., 1971, dictates that written statement of, and summary of the basis for, the amount of the estimate of... Appraisers which sets out market value “as the highest price estimated in terms of money which a property...
Code of Federal Regulations, 2012 CFR
2012-07-01
... otherwise, to establish the validity and competence of his estimates. He must familiarize himself with basic..., 1971, dictates that written statement of, and summary of the basis for, the amount of the estimate of... Appraisers which sets out market value “as the highest price estimated in terms of money which a property...
Code of Federal Regulations, 2011 CFR
2011-07-01
... otherwise, to establish the validity and competence of his estimates. He must familiarize himself with basic..., 1971, dictates that written statement of, and summary of the basis for, the amount of the estimate of... Appraisers which sets out market value “as the highest price estimated in terms of money which a property...
Elderly poverty and Supplemental Security Income.
Nicholas, Joyce; Wiseman, Michael
2009-01-01
In the United States, poverty is generally assessed on the basis of income, as reported in the Current Population Survey's (CPS's) Annual Social and Economic Supplement (ASEC), using an official poverty standard established in the 1960s. The prevalence of receipt of means-tested transfers is underreported in the CPS, with uncertain consequences for the measurement of poverty rates both by the official standard and by alternative "relative" measures linked to the contemporaneous income distribution. The article reports results estimating the prevalence of poverty in 2002. We complete this effort by using a version of the 2003 CPS/ASEC for which a substantial majority (76 percent) of respondents have individual records matching administrative data from the Social Security Administration on earnings and receipt of income from the Old-Age, Survivors, and Disability Insurance and Supplemental Security Income (SSI) programs. Adjustment of the CPS income data with administrative data substantially improves coverage of SSI receipt. The consequence for general poverty is sensitive to the merge procedures employed, but under both sets of merge procedures considered, the estimated poverty rate among all elderly persons and among elderly SSI recipients is substantially less than rates estimated using the unadjusted CPS. The effect of the administrative adjustment is less significant for perception of relative poverty than for absolute poverty. We emphasize the effect of these adjustments on perception of poverty among the elderly in general and elderly SSI recipients in particular.
Tao, Yun; Chen, Sining; Hartl, Daniel L; Laurie, Cathy C
2003-01-01
The genetic basis of hybrid incompatibility in crosses between Drosophila mauritiana and D. simulans was investigated to gain insight into the evolutionary mechanisms of speciation. In this study, segments of the D. mauritiana third chromosome were introgressed into a D. simulans genetic background and tested as homozygotes for viability, male fertility, and female fertility. The entire third chromosome was covered with partially overlapping segments. Many segments were male sterile, while none were female sterile or lethal, confirming previous reports of the rapid evolution of hybrid male sterility (HMS). A statistical model was developed to quantify the HMS accumulation. In comparison with previous work on the X chromosome, we estimate that the X has approximately 2.5 times the density of HMS factors as the autosomes. We also estimate that the whole genome contains approximately 15 HMS "equivalents"-i.e., 15 times the minimum number of incompatibility factors necessary to cause complete sterility. Although some caveats for the quantitative estimate of a 2.5-fold density difference are described, this study supports the notion that the X chromosome plays a special role in the evolution of reproductive isolation. Possible mechanisms of a "large X" effect include selective fixation of new mutations that are recessive or partially recessive and the evolution of sex-ratio distortion systems. PMID:12930747
Analysis of decommissioning costs for the AFRRI TRIGA reactor facility. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsbacka, M.; Moore, M.
1989-12-01
This report provides a cost analysis for decommissioning the Armed Forces Radiobiology Research Institute (AFRRI) TRIGA reactor facility. AFRRI is not suggesting that the AFRRI TRIGA reactor facility be decommissioned. This report was prepared in compliance with paragraph 50.33 of Title 10, Code of Federal Regulations, which requires that funding for the decommissioning of reactor facilities be available when licensed activities cease. The planned method of decommissioning is complete decontamination (DECON) of the AFRRI TRIGA reactor site to allow for restoration of the site to full public access. The cost of DECON in 1990 dollars is estimated to be $3,200,000. The anticipated ancillary costs of facility site demobilization and spent fuel shipment will be an additional $600,000. Thus, the total cost of terminating reactor operations at AFRRI will be about $3,800,000. The primary basis for developing this cost estimate was a study of the decommissioning costs of a similar reactor facility performed by Battelle Pacific Northwest Laboratory, as provided in U.S. Nuclear Regulatory Commission publication NUREG/CR-1756. The data in this study were adapted to reflect the decommissioning requirements of the AFRRI TRIGA reactor facility.
Guay, Joel R.; Harmon, Jerry G.; McPherson, Kelly R.
1998-01-01
The damage caused by the January 1997 floods along the Cosumnes River and Deer Creek generated new interest in planning and managing land use in the study area. The 1997 floodflow peak, the highest on record and considered to be a 150-year flood, caused levee failures at 24 locations. In order to provide a technical basis for floodplain management practices, the U.S. Geological Survey, in cooperation with the Federal Emergency Management Agency, completed a flood-inundation map of the Cosumnes River and Deer Creek drainage from the Dillard Road bridge to State Highway 99. Flood frequency was estimated from streamflow records for the Cosumnes River at Michigan Bar and Deer Creek near Sloughhouse. Cross sections along a study reach, where the two rivers generally flow parallel to one another, were used with a step-backwater model (WSPRO) to estimate the water-surface profile for floods of selected recurrence intervals. A flood-inundation map was developed to show flood boundaries for the 100-year flood. Water-surface profiles were developed for the 5-, 10-, 50-, 100-, and 500-year floods.
A 'periodic table' for protein structures.
Taylor, William R
2002-04-11
Current structural genomics programs aim systematically to determine the structures of all proteins coded in both human and other genomes, providing a complete picture of the number and variety of protein structures that exist. In the past, estimates have been made on the basis of the incomplete sample of structures currently known. These estimates have varied greatly (between 1,000 and 10,000; see for example refs 1 and 2), partly because of limited sample size but also owing to the difficulties of distinguishing one structure from another. This distinction is usually topological, based on the fold of the protein; however, in strict topological terms (neglecting to consider intra-chain cross-links), protein chains are open strings and hence are all identical. To avoid this trivial result, topologies are determined by considering secondary links in the form of intra-chain hydrogen bonds (secondary structure) and tertiary links formed by the packing of secondary structures. However, small additions to or loss of structure can make large changes to these perceived topologies and such subjective solutions are neither robust nor amenable to automation. Here I formalize both secondary and tertiary links to allow the rigorous and automatic definition of protein topology.
Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.
2013-01-01
Isothermal Titration Calorimetry, ITC, is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g. Keq (or ΔG), ΔH, ΔS, and n) for a ligand binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combination of equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example one binding site, two binding sites, sequential sites, and n-independent binding sites. More complex models for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding need to be developed on a case by case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple binding site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g. up to nine parameters in the three binding site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
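For the simplest (1:1) model the paper's regression reduces to fitting a single-site binding isotherm. A minimal sketch on synthetic data, where scipy's curve_fit stands in for the authors' MATLAB program and displaced-volume corrections are ignored:

    import numpy as np
    from scipy.optimize import curve_fit

    V0 = 1.4e-3    # calorimeter cell volume, L (hypothetical)
    M_T = 20e-6    # total macromolecule concentration in the cell, M (hypothetical)

    def bound(K, Mt, Lt):
        """[ML] from K = [ML]/([M][L]) with mass balance on M and L."""
        b = Mt + Lt + 1.0 / K
        return 0.5 * (b - np.sqrt(b * b - 4.0 * Mt * Lt))

    def heats(Lt, K, dH, n):
        """Heat (cal) evolved per injection, for cumulative titrant totals Lt."""
        ml = bound(K, n * M_T, Lt)
        return dH * V0 * np.diff(ml, prepend=0.0)

    Lt = np.linspace(2e-6, 60e-6, 25)         # total ligand after each injection, M
    q = heats(Lt, 1.0e6, -8000.0, 1.0)        # synthetic 'observed' heats
    q += np.random.default_rng(2).normal(0.0, 2e-8, q.size)
    popt, _ = curve_fit(heats, Lt, q, p0=(5e5, -5000.0, 0.9))
    print("K = %.2e M^-1, dH = %.0f cal/mol, n = %.2f" % tuple(popt))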
ICESat-2: An overview of science objectives, status, data products and expected performance
NASA Astrophysics Data System (ADS)
Neumann, T.; Markus, T.; Anthony, M.
2016-12-01
The mission objectives of NASA's Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) are to quantify polar ice sheet contributions to sea level change, quantify regional signatures of ice sheet change to assess driving mechanisms, estimate sea ice thickness, and enable measurements of canopy height as a basis for estimating large-scale biomass. With a scheduled launch date in late 2017, most of the flight hardware has been assembled, integrated, and tested, and algorithm implementation for the standard geophysical products is well underway. The spacecraft, built by Orbital ATK, is complete and undergoing testing. ICESat-2's single instrument, the Advanced Topographic Laser Altimeter System (ATLAS), was built by NASA Goddard Space Flight Center and, by the time of the Fall Meeting, will be undergoing integration and testing with the spacecraft to become the ICESat-2 observatory. In parallel, high-level geophysical data products and associated algorithms are in development using airborne laser altimeter data. This presentation will give an overview of the design of ICESat-2 and of its hardware and software status, as well as examples of ICESat-2's coverage and what the data will look like.
Park, Eunji; Hwang, Dae-Sik; Lee, Jae-Seong; Song, Jun-Im; Seo, Tae-Kun; Won, Yong-Jin
2012-01-01
The phylum Cnidaria is comprised of remarkably diverse and ecologically significant taxa, such as the reef-forming corals, and occupies a basal position in metazoan evolution. The origin of this phylum and the most recent common ancestors (MRCAs) of its modern classes remain mostly unknown, although scattered fossil evidence provides some insights on this topic. Here, we investigate the molecular divergence times of the major taxonomic groups of Cnidaria (27 Hexacorallia, 16 Octocorallia, and 5 Medusozoa) on the basis of mitochondrial DNA sequences of 13 protein-coding genes. For this analysis, the complete mitochondrial genomes of seven octocoral and two scyphozoan species were newly sequenced and combined with all available mitogenomic data from GenBank. Five reliable fossil dates were used to calibrate the Bayesian estimates of divergence times. The molecular evidence suggests that cnidarians originated 741 million years ago (Ma) (95% credible region of 686-819), and the major taxa diversified prior to the Cambrian (543 Ma). The Octocorallia and Scleractinia may have originated from radiations of survivors of the Permian-Triassic mass extinction, which matches their fossil record well. Copyright © 2011 Elsevier Inc. All rights reserved.
2017-09-16
Detailed assessments of mortality patterns, particularly age-specific mortality, represent a crucial input that enables health systems to target interventions to specific populations. Understanding how all-cause mortality has changed with respect to development status can identify exemplars for best practice. To accomplish this, the Global Burden of Diseases, Injuries, and Risk Factors Study 2016 (GBD 2016) estimated age-specific and sex-specific all-cause mortality between 1970 and 2016 for 195 countries and territories and at the subnational level for the five countries with a population greater than 200 million in 2016. We have evaluated how well civil registration systems captured deaths using a set of demographic methods called death distribution methods for adults and from consideration of survey and census data for children younger than 5 years. We generated an overall assessment of completeness of registration of deaths by dividing registered deaths in each location-year by our estimate of all-age deaths generated from our overall estimation process. For 163 locations, including subnational units in countries with a population greater than 200 million with complete vital registration (VR) systems, our estimates were largely driven by the observed data, with corrections for small fluctuations in numbers and estimation for recent years where there were lags in data reporting (lags were variable by location, generally between 1 year and 6 years). For other locations, we took advantage of different data sources available to measure under-5 mortality rates (U5MR) using complete birth histories, summary birth histories, and incomplete VR with adjustments; we measured adult mortality rate (the probability of death in individuals aged 15-60 years) using adjusted incomplete VR, sibling histories, and household death recall. We used the U5MR and adult mortality rate, together with crude death rate due to HIV in the GBD model life table system, to estimate age-specific and sex-specific death rates for each location-year. Using various international databases, we identified fatal discontinuities, which we defined as increases in the death rate of more than one death per million, resulting from conflict and terrorism, natural disasters, major transport or technological accidents, and a subset of epidemic infectious diseases; these were added to estimates in the relevant years. In 47 countries with an identified peak adult prevalence for HIV/AIDS of more than 0·5% and where VR systems were less than 65% complete, we informed our estimates of age-sex-specific mortality using the Estimation and Projection Package (EPP)-Spectrum model fitted to national HIV/AIDS prevalence surveys and antenatal clinic serosurveillance systems. We estimated stillbirths, early neonatal, late neonatal, and childhood mortality using both survey and VR data in spatiotemporal Gaussian process regression models. We estimated abridged life tables for all location-years using age-specific death rates. We grouped locations into development quintiles based on the Socio-demographic Index (SDI) and analysed mortality trends by quintile. Using spline regression, we estimated the expected mortality rate for each age-sex group as a function of SDI. We identified countries with higher life expectancy than expected by comparing observed life expectancy to anticipated life expectancy on the basis of development status alone. 
Completeness in the registration of deaths increased from 28% in 1970 to a peak of 45% in 2013; completeness was lower after 2013 because of lags in reporting. Total deaths in children younger than 5 years decreased from 1970 to 2016, and slower decreases occurred at ages 5-24 years. By contrast, numbers of adult deaths increased in each 5-year age bracket above the age of 25 years. The distribution of annualised rates of change in age-specific mortality rate differed over the period 2000 to 2016 compared with earlier decades: increasing annualised rates of change were less frequent, although rising annualised rates of change still occurred in some locations, particularly for adolescent and younger adult age groups. Rates of stillbirths and under-5 mortality both decreased globally from 1970. Evidence for global convergence of death rates was mixed; although the absolute difference between age-standardised death rates narrowed between countries at the lowest and highest levels of SDI, the ratio of these death rates, a measure of relative inequality, increased slightly. There was a strong shift between 1970 and 2016 toward higher life expectancy, most noticeably at higher levels of SDI. Among countries with populations greater than 1 million in 2016, life expectancy at birth was highest for women in Japan, at 86·9 years (95% UI 86·7-87·2), and for men in Singapore, at 81·3 years (78·8-83·7) in 2016. Male life expectancy was generally lower than female life expectancy between 1970 and 2016, and the gap between male and female life expectancy increased with progression to higher levels of SDI. Some countries with exceptional health performance in 1990, in terms of the difference between observed and expected life expectancy at birth, had slower progress on the same measure in 2016. Globally, mortality rates have decreased across all age groups over the past five decades, with the largest improvements occurring among children younger than 5 years. However, at the national level, considerable heterogeneity remains in terms of both level and rate of changes in age-specific mortality; increases in mortality for certain age groups occurred in some locations. We found evidence that the absolute gap between countries in age-specific death rates has declined, although the relative gap for some age-sex groups increased. Countries that now lead in terms of having higher observed life expectancy than that expected on the basis of development alone, or locations that have either increased this advantage or rapidly decreased the deficit from expected levels, could provide insight into the means to accelerate progress in nations where progress has stalled. Funding: Bill & Melinda Gates Foundation, and the National Institute on Aging and the National Institute of Mental Health of the National Institutes of Health. Copyright © 2017 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license.
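The abridged-life-table step above follows standard demographic identities. A sketch that assumes a_x = n/2 for every interval (the GBD uses more refined a_x values) and uses hypothetical death rates:

    def life_expectancy(age_starts, mx, radix=100_000):
        """Life expectancy at birth from age-specific death rates m_x."""
        lx = float(radix)       # survivors to the start of each interval
        person_years = 0.0
        for i, (age, m) in enumerate(zip(age_starts, mx)):
            if i + 1 < len(age_starts):
                n = age_starts[i + 1] - age
                qx = n * m / (1.0 + 0.5 * n * m)   # P(die in interval | alive at start)
                deaths = lx * qx
                person_years += n * (lx - 0.5 * deaths)
                lx -= deaths
            else:
                person_years += lx / m             # open-ended interval: L = l/m
        return person_years / radix

    ages = [0, 1, 5, 15, 30, 45, 60, 75]                            # abridged groups
    mx = [0.030, 0.002, 0.001, 0.002, 0.004, 0.010, 0.030, 0.100]   # hypothetical
    print(f"e0 = {life_expectancy(ages, mx):.1f} years")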
NASA Astrophysics Data System (ADS)
Maltz, Jonathan S.
2000-11-01
We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×10^5 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
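A sketch of the reduced-basis construction described above: sample the exponential-mode family over the anticipated range of rate constants, take the leading left singular vectors, convolve each with the input function, and fit coefficients by least squares (equivalent to the Moore-Penrose pseudoinverse). The input function, rate range, and tissue curve below are synthetic, not the clinical settings:

    import numpy as np

    t = np.linspace(0.0, 30.0, 301)               # minutes
    dt = t[1] - t[0]
    ks = np.logspace(-2, 0, 100)                  # anticipated rate constants, 1/min
    modes = np.exp(-np.outer(t, ks))              # columns: candidate exponential modes

    # Orthogonal reduced basis from the SVD of the mode family.
    U, s, _ = np.linalg.svd(modes, full_matrices=False)
    r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
    basis = U[:, :r]

    # Convolve each basis function with the (here synthetic) input function.
    input_fn = t * np.exp(-t / 2.0)
    conv_basis = np.stack([np.convolve(input_fn, b)[: t.size] * dt
                           for b in basis.T], axis=1)

    # Fit coefficients of a synthetic tissue curve in the reduced basis.
    tac = np.convolve(input_fn, np.exp(-0.2 * t))[: t.size] * dt
    coeffs, *_ = np.linalg.lstsq(conv_basis, tac, rcond=None)
    print(f"rank-{r} basis, max residual {np.max(np.abs(conv_basis @ coeffs - tac)):.2e}")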
Scaled MP3 non-covalent interaction energies agree closely with accurate CCSD(T) benchmark data.
Pitonák, Michal; Neogrády, Pavel; Cerný, Jirí; Grimme, Stefan; Hobza, Pavel
2009-01-12
Scaled MP3 interaction energies calculated as a sum of MP2/CBS (complete basis set limit) interaction energies and scaled third-order energy contributions obtained in small or medium size basis sets agree very closely with the estimated CCSD(T)/CBS interaction energies for the 22 H-bonded, dispersion-controlled and mixed non-covalent complexes from the S22 data set. Performance of this so-called MP2.5 (third-order scaling factor of 0.5) method has also been tested for 33 nucleic acid base pairs and two stacked conformers of porphine dimer. In all the test cases, performance of the MP2.5 method was shown to be superior to the scaled spin-component MP2-based methods, e.g. SCS-MP2, SCSN-MP2 and SCS(MI)-MP2. In particular, a very balanced treatment of hydrogen-bonded versus stacked complexes is achieved with MP2.5. The main advantage of the approach is that it employs only a single empirical parameter and is bracketed by two rigorously defined, asymptotically correct ab initio methods, MP2 and MP3. The method is proposed as an accurate but computationally feasible alternative to CCSD(T) for the computation of the properties of various kinds of non-covalently bound systems.
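As a concrete illustration of the composite scheme described above, the MP2.5 interaction energy is the MP2/CBS value plus a scaled small-basis third-order contribution. A minimal sketch, with hypothetical component energies:

```python
# Minimal sketch of the MP2.5 composite energy, assuming the component
# interaction energies (kcal/mol, hypothetical numbers) are already computed.
def mp2_5(e_mp2_cbs, e_mp3_small, e_mp2_small, scale=0.5):
    """E(MP2.5) = E(MP2/CBS) + scale * [E(MP3) - E(MP2)] in a small basis."""
    return e_mp2_cbs + scale * (e_mp3_small - e_mp2_small)

# e.g. a stacked complex where MP2 overbinds and MP3 underbinds:
print(mp2_5(-3.10, -2.20, -3.00))  # -> -2.70 kcal/mol
```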
40 CFR 228.15 - Dumping sites designated on a final basis.
Code of Federal Regulations, 2011 CFR
2011-07-01
... designation to establish different or additional standards. The EPA will act on any such petition within 120...) Disposal not subject to the restrictions in paragraphs (b)(4)(vi)(C) through (G) or (b)(4)(vi)(I) of this... prior to the completion of the DMMP and completed within two years after the completion of the DMMP. (J...
National Assessment of Geologic Carbon Dioxide Storage Resources -- Trends and Interpretations
NASA Astrophysics Data System (ADS)
Buursink, M. L.; Blondes, M. S.; Brennan, S.; Drake, R., II; Merrill, M. D.; Roberts-Ashby, T. L.; Slucher, E. R.; Warwick, P.
2013-12-01
In 2012, the U.S. Geological Survey (USGS) completed an assessment of the technically accessible storage resource (TASR) for carbon dioxide (CO2) in geologic formations underlying the onshore and State waters area of the United States. The formations assessed are at least 3,000 feet (914 meters) below the ground surface. The TASR is an estimate of the CO2 storage resource that may be available for CO2 injection and storage, based on present-day geologic and hydrologic knowledge of the subsurface and current engineering practices. Individual storage assessment units (SAUs) for 36 basins or study areas were defined on the basis of geologic and hydrologic characteristics outlined in the USGS assessment methodology. The mean national TASR is approximately 3,000 metric gigatons. To augment the release of the assessment, this study reviews the input estimates and output results of the resource calculation, and includes a collection of cross-plots and maps that demonstrate our trends and interpretations. Alongside the assessment, the input estimates were examined for consistency between SAUs and cross-plotted to verify expected trends, such as decreasing storage formation porosity with increasing SAU depth, and to show a positive correlation between storage formation porosity and permeability estimates. Following the assessment, the output results were examined for correlation with selected input estimates. For example, there exists a positive correlation between CO2 density and the TASR, and between storage formation porosity and the TASR, as expected. These correlations, in part, serve to verify our estimates for the geologic variables. The USGS assessment concluded that the Coastal Plains Region of the eastern and southeastern United States contains the largest storage resource. Within the Coastal Plains Region, the storage resources of the U.S. Gulf Coast study area represent 59 percent of the national CO2 storage capacity. As part of this follow-up study, additional maps were generated to show the geographic distribution of the input estimates and the output results across the U.S. For example, the distribution of SAUs with fresh, saline, or mixed formation water quality is shown. Also mapped is the variation in CO2 density as related to basin location and to related properties such as subsurface temperature and pressure. Furthermore, the variation in estimated SAU depth and resulting TASR is shown across the assessment study areas; these depend on the geologic basin size and filling history. Ultimately, multiple map displays are possible with the complete data set of input estimates and range of reported results. The findings from this study show the effectiveness of the USGS methodology and the robustness of the assessment.
System and method for optical fiber based image acquisition suitable for use in turbine engines
Baleine, Erwan; A V, Varun; Zombo, Paul J.; Varghese, Zubin
2017-05-16
A system and a method for image acquisition suitable for use in a turbine engine are disclosed. Light received from a field of view in an object plane is projected onto an image plane through an optical modulation device and is transferred through an image conduit to a sensor array. The sensor array generates a set of sampled image signals in a sensing basis based on light received from the image conduit. Finally, the sampled image signals are transformed from the sensing basis to a representation basis and a set of estimated image signals are generated therefrom. The estimated image signals are used for reconstructing an image and/or a motion-video of a region of interest within a turbine engine.
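The final transformation step reads like a compressive-sensing reconstruction; a minimal sketch under that interpretation follows (the random sensing patterns, DCT representation basis, and least-norm estimator are all assumptions for illustration, since the text does not fix them).

```python
import numpy as np
from scipy.fft import idct

# Sketch of the basis transformation step: sampled signals in a sensing
# basis are transformed to a representation basis to estimate the image.
n, m = 64, 32                           # image pixels, sampled signals
rng = np.random.default_rng(0)
Phi = rng.standard_normal((m, n))       # sensing basis (modulation patterns)
Psi = idct(np.eye(n), axis=0, norm="ortho")   # representation basis (DCT)

x_coeffs = np.zeros(n); x_coeffs[[2, 7]] = [1.0, -0.5]   # sparse image
y = Phi @ (Psi @ x_coeffs)              # sampled image signals

# Estimate representation-basis coefficients (least-norm solution shown;
# a sparsity-promoting solver would be used in practice).
est = np.linalg.pinv(Phi @ Psi) @ y
image_est = Psi @ est                   # reconstructed image signal
```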
The effect of rare alleles on estimated genomic relationships from whole genome sequence data.
Eynard, Sonia E; Windig, Jack J; Leroy, Grégoire; van Binsbergen, Rianne; Calus, Mario P L
2015-03-12
Relationships between individuals and inbreeding coefficients are commonly used for breeding decisions, but may be affected by the type of data used for their estimation. The proportion of variants with low Minor Allele Frequency (MAF) is larger in whole genome sequence (WGS) data compared to Single Nucleotide Polymorphism (SNP) chips. Therefore, WGS data capture relationships between individuals more completely and may influence breeding decisions and prioritisation for conservation of genetic diversity in livestock. This study identifies differences between relationships and inbreeding coefficients estimated using pedigree, SNP or WGS data for 118 Holstein bulls from the 1000 Bull Genomes project. To determine the impact of rare alleles on the estimates, we compared three scenarios of MAF restrictions: variants with a MAF higher than 5%, variants with a MAF higher than 1%, and variants with a MAF between 1% and 5%. We observed significant differences between estimated relationships and, to a lesser extent, inbreeding coefficients from pedigree, SNP or WGS data, and between MAF restriction scenarios. Computed correlations between pedigree and genomic relationships, within groups with similar relationships, ranged from negative to moderate for both estimated relationships and inbreeding coefficients, but correlations between estimates from SNP and WGS data were high (0.49 to 0.99). Estimated relationships from genomic information exhibited higher variation than those from pedigree. Analysis of inbreeding coefficients showed that more complete pedigree records led to higher correlations between inbreeding coefficients from pedigree and genomic data. Finally, estimates of and correlations between additive genetic (A) and genomic (G) relationship matrices were lower, and variances of the relationships were larger, when accounting for allele frequencies than when not. Using pedigree data versus genomic information, and including or excluding variants with a MAF below 5%, produced significant differences in relationship and inbreeding coefficient estimates. Estimated relationships and inbreeding coefficients are the basis for selection decisions; therefore, it can be expected that using WGS instead of SNP data will affect selection decisions. Inclusion of rare variants will give access to the variation they carry, which is of interest for the conservation of genetic diversity.
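The role of allele frequencies, and hence of rare variants, in a genomic relationship matrix can be seen in a short sketch of VanRaden's first method, a common choice in this field (the study's exact estimators are not reproduced here; genotypes below are simulated):

```python
import numpy as np

# Minimal sketch of a genomic relationship matrix (VanRaden method 1) to show
# how allele frequencies, and hence rare variants, enter the estimate.
def grm(genotypes):
    """genotypes: individuals x markers array of 0/1/2 allele counts."""
    p = genotypes.mean(axis=0) / 2.0           # estimated allele frequencies
    Z = genotypes - 2.0 * p                    # center by allele frequency
    return (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(10, 500))      # hypothetical 10 bulls, 500 SNPs
maf = np.minimum(geno.mean(axis=0) / 2, 1 - geno.mean(axis=0) / 2)
G_all = grm(geno)                              # all variants
G_common = grm(geno[:, maf >= 0.05])           # MAF >= 5% only, as in the study
```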
NASA Astrophysics Data System (ADS)
Petrov, Vladislav; Ivanov, Alexandr; Barteneva, Svetlana; Snigiryeva, Galina; Shafirkin, Alexandr
Ground-based modeling of crewmember exposure should be performed to correctly estimate the radiation hazard during flight. Such modeling was planned in a monkey experiment investigating the consequences, for a human, of exposure during an interplanetary flight. It should reflect the chronic impact of galactic cosmic rays and the acute and fractionated irradiation characteristic of solar cosmic rays and the radiation belts, respectively. Because a chronic impact is difficult to model with a charged-particle accelerator, a gamma source can be used instead. When irradiating large groups of animals over a long period, it is preferable to replace chronic irradiation with an equivalent fractionated one; in this case, the chosen characteristics of the fractionated irradiation should produce radiobiological consequences equal to those caused by the modeled chronic exposure. To develop the exposure scheme in the monkey experiment (with Macaca rhesus), a model of the acting residual dose, which takes into account repair and recovery processes in the exposed body, was used. The total dose ranged from 2.32 Gy to 3.5 Gy depending on the exposure character; the acting residual dose in all versions of exposure was 2.0 Gy for every monkey. All bioethics requirements for work with animals were observed during the experiment. The objects of interest were genomic damages in lymphocytes of the monkeys' peripheral blood. Data on the chromosome aberration frequency (CAF) during exposure and at various times after exposure, particularly directly after the completion of chronic and fractionated irradiation, were analyzed. The CAF-dose relationship (calibration curve) for acute single gamma irradiation in the range 0-1.5 Gy was defined in vitro. In addition, the rate of elimination of aberrant cells within three months after the completion of irradiation was estimated. On the basis of the obtained CAF data, we verified the applicability of cytogenetic analysis for estimating the monkeys' gamma-dose exposure in the experiment. It was found that this method permits estimation of the acting residual dose with an accuracy of about 30%.
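A hedged sketch of the cytogenetic dose-estimation step: fit a linear-quadratic calibration curve to in vitro CAF data and invert it for an observed aberration frequency. This is the standard biodosimetry recipe; the calibration numbers below are hypothetical, not the paper's.

```python
import numpy as np

# Fit CAF(D) = c + a*D + b*D**2 to hypothetical in vitro calibration data,
# then invert to estimate dose from an observed aberration frequency.
cal_dose = np.array([0.0, 0.25, 0.5, 0.75, 1.0, 1.5])     # Gy
cal_caf = np.array([0.01, 0.03, 0.07, 0.12, 0.19, 0.37])  # aberrant fraction
b, a, c = np.polyfit(cal_dose, cal_caf, 2)                 # quadratic fit

def dose_from_caf(caf):
    # solve b*D**2 + a*D + (c - caf) = 0 and take the physical root
    roots = np.roots([b, a, c - caf])
    return max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0)

print(round(dose_from_caf(0.25), 2))   # estimated dose in Gy
```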
Biomineralization and possible endosulfan degradation pathway adapted by Aspergillus niger.
Bhalerao, Tejomyee S
2013-11-28
Endosulfan is a chlorinated pesticide; its persistence in the environment and toxic effects on biota demand its removal. This study aims to improve the tolerance of the previously isolated fungus Aspergillus niger (A. niger) ARIFCC 1053 to endosulfan. Released chloride, dehalogenase activity, and released proteins were estimated, along with analysis of endosulfan degradation and identification of the pathway. The culture could tolerate 1,000 mg/ml of technical grade endosulfan. Complete disappearance of endosulfan was seen after 168 h of incubation. The degradation could readily be correlated with increases in released chloride, dehalogenase activity, and released protein. Comparative infrared spectral analysis suggested that the endosulfan molecule was degraded efficiently by A. niger ARIFCC 1053. Mass ion values obtained by GC-MS suggested a hypothetical pathway for endosulfan degradation by A. niger ARIFCC 1053. All these results provide a basis for the development of bioremediation strategies to remediate this pollutant in the environment.
The Guderley problem revisited
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramsey, Scott D; Kamm, James R; Bolstad, John H
2009-01-01
The self-similar converging-diverging shock wave problem introduced by Guderley in 1942 has been the source of numerous investigations since its publication. In this paper, we review the simplifications and group invariance properties that lead to a self-similar formulation of this problem from the compressible flow equations for a polytropic gas. The complete solution to the self-similar problem reduces to two coupled nonlinear eigenvalue problems: the eigenvalue of the first is the so-called similarity exponent for the converging flow, and that of the second is a trajectory multiplier for the diverging regime. We provide a clear exposition concerning the reflected shock configuration. Additionally, we introduce a new approximation for the similarity exponent, which we compare with other estimates and numerically computed values. Lastly, we use the Guderley problem as the basis of a quantitative verification analysis of a cell-centered, finite volume, Eulerian compressible flow algorithm.
USGS National Assessment of Oil and Gas Online (NOGA Online)
Biewick, L.H.
2003-01-01
The Central Energy Resources Team (CERT) of the U.S. Geological Survey is providing results of the USGS National Assessment of Oil and Gas online (NOGA Online). In addition to providing resource estimates and geologic reports, NOGA Online includes an internet map application that allows interactive viewing and analysis of assessment data and results. CERT is in the process of reassessing domestic oil and natural gas resources in a series of priority basins in the United States using a Total Petroleum System (TPS) approach where the assessment unit is the basic appraisal unit (rather than the oil and gas play used in the 1995 study). Assessments of undiscovered oil and gas resources in five such priority provinces were recently completed to meet the requirements of the Energy Policy and Conservation Act of 2000 (EPCA 2000). New assessment results are made available at this site on an ongoing basis.
NASA Technical Reports Server (NTRS)
Thanedar, B. D.
1972-01-01
A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The Monte Carlo method is suggested in response to the need for a numerical method, applicable to a bounded medium, to supplement analytical methods of solution, which are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created, from which an algorithm was developed for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a homogeneous medium enclosed by either rectangular or curved boundaries.
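A minimal sketch of the Monte Carlo idea for a bounded medium, with all geometry and counts hypothetical: rays leave the source in random directions, reflect specularly off rectangular walls, and the arrival times near a receiver build up a statistical picture of the field.

```python
import numpy as np

rng = np.random.default_rng(0)
room = np.array([4.0, 3.0])                      # rectangular boundary (m)
src, rcv = np.array([1.0, 1.0]), np.array([3.0, 2.0])
eps, c, step = 0.1, 343.0, 0.02                  # capture radius, sound speed
arrivals = []
for _ in range(2000):                            # one ray per Monte Carlo draw
    th = rng.uniform(0.0, 2.0 * np.pi)
    pos, d, path = src.copy(), np.array([np.cos(th), np.sin(th)]), 0.0
    while path < 30.0:                           # follow each ray up to 30 m
        pos += d * step
        path += step
        for k in range(2):                       # specular wall reflection
            if pos[k] < 0.0 or pos[k] > room[k]:
                pos[k] = np.clip(pos[k], 0.0, room[k])
                d[k] = -d[k]
        if np.linalg.norm(pos - rcv) < eps:      # disturbance reaches receiver
            arrivals.append(path / c)            # signal-path travel time (s)
            break
hist, edges = np.histogram(arrivals, bins=20)    # statistical field estimate
```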
The Doghouse Plot: History, Construction Techniques, and Application
NASA Astrophysics Data System (ADS)
Wilson, John Robert
The Doghouse Plot visually represents an aircraft's performance during combined turn-climb maneuvers. The Doghouse Plot completely describes the turn-climb capability of an aircraft; a single plot demonstrates the relationship between climb performance, turn rate, turn radius, stall margin, and bank angle. Using NASA legacy codes, Empirical Drag Estimation Technique (EDET) and Numerical Propulsion System Simulation (NPSS), it is possible to reverse engineer sufficient basis data for commercial and military aircraft to construct Doghouse Plots. Engineers and operators can then use these to assess their aircraft's full performance envelope. The insight gained from these plots can broaden the understanding of an aircraft's performance and, in turn, broaden the operational scope of some aircraft that would otherwise be limited by the simplifications found in their Airplane Flight Manuals (AFM). More importantly, these plots can build on the current standards of obstacle avoidance and expose risks in operation.
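The turn-performance relations underlying the plot's axes follow from level coordinated-turn kinematics; a short sketch with illustrative numbers (the NASA-code-based drag and thrust models are not reproduced here):

```python
import numpy as np

g = 9.81
def turn_performance(V, phi_deg):
    """Level coordinated turn: load factor, turn rate (deg/s), radius (m)."""
    phi = np.radians(phi_deg)
    n = 1.0 / np.cos(phi)                 # load factor
    omega = g * np.tan(phi) / V           # turn rate, rad/s
    radius = V**2 / (g * np.tan(phi))     # turn radius, m
    return n, np.degrees(omega), radius

n, rate_dps, radius = turn_performance(V=80.0, phi_deg=45.0)
vs_turn = 55.0 * np.sqrt(n)   # stall speed rises with sqrt(n); 55 m/s assumed
```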
Assessing a computerized routine health information system in Mali using LQAS.
Stewart, J C; Schroeder, D G; Marsh, D R; Allhasane, S; Kone, D
2001-09-01
Between 1987 and 1998, Save the Children conducted a child survival programme in Mali with the goal of reducing maternal and child morbidity and mortality. An integral part of this programme was a computerized demographic surveillance and health information system (HIS) that gathered data on individuals on an ongoing basis. The objectives were to assess the overall coverage and quality of the data in the HIS, to identify specific health districts that needed improvements in data collection methods, and to determine particular areas of weakness in data collection. Random samples of 20 mothers with children <5 years were selected in each of 14 health districts. Mothers were interviewed about pregnancies, live births, deaths of children <5, and children's growth monitoring and immunization status. Lot Quality Assurance Sampling (LQAS) was used to identify districts in which records and interview results did not meet predetermined levels of acceptability. Data collected in the interviews were combined to estimate overall coverage and quality. When all variables were analyzed, all 14 lots were rejected, and it was estimated that 52% of all events occurring in the community were registered in ProMIS. Much of this poor performance was due to immunization and growth monitoring data, which were not updated because of printer problems. Coverage of events increased to 92% when immunizations and growth monitoring were excluded, and no lots were rejected. When all variables were analyzed for quality of recorded data, six lots were rejected and the overall estimate was 83%. With immunizations and growth monitoring excluded, overall quality was 86% and no lots were rejected. The comprehensive computerized HIS did not meet expectations, perhaps in part because of the ambitious objective of complete and intensive monitoring of a large population without adequate staff and equipment. Future efforts should consider employing a more targeted and streamlined HIS so that data can be more complete and useful.
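LQAS decisions rest on a binomial decision rule; a sketch of how a threshold for samples of 20 could be chosen (the acceptability levels and error targets below are illustrative, not those used in the Mali study):

```python
from scipy.stats import binom

# Choose the decision threshold d so that districts with truly high coverage
# (say 80%) are rarely rejected while districts with low coverage (say 50%)
# are usually caught.
n, p_good, p_bad = 20, 0.80, 0.50
for d in range(n + 1):                      # reject lot if <= d successes
    alpha = binom.cdf(d, n, p_good)         # P(reject | good district)
    power = binom.cdf(d, n, p_bad)          # P(reject | bad district)
    if alpha <= 0.10 and power >= 0.80:
        print(d, round(alpha, 3), round(power, 3))
```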
NASA Technical Reports Server (NTRS)
Deshler, T.; Snider, J. R.; Vali, G.
1998-01-01
Under the support of this grant a balloon-borne gondola containing a variety of aerosol instruments was developed and flown from Laramie, Wyoming, (41 deg N, 105 deg W) and from Lauder, New Zealand (45 deg S, 170 deg E). The gondola includes instruments to measure the concentrations of condensation nuclei (CN), cloud condensation nuclei (CCN), optically detectable aerosol (OA) (r ≥ 0.15 to 2.0 microns), and optical scattering properties using a nephelometer (lambda = 530 nm). All instruments sampled from a common inlet which was heated to 40°C on ascent and to 160°C on descent. Flights with the CN counter, OA counter, and nephelometer began in July 1994. The CCN counter was added in November 1994, and its engineering problems were solved by June 1995. Since then the flights have included all four instruments, and were completed in January 1998. Altogether there were 20 flights from Laramie, approximately 5 per year, and 2 from Lauder. Of these there were one or more engineering problems on 6 of the flights from Laramie, hence the data are somewhat limited for those 6 flights, while a complete data set was obtained from the other 14 flights. Good CCN data are available from 12 of the Laramie flights. The two flights from Lauder in January 1998 were successful for all measurements. The results from these flights, and the development of the balloon-borne CCN counter, have formed the basis for five conference presentations. The heated and unheated CN and OA measurements have been used to estimate the volatile mass fraction of the aerosol, while comparisons of the nephelometer measurements were used to estimate the light scattering associated with the volatile aerosol. These estimates were calculated for 0.5 km averages of the ascent and descent data between 2.5 km and the tropopause, near 11.5 km.
Zhang, Adah S.; Ostrom, Quinn T.; Kruchko, Carol; Rogers, Lisa; Peereboom, David M.
2017-01-01
Background. Complete prevalence proportions illustrate the burden of disease in a population. This study estimates the 2010 complete prevalence of malignant primary brain tumors overall and by Central Brain Tumor Registry of the United States (CBTRUS) histology groups, and compares the brain tumor prevalence estimates to the complete prevalence of other common cancers as determined by the Surveillance, Epidemiology, and End Results Program (SEER) by age at prevalence (2010): children (0–14 y), adolescent and young adult (AYA) (15–39 y), and adult (40+ y). Methods. Complete prevalence proportions were estimated using a novel regression method extended from the Completeness Index Method, which combines survival and incidence data from multiple sources. In this study, two datasets, CBTRUS and SEER, were used to calculate the complete prevalence estimates of interest. Results. Complete prevalence for malignant primary brain tumors was 47.59/100000 population (22.31, 48.49, and 57.75/100000 for the child, AYA, and adult populations). The most prevalent cancers by age were childhood leukemia (36.65/100000), AYA melanoma of the skin (66.21/100000), and adult female breast cancer (1949.00/100000). The most prevalent CBTRUS histologies were pilocytic astrocytoma in children and AYA (6.82/100000 and 5.92/100000) and glioblastoma in adults (12.76/100000). Conclusions. The relative impact of malignant primary brain tumors is higher among children than any other age group; it emerges as the second most prevalent cancer among children. Complete prevalence estimates for primary malignant brain tumors fill a gap in overall cancer knowledge, providing critical information for public health and health care planning, including treatment, decision making, funding, and advocacy programs. PMID:28039365
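The counting-method core that completeness-index approaches extend can be sketched in a few lines: complete prevalence accumulates past incidence weighted by survival to the index date (all rates below are hypothetical):

```python
import numpy as np

years = np.arange(1, 41)                       # years before the index date
incidence = np.full(40, 7.0)                   # new cases per 100,000/year
survival = 0.9 ** years                        # P(alive) x years after diagnosis

complete_prevalence = np.sum(incidence * survival)   # per 100,000
# Registries with short histories observe only part of this sum; the
# completeness index R = limited-duration / complete prevalence corrects it.
limited = np.sum(incidence[:15] * survival[:15])
completeness_index = limited / complete_prevalence
```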
Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin
2011-06-07
The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics
A robust component mode synthesis method for stochastic damped vibroacoustics
NASA Astrophysics Data System (ADS)
Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine
2010-01-01
In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco-elastic and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient for predicting the dynamic behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response to residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.
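A toy sketch of the basis enrichment described above: keep a few normal modes of a spring-mass chain, append a static residual response, and project the operators onto the enriched (robust) basis. The matrices are illustrative, not a vibroacoustic FE model.

```python
import numpy as np

n = 20
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness (chain)
M = np.eye(n)                                           # mass (identity keeps
w2, phi = np.linalg.eigh(np.linalg.solve(M, K))         # the problem symmetric)
T = phi[:, :4]                                          # truncated modal basis

f_res = np.zeros((n, 1)); f_res[n // 2] = 1.0           # residual force pattern
T_rob = np.hstack([T, np.linalg.solve(K, f_res)])       # enriched (robust) basis
T_rob, _ = np.linalg.qr(T_rob)                          # re-orthonormalize

K_red = T_rob.T @ K @ T_rob                             # reduced operators for
M_red = T_rob.T @ M @ T_rob                             # fast reanalysis / MC runs
```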
Chao, Shih-Wei; Li, Arvin Huang-Te; Chao, Sheng D
2009-09-01
Intermolecular interaction energy data for the methane dimer have been calculated at spectroscopic accuracy and employed to construct an ab initio potential energy surface (PES) for molecular dynamics (MD) simulations of fluid methane properties. The full potential curves of the methane dimer at 12 symmetric conformations were calculated by the supermolecule counterpoise-corrected second-order Møller-Plesset (MP2) perturbation theory. Single-point coupled cluster with single and double and perturbative triple excitations [CCSD(T)] calculations were also carried out to calibrate the MP2 potentials. We employed Pople's medium-size basis sets [up to 6-311++G(3df, 3pd)] and Dunning's correlation consistent basis sets (cc-pVXZ and aug-cc-pVXZ, X = D, T, Q). For each conformer, the intermolecular carbon-carbon separation was sampled in steps of 0.1 Å over the range 3-9 Å, resulting in a total of 732 calculated configuration points. The MP2 binding curves display significant anisotropy with respect to the relative orientations of the dimer. The potential curves at the complete basis set (CBS) limit were estimated using well-established analytical extrapolation schemes. A 4-site potential model with sites located at the hydrogen atoms was used to fit the ab initio potential data. This model stems from a hydrogen-hydrogen repulsion mechanism to explain the stability of the dimer structure. MD simulations using the ab initio PES show quantitative agreement for both the atom-wise radial distribution functions and the self-diffusion coefficients over a wide range of experimental conditions. Copyright 2008 Wiley Periodicals, Inc.
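A sketch of the kind of two-point extrapolation scheme referred to above, using the common E(X) = E_CBS + A·X^-3 form with cc-pVTZ (X = 3) and cc-pVQZ (X = 4) correlation energies; the numbers are hypothetical:

```python
# Two-point complete-basis-set (CBS) extrapolation of correlation energies.
def cbs_two_point(e_x, x, e_y, y):
    """Assuming E(X) = E_CBS + A * X**-3, return the CBS-limit estimate."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

print(cbs_two_point(-0.520, 4, -0.495, 3))   # hypothetical energies, kcal/mol
```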
Time prediction of failure of a type of lamp using a general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted failure time of a lamp. The estimate is for a parametric model, the general composite hazard rate model. The baseline random-time model is the exponential distribution, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function, using an exponential model as its basis. The model is estimated by fitting its parameters through construction of the survival function and the empirical cumulative distribution function. The resulting model is then used to predict the average failure time for this type of lamp. The data are grouped into several intervals with the average failure value in each interval, and the average failure time of the model is calculated on each interval; the p-value obtained from the test is 0.3296.
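The exponential baseline is simple enough to sketch completely: with constant hazard lam, the maximum-likelihood estimate from complete failure-time data is lam = 1/mean, and S(t) = exp(-lam*t). The failure times below are hypothetical.

```python
import numpy as np

times = np.array([310., 450., 520., 700., 880., 1020., 1350., 1600.])  # hours
lam = 1.0 / times.mean()                 # MLE of the constant hazard rate
survival = lambda t: np.exp(-lam * t)    # S(t) = exp(-lam t)
mean_failure_time = 1.0 / lam            # equals the sample mean here
print(round(lam, 5), round(survival(500.0), 3))
```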
A new class of methods for functional connectivity estimation
NASA Astrophysics Data System (ADS)
Lin, Wutu
Measuring functional connectivity from neural recordings is important in understanding processing in cortical networks. Covariance-based methods are the current gold standard for functional connectivity estimation. However, the link between pair-wise correlations and the physiological connections inside the neural network is unclear; therefore, the power of inferring the physiological basis from functional connectivity estimates is limited. To build a stronger tie and better understand the relationship between functional connectivity and the physiological neural network, we need (1) a realistic model to simulate different types of neural recordings with known ground truth for benchmarking, and (2) a new functional connectivity method that produces estimates closely reflecting the physiological basis. In this thesis, I (1) tune a spiking neural network model to match human sleep EEG data, (2) introduce a new class of methods for estimating connectivity from different kinds of neural signals and provide theoretical proof of its superiority, and (3) apply it to simulated fMRI data as an application.
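The covariance-based baseline the thesis argues against is itself only a few lines; a sketch with synthetic stand-in signals:

```python
import numpy as np

# Functional connectivity as the matrix of pairwise correlations between
# recorded channels; signals are synthetic stand-ins for EEG/fMRI series.
rng = np.random.default_rng(0)
latent = rng.standard_normal((1, 1000))                   # shared drive
signals = 0.6 * latent + rng.standard_normal((5, 1000))   # 5 channels
fc = np.corrcoef(signals)        # 5x5 functional connectivity estimate
```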
Twenty-One Reasons to Care about the Psychological Basis of Ownership
ERIC Educational Resources Information Center
Friedman, Ori; Ross, Hildy
2011-01-01
Within psychology, most aspects of ownership have received scant attention or have been overlooked completely. In this chapter, the authors outline 21 reasons why it will be important (and interesting) to understand the psychological basis of ownership of property, including its developmental origins: (1) Daily life; (2) A human universal, and…
Shapira, Gali; Yodfat, Ofer; HaCohen, Arava; Feigin, Paul; Rubin, Richard
2010-01-01
Background Optimal continuous subcutaneous insulin infusion (CSII) therapy emphasizes the relationship between insulin dose and carbohydrate consumption. One widely used tool (bolus calculator) requires the user to enter discrete carbohydrate values; however, many patients might not estimate carbohydrates accurately. This study assessed carbohydrate estimation accuracy in type 1 diabetes CSII users and compared simulated blood glucose (BG) outcomes using the bolus calculator and the “bolus guide,” an alternative system based on ranges of carbohydrate load. Methods Patients (n = 60) estimated the carbohydrate load of a representative sample of meals of known carbohydrate value. The estimated error distribution [coefficient of variation (CV)] was the basis for a computer simulation (n = 1.6 million observations) of insulin recommendations for the bolus guide and bolus calculator, translated into outcome blood glucose (OBG) ranges (≤60, 61–200, >200 mg/dl). Patients (n = 30) completed questionnaires assessing satisfaction with the bolus guide. Results The CV of typical meals ranged from 27.9% to 44.5%. The percentages of simulated OBG values for the calculator and the bolus guide in the ≤60 mg/dl range were 20.8% and 17.2%, respectively, and 13.8% and 15.8%, respectively, in the >200 mg/dl range. The mean and median scores of all bolus guide satisfaction items and ease of learning and use were 4.17 and 4.2, respectively (of 5.0). Conclusion The bolus guide recommendation based on carbohydrate range selection is substantially similar to the calculator based on carbohydrate point estimation and appears to be highly accepted by type 1 diabetes insulin pump users. PMID:20663453
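The contrast between the two dosing approaches can be sketched directly (the insulin-to-carbohydrate ratio and the guide's midpoint rule are assumptions for illustration; the abstract does not specify the bolus guide's internal mapping):

```python
# A bolus calculator uses a point carbohydrate estimate, while a bolus-guide
# style system maps a carbohydrate range to a dose (here via its midpoint).
ICR = 10.0          # g carbohydrate covered per unit of insulin (assumed)

def calculator_bolus(carbs_g):
    return carbs_g / ICR

def guide_bolus(range_low_g, range_high_g):
    return (range_low_g + range_high_g) / 2.0 / ICR

print(calculator_bolus(47.0))      # user guesses 47 g -> 4.7 U
print(guide_bolus(45.0, 60.0))     # user picks the 45-60 g range -> 5.25 U
```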
Ignoffo, Robert; Knapp, Katherine; Barnett, Mitchell; Barbour, Sally Yowell; D'Amato, Steve; Iacovelli, Lew; Knudsen, Jasen; Koontz, Susannah E; Mancini, Robert; McBride, Ali; McCauley, Dayna; Medina, Patrick; O'Bryant, Cindy L; Scarpace, Sarah; Stricker, Steve; Trovato, James A
2016-04-01
With an aging US population, the number of patients who need cancer treatment will increase significantly by 2020. On the basis of a predicted shortage of oncology physicians, nonphysician health care practitioners will need to fill the shortfall in oncology patient visits, and nurse practitioners and physician assistants have already been identified for this purpose. This study proposes that appropriately trained oncology pharmacists can also contribute. The purpose of this study is to estimate the supply of Board of Pharmacy Specialties-certified oncology pharmacists (BCOPs) and their potential contribution to the care of patients with cancer through 2020. Data regarding accredited oncology pharmacy residencies, new BCOPs, and total BCOPs were used to estimate oncology residencies, new BCOPs, and total BCOPs through 2020. A Delphi panel process was used to estimate patient visits, identify patient care services that BCOPs could provide, and identify study limitations. By 2020, there will be an estimated 3,639 BCOPs, and approximately 62% of BCOPs will have completed accredited oncology pharmacy residencies. Delphi panelists came to consensus (at least 80% agreement) on eight patient care services that BCOPs could provide. Although our model estimates that BCOPs could provide 5 to 7 million 30-minute patient visits annually, a sensitivity analysis based on factors that could reduce potential visit availability yielded 2.5 to 3.5 million visits by 2020 with the addition of BCOPs to the health care team. BCOPs can contribute to a projected shortfall in needed patient visits for cancer treatment. BCOPs, along with nurse practitioners and physician assistants, could substantially reduce, but likely not eliminate, the shortfall of providers needed for oncology patient visits. Copyright © 2016 by American Society of Clinical Oncology.
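A back-of-envelope version of the capacity model (every parameter below is an assumption for illustration, not the Delphi panel's figure) shows how estimates of this magnitude arise:

```python
# Workforce capacity model: BCOP head count x clinical time x visit length.
bcops_2020 = 3639                    # projected supply, from the abstract
clinical_fte = 0.35                  # assumed fraction of time in direct care
hours_per_year = 2080
visit_hours = 0.5                    # 30-minute visits

visits = bcops_2020 * clinical_fte * hours_per_year * (1 / visit_hours)
print(f"{visits / 1e6:.1f} million visits/year")   # ~5.3 million
```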
Impacts of irrigation on groundwater depletion in the North China Plain
NASA Astrophysics Data System (ADS)
Ge, Yuqi; Lei, Huimin
2017-04-01
Groundwater is an essential water supply for agriculture in the North China Plain (NCP), one of the most important food production areas in China. In the past decades, excessive groundwater-fed irrigation in this area has caused a sharp decline in the groundwater table. However, accurate monitoring of net groundwater exploitation is still difficult, mainly due to the lack of a complete groundwater exploitation monitoring network. This hinders an accurate evaluation of the effects of agricultural management on the shallow groundwater table. In this study, we use an existing method to estimate the net irrigation amount at the county level, and evaluate the effects of current agricultural management on groundwater depletion. We apply this method in five typical counties in the NCP to estimate the annual net irrigation amount from 2002 to 2015, based on meteorological data (2002-2015) and remote sensing ET data (2002-2015). First, an agro-hydrological model (Soil-Water-Atmosphere-Plant, SWAP) is calibrated and validated at field scale based on measured data from flux towers. Second, the model is established at regional scale by spatial discretization. Third, we use an optimization tool (Parameter ESTimation, PEST) to optimize the irrigation parameter in SWAP so that the evapotranspiration (ET) simulated by SWAP matches the remote sensing ET as closely as possible. The irrigation amount simulated with the optimized parameter is taken as the estimated net irrigation amount. Finally, the contribution of agricultural management to the observed groundwater depletion is assessed by calculating the groundwater balance, which considers the estimated net irrigation amount, observed lateral groundwater flow, rainfall recharge, deep seepage, evaporation from phreatic water, and domestic water use. The study is expected to give a scientific basis for alleviating the over-exploitation of groundwater resources in the area.
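The attribution step reduces to a groundwater balance; a sketch with hypothetical fluxes (mm/year) shows how the estimated net irrigation enters the computed water-table change:

```python
# Shallow-groundwater balance (all fluxes in mm/year; values hypothetical).
net_irrigation = 180.0        # estimated by the SWAP/PEST optimization
domestic_use = 25.0
rainfall_recharge = 120.0
lateral_inflow = 10.0
phreatic_evaporation = 15.0

storage_change = (rainfall_recharge + lateral_inflow
                  - net_irrigation - domestic_use - phreatic_evaporation)
# Divide by specific yield to convert storage change to water-table change.
specific_yield = 0.05
table_change_m = storage_change / 1000.0 / specific_yield
print(table_change_m)   # negative -> groundwater table decline (m/year)
```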
Locally indistinguishable orthogonal product bases in arbitrary bipartite quantum system
Xu, Guang-Bao; Yang, Ying-Hui; Wen, Qiao-Yan; Qin, Su-Juan; Gao, Fei
2016-01-01
As is known, an unextendible product basis (UPB) is an incomplete basis whose members cannot be perfectly distinguished by local operations and classical communication. However, very little is known about those incomplete and locally indistinguishable product bases that are not UPBs. In this paper, we first construct a series of orthogonal product bases that are completable but not locally distinguishable in a general m ⊗ n (m ≥ 3 and n ≥ 3) quantum system. In particular, we give the smallest number reported so far of locally indistinguishable states forming a completable orthogonal product basis in arbitrary quantum systems. Furthermore, we construct a series of small and locally indistinguishable orthogonal product bases in m ⊗ n (m ≥ 3 and n ≥ 3). All the results lead to a better understanding of the structures of locally indistinguishable product bases in arbitrary bipartite quantum systems. PMID:27503634
Adaptive Sparse Representation for Source Localization with Gain/Phase Errors
Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin
2011-01-01
Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
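A sketch of the SR-based DOA step on a uniform linear array, using ISTA (iterative soft thresholding) for the L1 minimization; the gain/phase errors and the paper's adaptive basis update are not modeled here:

```python
import numpy as np

M, grid = 8, np.radians(np.arange(-90, 91, 2.0))       # sensors, angle grid
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))  # steering basis

rng = np.random.default_rng(0)
x_true = np.zeros(grid.size, complex)
x_true[[40, 55]] = [1.0, 0.8]                           # two sources on-grid
y = A @ x_true + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

L = np.linalg.norm(A, 2) ** 2                           # Lipschitz constant
x, lam = np.zeros(grid.size, complex), 0.05
for _ in range(500):                                    # ISTA iterations
    g = x + A.conj().T @ (y - A @ x) / L                # gradient step
    x = np.maximum(np.abs(g) - lam / L, 0) * np.exp(1j * np.angle(g))  # shrink

peaks = np.degrees(grid[np.abs(x) > 0.3])               # estimated DOAs
```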
UAV Control on the Basis of 3D Landmark Bearing-Only Observations.
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-11-27
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the use of a 3D RANSAC algorithm developed on the basis of the landmarks' position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurement approach permits unbiased estimation of the UAV position with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, James M.; Prescott, Ryan; Dawson, Jericah M.
2014-11-01
Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, based on leveraging a fully funded, Sandia-executed NDC Modernization project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer: Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.
NASA Astrophysics Data System (ADS)
Pradhan, Moumita; Pradhan, Dinesh; Bandyopadhyay, G.
2010-10-01
Fuzzy systems have demonstrated their ability to solve different kinds of problems in various application domains, and there is increasing interest in applying fuzzy concepts to improve the tasks of a system. Here, a case study of a thermal power plant is considered. The existing time estimates represent the time to complete tasks. Applying the fuzzy linear approach, it becomes clear that at each confidence level less time is needed to complete the tasks; a shorter schedule in turn requires less cost. The objective of this paper is to show how a system becomes more efficient when the fuzzy linear approach is applied, by optimizing the time estimates so that all tasks are performed on appropriate schedules. For the case study, the optimistic time (to), pessimistic time (tp), and most likely time (tm) are taken as data collected from the thermal power plant. These estimates are used to calculate the expected time (te), which represents the time to complete a particular task taking all eventualities into account. Using the project evaluation and review technique (PERT) and critical path method (CPM) concepts, the critical path duration (CPD) of the project is calculated; it indicates that the project has a fifty percent probability of being completed in fifty days. Using the critical path duration and the standard deviation of the critical path, the probability of completing the whole project by a given date can then be obtained from the normal distribution. Using the trapezoidal rule on the four time estimates (to, tm, tp, te), we calculate the defuzzified value of the time estimates. For the fuzzy range, four confidence levels, 0.4, 0.6, 0.8 and 1, are considered. Our study shows that time estimates at confidence levels between 0.4 and 0.8 give better results than the other confidence levels.
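The PERT quantities behind this analysis are compact enough to sketch: expected task time t_e = (t_o + 4*t_m + t_p)/6 and a normal approximation on the critical path, which reproduces the fifty-percent-at-CPD statement (the numbers below are hypothetical):

```python
from math import erf, sqrt

def expected_time(to, tm, tp):
    return (to + 4 * tm + tp) / 6.0          # PERT expected time for one task

def completion_prob(target, cpd, sd):
    z = (target - cpd) / sd                  # normal approximation on the CP
    return 0.5 * (1 + erf(z / sqrt(2)))

te = expected_time(to=4, tm=6, tp=10)             # 6.33 days for one task
print(completion_prob(target=50, cpd=50, sd=5))   # 0.5: fifty percent at CPD
print(completion_prob(target=55, cpd=50, sd=5))   # ~0.84 by day 55
```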
Ronald E. McRoberts; Mark D. Nelson; Daniel G. Wendt
2002-01-01
For two large study areas in Minnesota, USA, stratified estimation using classified Landsat Thematic Mapper satellite imagery as the basis for stratification was used to estimate forest area. Measurements of forest inventory plots obtained for a 12-month period in 1998 and 1999 were used as the source of data for within-stratum estimates. These measurements further...
Stratified estimates of forest area using the k-nearest neighbors technique and satellite imagery
Ronald E. McRoberts; Mark D. Nelson; Daniel Wendt
2002-01-01
For two study areas in Minnesota, stratified estimation using Landsat Thematic Mapper satellite imagery as the basis for stratification was used to estimate forest area. Measurements of forest inventory plots obtained for a 12-month period in 1998 and 1999 were used as the source of data for within-strata estimates. These measurements further served as calibration data...
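Both studies rest on the standard stratified estimator; a sketch with hypothetical stratum weights, plot means, and variances:

```python
import numpy as np

# Stratified estimation of forest area proportion: strata come from the
# classified imagery, within-stratum means from inventory plots.
W = np.array([0.55, 0.30, 0.15])          # stratum area weights (sum to 1)
ybar = np.array([0.05, 0.45, 0.90])       # mean forest proportion per stratum
s2 = np.array([0.02, 0.12, 0.07])         # within-stratum sample variances
n = np.array([120, 80, 40])               # plots per stratum

p_forest = np.sum(W * ybar)               # stratified mean
var_hat = np.sum(W**2 * s2 / n)           # variance of the estimate
print(round(p_forest, 3), round(np.sqrt(var_hat), 4))
```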
The risk of microbial keratitis with overnight corneal reshaping lenses.
Bullimore, Mark A; Sinnott, Loraine T; Jones-Jordan, Lisa A
2013-09-01
To estimate the incidence of microbial keratitis (MK) associated with overnight corneal reshaping contact lenses and to compare rates in children and adults. A retrospective study of randomly selected practitioners, stratified by order volume and lens company, was conducted. Practitioners were invited to participate, and those agreeing were asked to provide deidentified patient information for up to 50 lens orders and to complete a comprehensive event form for any of these patients who had attended an unscheduled visit for a painful red eye. Duration of contact lens wear was calculated from the original fitting date or January 2005 (whichever was later) to when the patient was last seen by the practitioner wearing the lenses on a regular basis. Cases of MK were classified by majority decision of a 5-member expert panel. Of the 191 practitioners who could be contacted, 119 (62%) agreed to participate. Subsequently, 11 withdrew, 22 did not respond, and 86 (43%) returned completed forms corresponding to 2202 lens orders and 1494 patients. Limiting the sample to those patients with at least 3 months of documented contact lens wear since 2005 resulted in a sample of 1317 patients: 640 adults (49%) and 677 children (51%), representing 2599 patient-years of wear (adults = 1164; children = 1435). Eight events of corneal infiltrates associated with a painful red eye were reported (six in children and two in adults). Two were classified as MK. Both occurred in children, but neither resulted in a loss of visual acuity. The overall estimated incidence of MK is 7.7 per 10,000 years of wear (95% confidence interval [CI] = 0.9 to 27.8). For children, the estimated incidence of MK is 13.9 per 10,000 patient-years (95% CI = 1.7 to 50.4). For adults, the estimated incidence of MK is 0 per 10,000 patient-years (95% CI = 0 to 31.7). The risk of MK with overnight corneal reshaping contact lenses is similar to that with other overnight modalities. The fact that the CIs for the estimated rates overlap should not be interpreted as evidence of no difference. True differences of fewer than 50 cases per 10,000 patient-years were beyond the study's power of detection.
Liu, Yangfan; Bolton, J Stuart
2016-08-01
The (Cartesian) multipole series, i.e., the series comprising monopole, dipoles, quadrupoles, etc., can be used, as an alternative to the spherical or cylindrical wave series, in representing sound fields in a wide range of problems, such as source radiation, sound scattering, etc. The proofs of the completeness of the spherical and cylindrical wave series in these problems are classical results, and it is also generally agreed that the Cartesian multipole series spans the same space as the spherical waves: a rigorous mathematical proof of that statement has, however, not been presented. In the present work, such a proof of the completeness of the Cartesian multipole series, both in two and three dimensions, is given, and the linear dependence relations among different orders of multipoles are discussed, which then allows one to easily extract a basis from the multipole series. In particular, it is concluded that the multipoles comprising the two highest orders in the series form a basis of the whole series, since the multipoles of all the lower source orders can be expressed as a linear combination of that basis.
Estimating the risk of a scuba diving fatality in Australia.
Lippmann, John; Stevenson, Christopher; McD Taylor, David; Williams, Jo
2016-12-01
There are few data available on which to estimate the risk of death for Australian divers. This report estimates the risk of a scuba diving fatality for Australian residents, international tourists diving in Queensland, and clients of a large Victorian dive operator. Numerators for the estimates were obtained from the Divers Alert Network Asia-Pacific dive fatality database. Denominators were derived from three sources: Participation in Exercise, Recreation and Sport Surveys, 2001-2010 (Australian resident diving activity data); Tourism Research Australia surveys of international visitors to Queensland 2006-2014; and a dive operator in Victoria 2007-2014. Annual fatality rates (AFR) and 95% confidence intervals (95% CI) were calculated using an exact binomial test. Estimated AFRs were: 0.48 (0.37-0.59) deaths per 100,000 dives, or 8.73 (6.85-10.96) deaths per 100,000 divers for Australian residents; 0.12 (0.05-0.25) deaths per 100,000 dives, or 0.46 (0.20-0.91) deaths per 100,000 divers for international visitors to Queensland; and 1.64 (0.20-5.93) deaths per 100,000 dives for the dive operator in Victoria. On a per diver basis, Australian residents are estimated to be almost twenty times more likely to die whilst scuba diving than are international visitors to Queensland; on a per dive basis the difference falls to about fourfold. On a per dive basis, divers in Victoria are fourteen times more likely to die than are Queensland international tourists. Although some of the estimates are based on potentially unreliable denominator data extrapolated from surveys, the diving fatality rates in Australia appear to vary by State, being considerably lower in Queensland than in Victoria. These estimates are similar to or lower than comparable overseas estimates, although the reliability of all such measurements varies with study size and the accuracy of the data available.
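The rate-and-exact-CI computation is easy to reproduce; with two deaths and a denominator of roughly 122,000 dives (an inferred, hypothetical figure, not taken from the paper), the Victorian operator's 1.64 (0.20-5.93) per 100,000 dives falls out:

```python
from scipy.stats import binomtest

deaths, dives = 2, 122_000          # hypothetical denominator for illustration
res = binomtest(deaths, dives)
lo, hi = res.proportion_ci(confidence_level=0.95, method="exact")
print(f"{1e5 * deaths / dives:.2f} per 100,000 "
      f"(95% CI {1e5 * lo:.2f}-{1e5 * hi:.2f})")
```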
River Runoff Estimates on the Basis of Satellite-Derived Surface Currents and Water Levels
NASA Astrophysics Data System (ADS)
Gruenler, S.; Romeiser, R.; Stammer, D.
2007-12-01
One promising technique for river runoff estimates from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, permits current measurements by ATI in an experimental mode of operation. Based on numerical simulations, we present the first findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated and a dedicated data synthesis system for river discharge estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at selected test sites. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model of the German Federal Waterways Engineering and Research Institute (BAW) are used as the reference data set and as input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. For example, runoff estimates on the basis of measured surface current fields and river widths from TerraSAR-X and water levels from radar altimetry are simulated. Despite the simplicity of some of the applied methods, the results provide a quite comprehensive picture of the Elbe river runoff dynamics. Although the satellite-based river runoff estimates exhibit a lower accuracy in comparison to traditional gauge measurements, the proposed measuring strategies are quite promising for the monitoring of river discharge dynamics in regions where only sparse in-situ measurements are available. We discuss the applicability to a number of major rivers around the world.
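A sketch of the simplest discharge estimate from the satellite observables (all numbers hypothetical, and the 0.85 surface-to-mean velocity factor is a conventional assumption rather than the project's calibrated value):

```python
# Discharge from SAR width, altimeter-derived depth, and ATI surface current.
width_m = 350.0             # river width from the SAR image
depth_m = 4.2               # from altimeter water level minus bed elevation
v_surface = 1.10            # m/s, ATI-derived surface current
alpha = 0.85                # assumed ratio of mean to surface velocity

discharge = width_m * depth_m * alpha * v_surface   # m^3/s
print(f"{discharge:.0f} m^3/s")                     # ~1374 m^3/s
```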
Joint amalgamation of most parsimonious reconciled gene trees
Scornavacca, Celine; Jacox, Edwin; Szöllősi, Gergely J.
2015-01-01
Motivation: Traditionally, gene phylogenies have been reconstructed solely on the basis of molecular sequences; this, however, often does not provide enough information to distinguish between statistically equivalent relationships. To address this problem, several recent methods have incorporated information on the species phylogeny in gene tree reconstruction, leading to dramatic improvements in accuracy. Probabilistic methods are able to estimate all model parameters but are computationally expensive; parsimony methods, though generally more efficient, require a prior estimate of parameters and of the statistical support. Results: Here, we present the Tree Estimation using Reconciliation (TERA) algorithm, a parsimony-based, species-tree-aware method for gene tree reconstruction based on a scoring scheme combining duplication, transfer and loss costs with an estimate of the sequence likelihood. TERA explores all reconciled gene trees that can be amalgamated from a sample of gene trees. Using a large-scale simulated dataset, we demonstrate that TERA achieves the same accuracy as the corresponding probabilistic method while being faster, and outperforms other parsimony-based methods in both accuracy and speed. Running TERA on a set of 1099 homologous gene families from complete cyanobacterial genomes, we find that incorporating knowledge of the species tree results in a two-thirds reduction in the number of apparent transfer events. Availability and implementation: The algorithm is implemented in our program TERA, which is freely available from http://mbb.univ-montp2.fr/MBB/download_sources/16__TERA. Contact: celine.scornavacca@univ-montp2.fr, ssolo@angel.elte.hu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25380957
Krompecher, T
1994-10-21
The development of the intensity of rigor mortis was monitored in nine groups of rats. The measurements were initiated after 2, 4, 5, 6, 8, 12, 15, 24, and 48 h post mortem (p.m.) and lasted 5-9 h, which ideally should correspond to the usual procedure after the discovery of a corpse. The experiments were carried out at an ambient temperature of 24 degrees C. Measurements initiated early after death resulted in curves with a rising portion, a plateau, and a descending slope. Delaying the initial measurement translated into shorter rising portions, and curves initiated 8 h p.m. or later are comprised of a plateau and/or a downward slope only. Three different phases were observed suggesting simple rules that can help estimate the time since death: (1) if an increase in intensity was found, the initial measurements were conducted not later than 5 h p.m.; (2) if only a decrease in intensity was observed, the initial measurements were conducted not earlier than 7 h p.m.; and (3) at 24 h p.m., the resolution is complete, and no further changes in intensity should occur. Our results clearly demonstrate that repeated measurements of the intensity of rigor mortis allow a more accurate estimation of the time since death of the experimental animals than the single measurement method used earlier. A critical review of the literature on the estimation of time since death on the basis of objective measurements of the intensity of rigor mortis is also presented.
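The three rules are algorithmic enough to encode directly; a sketch (bounds in hours post mortem, with the upper bound for the decreasing case taken from the 24 h resolution rule):

```python
def time_since_death_bounds(intensity_increased, intensity_decreased):
    """Return (lower, upper) bounds in hours p.m.; None means unbounded."""
    if intensity_increased:
        return (None, 5.0)    # measurements began no later than 5 h p.m.
    if intensity_decreased:
        return (7.0, 24.0)    # began no earlier than 7 h; resolution by 24 h
    return (24.0, None)       # no change: resolution complete, >= 24 h p.m.

print(time_since_death_bounds(intensity_increased=False,
                              intensity_decreased=True))
```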
Measuring diet cost at the individual level: a comparison of three methods.
Monsivais, P; Perrigue, M M; Adams, S L; Drewnowski, A
2013-11-01
Household-level food spending data are not suitable for population-based studies of the economics of nutrition. This study compared three methods of deriving diet cost at the individual level. Adult men and women (n=164) completed 4-day diet diaries and a food frequency questionnaire (FFQ). Food expenditures over 4 weeks and supermarket prices for 384 foods were obtained. Diet costs (US$/day) were estimated using: (1) diet diaries and expenditures; (2) diet diaries and supermarket prices; and (3) FFQs and supermarket prices. Agreement between the three methods was assessed on the basis of Pearson correlations and limits of agreement. Income-related differences in diet costs were estimated using general linear models. Diet diaries yielded mean (s.d.) diet costs of $10.04 (4.27) based on Method 1 and $8.28 (2.32) based on Method 2. FFQs yielded mean diet costs of $7.66 (2.72) based on Method 3. Correlations between energy intakes and costs were highest for Method 3 (r^2 = 0.66), lower for Method 2 (r^2 = 0.24) and lowest for Method 1 (r^2 = 0.06). Cost estimates were significantly associated with household incomes. The weak association between food expenditures and food intake using Method 1 makes it least suitable for diet and health research. However, merging supermarket food prices with standard dietary assessment tools can provide estimates of individual diet cost that are more closely associated with food consumed. The derivation of individual diet cost can provide insights into some of the economic determinants of food choice, diet quality and health.
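Method 3 amounts to merging FFQ frequencies with supermarket prices; a sketch with hypothetical foods, portions, and prices:

```python
# Individual diet cost from FFQ intake frequencies and supermarket prices.
ffq = {"milk": 2.0, "bread": 3.0, "apples": 1.5}     # servings per day
portion_g = {"milk": 244, "bread": 28, "apples": 182}
price_per_100g = {"milk": 0.10, "bread": 0.35, "apples": 0.22}  # US$

cost = sum(ffq[f] * portion_g[f] / 100 * price_per_100g[f] for f in ffq)
print(f"${cost:.2f}/day")   # partial diet cost over these three foods
```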
Robust estimation for ordinary differential equation models.
Cao, J; Wang, L; Xu, J
2011-12-01
Applied scientists often like to use ordinary differential equations (ODEs) to model complex dynamic processes that arise in biology, engineering, medicine, and many other areas. It is interesting but challenging to estimate ODE parameters from noisy data, especially when the data have some outliers. We propose a robust method to address this problem. The dynamic process is represented with a nonparametric function, which is a linear combination of basis functions. The nonparametric function is estimated by a robust penalized smoothing method. The penalty term is defined with the parametric ODE model, which controls the roughness of the nonparametric function and maintains the fidelity of the nonparametric function to the ODE model. The basis coefficients and ODE parameters are estimated in two nested levels of optimization. The coefficient estimates are treated as an implicit function of ODE parameters, which enables one to derive the analytic gradients for optimization using the implicit function theorem. Simulation studies show that the robust method gives satisfactory estimates for the ODE parameters from noisy data with outliers. The robust method is demonstrated by estimating a predator-prey ODE model from real ecological data. © 2011, The International Biometric Society.
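A simplified sketch of the penalized-smoothing idea for the scalar ODE x' = -theta*x: a basis expansion with a robust (Huber) data loss and an ODE-fidelity penalty. Joint optimization over coefficients and theta is used here for brevity, in place of the paper's nested two-level scheme; the data are synthetic with injected outliers.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 5, 60)
y = np.exp(-0.8 * t) + 0.02 * np.random.default_rng(0).standard_normal(60)
y[::15] += 0.5                                        # outliers

centers = np.linspace(0, 5, 12)
Phi = np.exp(-0.5 * ((t[:, None] - centers) / 0.5) ** 2)   # Gaussian basis
dPhi = Phi * (-(t[:, None] - centers) / 0.5**2)            # its time derivative

def huber(r, d=0.05):
    return np.where(np.abs(r) <= d, 0.5 * r**2, d * (np.abs(r) - 0.5 * d))

def objective(p, lam=10.0):
    c, theta = p[:-1], p[-1]
    fit = huber(y - Phi @ c).sum()                    # robust data fidelity
    ode_pen = np.sum((dPhi @ c + theta * (Phi @ c)) ** 2)   # x' + theta*x = 0
    return fit + lam * ode_pen

p0 = np.concatenate([np.linalg.lstsq(Phi, y, rcond=None)[0], [1.0]])
res = minimize(objective, p0, method="L-BFGS-B")
print(round(res.x[-1], 2))   # estimated theta; near 0.8 for this synthetic data
```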
Exploring soils and ecohydrological structure in small watersheds using electromagnetic induction
USDA-ARS?s Scientific Manuscript database
Soil moisture sensors generally strive to use the real permittivity as the basis for estimating soil water content from measured electrical properties of soil. It has been shown that a reasonably good general calibration can be developed for mineral soils on this basis. However, at the low measureme...
Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Burken, John; Ishihara, Abraham
2011-01-01
This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.
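A least-squares fit in a Chebyshev basis is available directly in NumPy. A minimal sketch on a synthetic scalar nonlinearity follows; the target function and noise level are illustrative, not from the flight-control simulations.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Least-squares approximation of an unknown nonlinearity with a
# Chebyshev basis; orthogonality keeps the normal equations well
# conditioned, which aids convergence of the parameter estimates.
x = np.linspace(-1.0, 1.0, 200)
f = np.tanh(3.0 * x) + 0.05 * np.random.default_rng(1).normal(size=x.size)

coeffs = C.chebfit(x, f, deg=9)   # least-squares Chebyshev coefficients
approx = C.chebval(x, coeffs)
print("max abs error:", np.abs(approx - np.tanh(3.0 * x)).max())
```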
A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications
NASA Technical Reports Server (NTRS)
Phan, Minh Q.
1998-01-01
This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.
Advanced, Low/Zero Emission Boiler Design and Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babcock /Wilcox; Illinois State Geological; Worley Parsons
2007-06-30
In partnership with the U.S. Department of Energy's National Energy Technology Laboratory, B&W and Air Liquide are developing and optimizing the oxy-combustion process for retrofitting existing boilers as well as new plants. The main objectives of the project are to: (1) demonstrate the feasibility of the oxy-combustion technology with flue gas recycle in a 5-million Btu/hr coal-fired pilot boiler, (2) measure its performance in terms of emissions and boiler efficiency while selecting the right oxygen injection and flue gas recycle strategies, and (3) perform technical and economic feasibility studies for application of the technology in demonstration and commercial scale boilers. This document summarizes the work performed during the period of performance of the project (Oct 2002 to June 2007). Detailed technical results are reported in corresponding topical reports that are attached as an appendix to this report. Task 1 (Site Preparation) was completed in 2003. The experimental pilot-scale O2/CO2 combustion tests of Task 2 (experimental test performance) were completed in Q2 2004. Process simulation and cost assessment under Task 3 (Techno-Economic Study) were completed in Q1 2005. The topical report on Task 3 was finalized and submitted to DOE in Q3 2005. The calculations of Task 4 (Retrofit Recommendation and Preliminary Design of a New Generation Boiler) were completed in 2004. In Task 6 (engineering study on retrofit applications), the engineering study on a 25MWe unit was completed in Q2 2008, along with the corresponding cost assessment. In Task 7 (evaluation of new oxy-fuel power plant concepts), based on the design basis document prepared in 2005, the design and cost estimates of the Air Separation Units, the boiler islands, and the CO2 compression trains were completed for both the supercritical and ultra-supercritical case studies. The final report of Task 7 was published by DOE in Oct 2007.
ERIC Educational Resources Information Center
Wenjuan, Hao; Rui, Liang
2016-01-01
Teaching is a spiraling, rising process. A complete teaching process should comprise five parts: theoretical basis, goal orientation, operating procedures, implementation conditions, and assessment. On the basis of genre knowledge, the content-based approach, and the process approach, this text constructs the Teaching Model of College Writing Instruction, in…
40 CFR 63.2855 - How do I determine the quantity of oilseed processed?
Code of Federal Regulations, 2010 CFR
2010-07-01
... oilseed measurements must be determined on an as received basis, as defined in § 63.2872. The as received... accounting month rather than a calendar month basis, and you have 12 complete accounting months of approximately equal duration in a calendar year, you may substitute the accounting month time interval for the...
Code of Federal Regulations, 2011 CFR
2011-07-01
... matter (PM) in excess of: (i) 0.30 pound per ton of feed (dry basis) to the kiln if construction... conducted by § 60.8 is completed, you may not discharge into the atmosphere from any clinker cooler any gases which: (1) Contain PM in excess of: (i) 0.10 pound per ton of feed (dry basis) to the kiln if...
Code of Federal Regulations, 2012 CFR
2012-07-01
... matter (PM) in excess of: (i) 0.30 pound per ton of feed (dry basis) to the kiln if construction... conducted by § 60.8 is completed, you may not discharge into the atmosphere from any clinker cooler any gases which: (1) Contain PM in excess of: (i) 0.10 pound per ton of feed (dry basis) to the kiln if...
NASA Astrophysics Data System (ADS)
Kislyakov, M. A.; Chernov, V. A.; Maksimkin, V. L.; Bozhin, Yu. M.
2017-12-01
The article deals with modern methods of monitoring the state and predicting the service life of electric machines. In 50% of cases, failure of electric machines is associated with insulation damage. As promising nondestructive monitoring techniques, methods based on investigating the polarization processes occurring in insulating materials are proposed. To improve the accuracy of determining the state of the insulation, a multiparametric approach is considered, which serves as a basis for the development of an expert system for estimating the state of health.
Sound propagation elements in evaluation of en route noise of advanced turbofan aircraft
NASA Technical Reports Server (NTRS)
Sutherland, Louis C.; Wesler, John
1990-01-01
Cruise noise from an advanced turboprop aircraft is reviewed on the basis of available wind tunnel data to estimate the aircraft noise signature at the source. Available analytical models are used to evaluate the sound levels at the ground. The analysis allows reasonable estimates to be made of the community noise levels that might be generated during cruise by such aircraft, provides the basis for preliminary comparisons with available data on the noise of existing aircraft during climb, and helps to identify the dominant elements of the sound propagation models applicable to this situation.
Herbert A. Knight; Joe P. McClure
1966-01-01
This report presents the principal findings of the third Forest Survey of North Carolina's timber resource. The survey, conducted by the Southeastern Forest Experiment Station, was begun in August 1961 and completed in November 1964. Results of two previous surveys, completed in 1938 and 1958, provide the basis for evaluating and interpreting the significance of...
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
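The Jeffreys interval recommended here is the equal-tailed credible interval under a Beta(1/2, 1/2) prior, so it can be computed directly from beta quantiles. A minimal sketch with made-up cell counts follows; the usual endpoint adjustments at k = 0 and k = n are included.

```python
from scipy.stats import beta

def jeffreys_interval(k, n, conf=0.95):
    """Jeffreys interval for a binomial proportion: equal-tailed
    credible interval under the Jeffreys prior Beta(1/2, 1/2)."""
    a = (1.0 - conf) / 2.0
    lo = beta.ppf(a, k + 0.5, n - k + 0.5) if k > 0 else 0.0
    hi = beta.ppf(1.0 - a, k + 0.5, n - k + 0.5) if k < n else 1.0
    return lo, hi

# e.g. 4 of 60 equal-area grid cells show high concentrations
print(jeffreys_interval(4, 60))
```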
2015-08-01
Likelihood Estimation Method for Completely Separated and Quasi-Completely Separated Data for a Dose-Response Model (ECBC-TN-068). Kyong H. Park; Steven J. Lagan. Research and Technology Directorate, August 2015. Approved for public release. [Only cover-page and reference-list fragments of this record survive, including citations to McCullagh and Nelder, Generalized Linear Models, 2nd ed., Chapman and Hall, 1989, and Johnston, Econometric Methods, 3rd ed., McGraw-Hill.]
Estimating aspen volume and weight for individual trees, diameter classes, or entire stands.
Bryce E. Schlaegel
1975-01-01
Presents allometric volume and weight equations for Minnesota quaking aspen. Volume, green weight, and dry weight estimates can be made for wood, bark, and limbs on the basis of individual trees, diameter classes, or entire stands.
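Allometric equations of this kind are commonly fit as power laws, V = a·D^b, which are linear in log-log space. The sketch below uses invented diameter-volume pairs; the coefficients it produces are illustrative and are not Schlaegel's published values.

```python
import numpy as np

# Fit a power-law allometric equation V = a * D^b by ordinary
# least squares on log-transformed data (made-up sample trees).
dbh = np.array([10.0, 14.0, 18.0, 22.0, 26.0])   # diameter, cm
vol = np.array([0.04, 0.11, 0.24, 0.45, 0.74])   # stem volume, m^3

b, log_a = np.polyfit(np.log(dbh), np.log(vol), 1)
a = np.exp(log_a)
print(f"V ~= {a:.5f} * D^{b:.2f}")
```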
The 2-24 μm source counts from the AKARI North Ecliptic Pole survey
NASA Astrophysics Data System (ADS)
Murata, K.; Pearson, C. P.; Goto, T.; Kim, S. J.; Matsuhara, H.; Wada, T.
2014-11-01
We present herein galaxy number counts of the nine bands in the 2-24 μm range on the basis of the AKARI North Ecliptic Pole (NEP) surveys. The number counts are derived from NEP-deep and NEP-wide surveys, which cover areas of 0.5 and 5.8 deg2, respectively. To produce reliable number counts, the sources were extracted from recently updated images. Completeness and difference between observed and intrinsic magnitudes were corrected by Monte Carlo simulation. Stellar counts were subtracted by using the stellar fraction estimated from optical data. The resultant source counts are given down to the 80 per cent completeness limit; 0.18, 0.16, 0.10, 0.05, 0.06, 0.10, 0.15, 0.16 and 0.44 mJy in the 2.4, 3.2, 4.1, 7, 9, 11, 15, 18 and 24 μm bands, respectively. On the bright side of all bands, the count distribution is flat, consistent with the Euclidean universe, while on the faint side, the counts deviate, suggesting that the galaxy population of the distant universe is evolving. These results are generally consistent with previous galaxy counts in similar wavebands. We also compare our counts with evolutionary models and find them in good agreement. By integrating the models down to the 80 per cent completeness limits, we calculate that the AKARI NEP survey resolves 20-50 per cent of the cosmic infrared background, depending on the wavebands.
NASA Astrophysics Data System (ADS)
Rui, Zhenhua
This study analyzes historical cost data of 412 pipelines and 220 compressor stations. On the basis of this analysis, the study also evaluates the feasibility of an Alaska in-state gas pipeline using Monte Carlo simulation techniques. Analysis of pipeline construction costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary by diameter, length, volume, year, and location. Overall average learning rates for pipeline material and labor costs are 6.1% and 12.4%, respectively. Overall average cost shares for pipeline material, labor, miscellaneous, and right of way (ROW) are 31%, 40%, 23%, and 7%, respectively. Regression models are developed to estimate pipeline component costs for different lengths, cross-sectional areas, and locations. An analysis of inaccuracy in pipeline cost estimation demonstrates that the cost estimation of pipeline cost components is biased except for in the case of total costs. Overall overrun rates for pipeline material, labor, miscellaneous, ROW, and total costs are 4.9%, 22.4%, -0.9%, 9.1%, and 6.5%, respectively, and project size, capacity, diameter, location, and year of completion have different degrees of impacts on cost overruns of pipeline cost components. Analysis of compressor station costs shows that component costs, shares of cost components, and learning rates for material and labor costs vary in terms of capacity, year, and location. Average learning rates for compressor station material and labor costs are 12.1% and 7.48%, respectively. Overall average cost shares of material, labor, miscellaneous, and ROW are 50.6%, 27.2%, 21.5%, and 0.8%, respectively. Regression models are developed to estimate compressor station component costs in different capacities and locations. An investigation into inaccuracies in compressor station cost estimation demonstrates that the cost estimation for compressor stations is biased except for in the case of material costs. Overall average overrun rates for compressor station material, labor, miscellaneous, land, and total costs are 3%, 60%, 2%, -14%, and 11%, respectively, and cost overruns for cost components are influenced by location and year of completion to different degrees. Monte Carlo models are developed and simulated to evaluate the feasibility of an Alaska in-state gas pipeline by assigning triangular distribution of the values of economic parameters. Simulated results show that the construction of an Alaska in-state natural gas pipeline is feasible at three scenarios: 500 million cubic feet per day (mmcfd), 750 mmcfd, and 1000 mmcfd.
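The feasibility analysis assigns triangular distributions to the economic parameters and simulates outcomes. A minimal sketch of that idea follows; every distribution bound, the tariff model, and the discounting assumptions below are invented for illustration and do not reproduce the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical triangular distributions (min, mode, max) for key inputs;
# the actual parameter values used in the study are not reproduced here.
capex = rng.triangular(6e9, 8e9, 11e9, n)         # construction cost, $
tariff = rng.triangular(1.5, 2.0, 2.6, n)         # $/Mcf transported
throughput = rng.triangular(450, 500, 550, n)     # mmcfd scenario
opex_frac = rng.triangular(0.02, 0.03, 0.05, n)   # annual O&M, fraction of capex

years, rate = 25, 0.08
annuity = (1 - (1 + rate) ** -years) / rate       # present-value factor
revenue = tariff * throughput * 1e3 * 365         # $/yr (1 mmcfd = 1000 Mcf/d)
npv = (revenue - opex_frac * capex) * annuity - capex

print("P(NPV > 0) =", (npv > 0).mean())
```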
NASA Astrophysics Data System (ADS)
Pleniou, Magdalini; Koutsias, Nikos
2013-05-01
The aim of our study was to explore the spectral properties of fire-scorched (burned) and non fire-scorched (vegetation) areas, as well as areas with different burn/vegetation ratios, using a multisource multiresolution satellite data set. A case study was undertaken following a very destructive wildfire that occurred in Parnitha, Greece, July 2007, for which we acquired satellite images from LANDSAT, ASTER, and IKONOS. Additionally, we created spatially degraded satellite data over a range of coarser resolutions using resampling techniques. The panchromatic (1 m) and multispectral component (4 m) of IKONOS were merged using the Gram-Schmidt spectral sharpening method. This very high-resolution imagery served as the basis to estimate the cover percentage of burned areas, bare land and vegetation at pixel level, by applying the maximum likelihood classification algorithm. Finally, multiple linear regression models were fit to estimate each land-cover fraction as a function of surface reflectance values of the original and the spatially degraded satellite images. The main findings of our research were: (a) the Near Infrared (NIR) and Short-wave Infrared (SWIR) are the most important channels to estimate the percentage of burned area, whereas the NIR and red channels are the most important to estimate the percentage of vegetation in fire-affected areas; (b) when the bi-spectral space consists only of NIR and SWIR, then the NIR ground reflectance value plays a more significant role in estimating the percent of burned areas, and the SWIR appears to be more important in estimating the percent of vegetation; and (c) semi-burned areas comprising 45-55% burned area and 45-55% vegetation are spectrally closer to burned areas in the NIR channel, whereas those areas are spectrally closer to vegetation in the SWIR channel. These findings, at least partially, are attributed to the fact that: (i) completely burned pixels present low variance in the NIR and high variance in the SWIR, whereas the opposite is observed in completely vegetated areas where higher variance is observed in the NIR and lower variance in the SWIR, and (ii) bare land modifies the spectral signal of burned areas more than the spectral signal of vegetated areas in the NIR, while the opposite is observed in SWIR region of the spectrum where the bare land modifies the spectral signal of vegetation more than the burned areas because the bare land and the vegetation are spectrally more similar in the NIR, and the bare land and burned areas are spectrally more similar in the SWIR.
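The fraction-estimation step is a multiple linear regression of land-cover fraction on band reflectances. A minimal sketch on synthetic pixels follows, assuming made-up NIR/SWIR reflectances and a made-up true relationship; it illustrates the model form, not the study's fitted coefficients.

```python
import numpy as np

# Regress per-pixel burned fraction on NIR and SWIR reflectance,
# mirroring the fraction-vs-reflectance models in the study.
rng = np.random.default_rng(3)
n = 500
nir = rng.uniform(0.05, 0.45, n)
swir = rng.uniform(0.05, 0.35, n)
burned = np.clip(0.9 - 1.8 * nir + 0.8 * swir + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([np.ones(n), nir, swir])      # intercept + two bands
coef, *_ = np.linalg.lstsq(X, burned, rcond=None)
print("intercept, NIR, SWIR coefficients:", coef.round(3))
```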
Using radial NMR profiles to characterize pore size distributions
NASA Astrophysics Data System (ADS)
Deriche, Rachid; Treilhard, John
2012-02-01
Extracting information about axon diameter distributions in the brain is a challenging task which provides useful information for medical purposes; for example, the ability to characterize and monitor axon diameters would be useful in diagnosing and investigating diseases like amyotrophic lateral sclerosis (ALS)1 or autism.2 Three families of operators are defined by Ozarslan,3 whose action upon an NMR attenuation signal extracts the moments of the pore size distribution of the ensemble under consideration; also a numerical method is proposed to continuously reconstruct a discretely sampled attenuation profile using the eigenfunctions of the simple harmonic oscillator Hamiltonian: the SHORE basis. The work presented here extends Ozarslan's method to other bases that can offer a better description of attenuation signal behaviour; in particular, we propose the use of the radial Spherical Polar Fourier (SPF) basis. Testing is performed to contrast the efficacy of the radial SPF basis and SHORE basis in practical attenuation signal reconstruction. The robustness of the method to additive noise is tested and analysed. We demonstrate that a low-order attenuation signal reconstruction outperforms a higher-order reconstruction in subsequent moment estimation under noisy conditions. We propose the simulated annealing algorithm for basis function scale parameter estimation. Finally, analytic expressions are derived and presented for the action of the operators on the radial SPF basis (obviating the need for numerical integration, thus avoiding a spectrum of possible sources of error).
UAV Control on the Basis of 3D Landmark Bearing-Only Observations
Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry
2015-01-01
The article presents an approach to the control of a UAV on the basis of 3D landmark observations. The novelty of the work is the usage of the 3D RANSAC algorithm developed on the basis of the landmarks’ position prediction with the aid of a modified Kalman-type filter. Modification of the filter based on the pseudo-measurements approach permits obtaining unbiased UAV position estimation with quadratic error characteristics. Modeling of UAV flight on the basis of the suggested algorithm shows good performance, even under significant external perturbations. PMID:26633394
NASA Astrophysics Data System (ADS)
Witte, Jonathon; Neaton, Jeffrey B.; Head-Gordon, Martin
2017-06-01
With the aim of mitigating the basis set error in density functional theory (DFT) calculations employing local basis sets, we herein develop two empirical corrections for basis set superposition error (BSSE) in the def2-SVPD basis, a basis which—when stripped of BSSE—is capable of providing near-complete-basis DFT results for non-covalent interactions. Specifically, we adapt the existing pairwise geometrical counterpoise (gCP) approach to the def2-SVPD basis, and we develop a beyond-pairwise approach, DFT-C, which we parameterize across a small set of intermolecular interactions. Both gCP and DFT-C are evaluated against the traditional Boys-Bernardi counterpoise correction across a set of 3402 non-covalent binding energies and isomerization energies. We find that the DFT-C method represents a significant improvement over gCP, particularly for non-covalently-interacting molecular clusters. Moreover, DFT-C is transferable among density functionals and can be combined with existing functionals—such as B97M-V—to recover large-basis results at a fraction of the cost.
Carbon sequestration potential of second-growth forest regeneration in the Latin American tropics
Chazdon, Robin L.; Broadbent, Eben N.; Rozendaal, Danaë M. A.; Bongers, Frans; Zambrano, Angélica María Almeyda; Aide, T. Mitchell; Balvanera, Patricia; Becknell, Justin M.; Boukili, Vanessa; Brancalion, Pedro H. S.; Craven, Dylan; Almeida-Cortez, Jarcilene S.; Cabral, George A. L.; de Jong, Ben; Denslow, Julie S.; Dent, Daisy H.; DeWalt, Saara J.; Dupuy, Juan M.; Durán, Sandra M.; Espírito-Santo, Mario M.; Fandino, María C.; César, Ricardo G.; Hall, Jefferson S.; Hernández-Stefanoni, José Luis; Jakovac, Catarina C.; Junqueira, André B.; Kennard, Deborah; Letcher, Susan G.; Lohbeck, Madelon; Martínez-Ramos, Miguel; Massoca, Paulo; Meave, Jorge A.; Mesquita, Rita; Mora, Francisco; Muñoz, Rodrigo; Muscarella, Robert; Nunes, Yule R. F.; Ochoa-Gaona, Susana; Orihuela-Belmonte, Edith; Peña-Claros, Marielos; Pérez-García, Eduardo A.; Piotto, Daniel; Powers, Jennifer S.; Rodríguez-Velazquez, Jorge; Romero-Pérez, Isabel Eunice; Ruíz, Jorge; Saldarriaga, Juan G.; Sanchez-Azofeifa, Arturo; Schwartz, Naomi B.; Steininger, Marc K.; Swenson, Nathan G.; Uriarte, Maria; van Breugel, Michiel; van der Wal, Hans; Veloso, Maria D. M.; Vester, Hans; Vieira, Ima Celia G.; Bentos, Tony Vizcarra; Williamson, G. Bruce; Poorter, Lourens
2016-01-01
Regrowth of tropical secondary forests following complete or nearly complete removal of forest vegetation actively stores carbon in aboveground biomass, partially counterbalancing carbon emissions from deforestation, forest degradation, burning of fossil fuels, and other anthropogenic sources. We estimate the age and spatial extent of lowland second-growth forests in the Latin American tropics and model their potential aboveground carbon accumulation over four decades. Our model shows that, in 2008, second-growth forests (1 to 60 years old) covered 2.4 million km2 of land (28.1% of the total study area). Over 40 years, these lands can potentially accumulate a total aboveground carbon stock of 8.48 Pg C (petagrams of carbon) in aboveground biomass via low-cost natural regeneration or assisted regeneration, corresponding to a total CO2 sequestration of 31.09 Pg CO2. This total is equivalent to carbon emissions from fossil fuel use and industrial processes in all of Latin America and the Caribbean from 1993 to 2014. Ten countries account for 95% of this carbon storage potential, led by Brazil, Colombia, Mexico, and Venezuela. We model future land-use scenarios to guide national carbon mitigation policies. Permitting natural regeneration on 40% of lowland pastures potentially stores an additional 2.0 Pg C over 40 years. Our study provides information and maps to guide national-level forest-based carbon mitigation plans on the basis of estimated rates of natural regeneration and pasture abandonment. Coupled with avoided deforestation and sustainable forest management, natural regeneration of second-growth forests provides a low-cost mechanism that yields a high carbon sequestration potential with multiple benefits for biodiversity and ecosystem services. PMID:27386528
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR® energy-efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements of light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed, step-by-step method for determining uncertainty in lumen measurements, developed in close coordination with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements. The steps of the procedure are described, and a spreadsheet format adapted for integrating-sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for evaluating the standard uncertainties of each input estimate and the covariances associated with input estimates, and for calculating the measurement results. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
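The GUM procedure combines standard uncertainties in quadrature, weighted by sensitivity coefficients, and then applies a coverage factor for the expanded uncertainty. A minimal sketch with invented component values follows; the component list is illustrative and is not the CALiPER uncertainty budget.

```python
import numpy as np

# GUM-style combination for uncorrelated inputs: each standard
# uncertainty u_i is scaled by its sensitivity coefficient c_i,
# combined in quadrature, then expanded with a coverage factor k
# (k = 2 gives roughly 95% coverage).
components = {                        # illustrative values only
    "sphere calibration": (0.8, 1.0),   # (u_i in %, c_i)
    "detector linearity": (0.3, 1.0),
    "self-absorption":    (0.5, 1.0),
    "stray light":        (0.2, 1.0),
}
u_c = np.sqrt(sum((u * c) ** 2 for u, c in components.values()))
U = 2.0 * u_c
print(f"combined u_c = {u_c:.2f}%, expanded U (k=2) = {U:.2f}%")
```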
1983-03-09
… that maximize electromagnetic compatibility potential. -- Providing direct assistance on a reimbursable basis to DOD and other Government agencies on… value, we estimated that reimbursable real estate expenses would average about $6,458 rather than the $4,260 included in the Air Force estimate. When the… of estimated reimbursement was assumed to be necessary to encourage the relocation of more professional employees and increase their estimated…
NASA Astrophysics Data System (ADS)
Elansky, N.; Postylyakov, O.; Verevkin, Y.; Volobuev, L.; Ponomarev, N.
2017-11-01
To date, a large amount of data on direct measurements of the pollution and thermodynamic state of the atmosphere in the Moscow region has been accumulated at stations of Roshydromet, Mosecomonitoring, the A.M. Obukhov Institute of Atmospheric Physics (OIAP), M.V. Lomonosov Moscow State University, and NPO Typhoon. These data allow pollution emissions to be estimated from measurements and existing emission inventories to be corrected; such inventories are evaluated mainly from indirect data such as population density, fuel consumption, etc. Within the framework of the project, the whole volume of data on the concentrations of the ground-level pollutants CO, NOx, SO2, and CH4, obtained at regularly operating Moscow Ecological Monitoring stations and at OIAP stations from 2005 to 2014, was systematized. Observations of pollutant concentrations are supplemented by measurements of their integral content in the atmospheric boundary layer, obtained by differential optical absorption spectroscopy methods (MAX-DOAS, ZDOAS) at stationary stations and by driving around Moscow with a DOAS-equipped car. The paper presents preliminary estimates of pollution emissions in the Moscow region, obtained on the basis of the collected array of experimental data. Estimates of pollutant emissions from Moscow were obtained experimentally in several ways: (1) from network observations of surface concentrations, (2) from measurements in the 0-348 m atmospheric layer at the Ostankino TV tower, (3) from the integral pollutant (NO2) content in the ABL obtained by the DOAS technique from stationary stations, and (4) using a car with DOAS equipment traveling over a closed route around Moscow (for NO2). All experimental approaches yielded close values of pollution emissions for Moscow. Trends in emissions of CO, NOx, and CH4 are negative, and the trend in SO2 emissions is positive from 2005 to 2014.
Solutions to Some Nonlinear Equations from Nonmetric Data.
ERIC Educational Resources Information Center
Rule, Stanley J.
1979-01-01
A method to provide estimates of parameters of specified nonlinear equations from ordinal data generated from a crossed design is presented. The statistical basis for the method, called NOPE (nonmetric parameter estimation), as well as examples using artificial data, are presented. (Author/JKS)
Field validation of speed estimation techniques for air quality conformity analysis.
DOT National Transportation Integrated Search
2004-01-01
The air quality conformity analysis process requires the estimation of speeds for a horizon year on a link-by-link basis where only a few future roadway characteristics, such as forecast volume and capacity, are known. Accordingly, the Virginia Depar...
Classes of Split-Plot Response Surface Designs for Equivalent Estimation
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2006-01-01
When planning an experimental investigation, we are frequently faced with factors that are difficult or time consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy-to-change and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted randomization context. In this paper, we propose classes of split-plot response designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
Dictionary Approaches to Image Compression and Reconstruction
NASA Technical Reports Server (NTRS)
Ziyad, Nigel A.; Gilmore, Erwin T.; Chouikha, Mohamed F.
1998-01-01
This paper proposes using a collection of parameterized waveforms, known as a dictionary, for the purpose of medical image compression. These waveforms, denoted λγ, are discrete-time signals, where γ represents the dictionary index. A dictionary with a collection of these waveforms is typically complete or overcomplete. Given such a dictionary, the goal is to obtain a representation of the image based on the dictionary. We examine the effectiveness of applying Basis Pursuit (BP), Best Orthogonal Basis (BOB), Matching Pursuits (MP), and the Method of Frames (MOF) for the compression of digitized radiological images with a wavelet-packet dictionary. The performance of these algorithms is studied for medical images with and without additive noise.
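Of the four algorithms, Matching Pursuit is the simplest to sketch: greedily project the residual onto the best-correlated atom and subtract. Below is a minimal implementation with a toy overcomplete dictionary (spikes plus cosines); the dictionary construction and iteration count are assumptions, not the paper's wavelet-packet setup.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=50):
    """Greedy MP: at each step pick the unit-norm atom most correlated
    with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Overcomplete dictionary: identity spikes plus cosines, columns normalized
n = 64
atoms = np.hstack([np.eye(n),
                   np.cos(np.outer(np.arange(n), np.arange(1, n)) * np.pi / n)])
atoms /= np.linalg.norm(atoms, axis=0)
sig = 2.0 * atoms[:, 5] - 1.5 * atoms[:, 80]   # 2-sparse test signal
c, r = matching_pursuit(sig, atoms, n_iter=10)
print("residual norm:", np.linalg.norm(r))
```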
Duchêne, Sebastián; Archer, Frederick I.; Vilstrup, Julia; Caballero, Susana; Morin, Phillip A.
2011-01-01
The availability of mitochondrial genome sequences is growing as a result of recent technological advances in molecular biology. In phylogenetic analyses, the complete mitogenome is increasingly becoming the marker of choice, usually providing better phylogenetic resolution and precision relative to traditional markers such as cytochrome b (CYTB) and the control region (CR). In some cases, the differences in phylogenetic estimates between mitogenomic and single-gene markers have yielded incongruent conclusions. By comparing phylogenetic estimates made from different genes, we identified the most informative mitochondrial regions and evaluated the minimum amount of data necessary to reproduce the same results as the mitogenome. We compared results among individual genes and the mitogenome for recently published complete mitogenome datasets of selected delphinids (Delphinidae) and killer whales (genus Orcinus). Using Bayesian phylogenetic methods, we investigated differences in estimation of topologies, divergence dates, and clock-like behavior among genes for both datasets. Although the most informative regions were not the same for each taxonomic group (COX1, CYTB, ND3 and ATP6 for Orcinus, and ND1, COX1 and ND4 for Delphinidae), in both cases they were equivalent to less than a quarter of the complete mitogenome. This suggests that gene information content can vary among groups, but can be adequately represented by a portion of the complete sequence. Although our results indicate that complete mitogenomes provide the highest phylogenetic resolution and most precise date estimates, a minimum amount of data can be selected using our approach when the complete sequence is unavailable. Studies based on single genes can benefit from the addition of a few more mitochondrial markers, producing topologies and date estimates similar to those obtained using the entire mitogenome. PMID:22073275
Michael T. Hobbins; Jorge A. Ramirez; Thomas C. Brown
2001-01-01
Long-term monthly evapotranspiration estimates from Brutsaert and Stricker's Advection-Aridity model were compared with independent estimates of evapotranspiration derived from long-term water balances for 139 undisturbed basins across the conterminous United States. On an average annual basis for the period 1962-1988, the original model, which uses a Penman wind...
Strategies for Revising Judgment: How (and How Well) People Use Others' Opinions
ERIC Educational Resources Information Center
Soll, Jack B.; Larrick, Richard P.
2009-01-01
A basic issue in social influence is how best to change one's judgment in response to learning the opinions of others. This article examines the strategies that people use to revise their quantitative estimates on the basis of the estimates of another person. The authors note that people tend to use 2 basic strategies when revising estimates:…
Cost estimators for construction of forest roads in the central Appalachians
Deborah, A. Layton; Chris O. LeDoux; Curt C. Hassler; Curt C. Hassler
1992-01-01
Regression equations were developed for estimating the total cost of road construction in the central Appalachian region. Estimators include methods for predicting total costs for roads constructed using hourly rental methods and roads built on a total-job bid basis. Results show that total-job bid roads cost up to five times as much as roads built using hourly equipment...
Tewari, Krishna C.; Foster, Edward P.
1985-01-01
Coal solids (SRC) and distillate oils are combined to afford single-phase blends of residual oils which have utility as fuel oil substitutes. The components are combined on the basis of their respective polarities, that is, on the basis of their heteroatom content, to assure complete solubilization of the SRC. The resulting composition is a fuel oil blend which retains its stability and homogeneity over the long term.
Feynman rules for the Standard Model Effective Field Theory in Rξ-gauges
NASA Astrophysics Data System (ADS)
Dedes, A.; Materkowska, W.; Paraskevas, M.; Rosiek, J.; Suxho, K.
2017-06-01
We assume that New Physics effects are parametrized within the Standard Model Effective Field Theory (SMEFT) written in a complete basis of gauge invariant operators up to dimension 6, commonly referred to as "Warsaw basis". We discuss all steps necessary to obtain a consistent transition to the spontaneously broken theory and several other important aspects, including the BRST-invariance of the SMEFT action for linear Rξ-gauges. The final theory is expressed in a basis characterized by SM-like propagators for all physical and unphysical fields. The effect of the non-renormalizable operators appears explicitly in triple or higher multiplicity vertices. In this mass basis we derive the complete set of Feynman rules, without resorting to any simplifying assumptions such as baryon-, lepton-number or CP conservation. As it turns out, for most SMEFT vertices the expressions are reasonably short, with a noticeable exception of those involving 4, 5 and 6 gluons. We have also supplemented our set of Feynman rules, given in an appendix here, with a publicly available Mathematica code working with the FeynRules package and producing output which can be integrated with other symbolic algebra or numerical codes for automatic SMEFT amplitude calculations.
Inter and intra-population variation in shoaling and boldness in the zebrafish (Danio rerio).
Wright, Dominic; Rimmer, Lucy B; Pritchard, Victoria L; Krause, Jens; Butlin, Roger K
2003-08-01
Population differences in anti-predator behaviour have been demonstrated in several species, although less is known about the genetic basis of these traits. To determine the extent of genetic differences in boldness (defined as exploration of a novel object) and shoaling within and between zebrafish (Danio rerio) populations, and to examine the genetic basis of shoaling behaviour in general, we carried out a study that involved laboratory-raised fish derived from four wild-caught populations. Controlling for differences in rearing environment, significant inter-population differences were found in boldness but not shoaling. A larger shoaling experiment was also performed using one of the populations as the basis of a North Carolina type II breeding design (174 fish in total) to estimate heritability of shoaling tendency. A narrow-sense heritability estimate of 0.40 was obtained, with no apparent dominance effects.
Size and performance of anoxic limestone drains to neutralize acidic mine drainage
Cravotta, C.A.
2003-01-01
Acidic mine drainage (AMD) can be neutralized effectively in underground, anoxic limestone drains (ALDs). Owing to reaction between the AMD and limestone (CaCO3), the pH and concentrations of alkalinity and calcium increase asymptotically with detention time in the ALD, while concentrations of sulfate, ferrous iron, and manganese typically are unaffected. This paper introduces a method to predict the alkalinity produced within an ALD and to estimate the mass of limestone required for its construction on the basis of data from short-term, closed-container (cubitainer) tests. The cubitainer tests, which used an initial mass of 4 kg crushed limestone completely inundated with 2.8 L AMD, were conducted for 11 to 16 d and provided estimates for the initial and maximum alkalinities and corresponding rates of alkalinity production and limestone dissolution. Long-term (5-11 yr) data for alkalinity and CaCO3 flux at the Howe Bridge, Morrison, and Buck Mountain ALDs in Pennsylvania, USA, indicate that rates of alkalinity production and limestone dissolution under field conditions were comparable with those in cubitainers filled with limestone and AMD from each site. The alkalinity of effluent and intermediate samples along the flow path through the ALDs and long-term trends in the residual mass of limestone and the effluent alkalinity were estimated as a function of the computed detention time within the ALD and second-order dissolution rate models for cubitainer tests. Thus, cubitainer tests can be a useful tool for designing ALDs and predicting their performance.
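A second-order dissolution rate law of the form dC/dt = k(Cmax − C)² has a simple closed-form solution and reproduces the asymptotic approach to maximum alkalinity with detention time described above. The sketch below uses that generic form; the rate constant and concentration values are purely illustrative stand-ins, not the calibrated cubitainer parameters.

```python
def alkalinity(t_d, c0=30.0, c_max=300.0, k=0.002):
    """Closed-form solution of the second-order rate law
    dC/dt = k * (c_max - C)**2, with C(0) = c0.
    t_d: detention time in hours; concentrations in mg/L as CaCO3.
    Parameter values are illustrative, not site calibrations."""
    gap0 = c_max - c0
    return c_max - gap0 / (1.0 + k * gap0 * t_d)

for t in (0, 6, 12, 24, 48):
    print(t, round(alkalinity(t), 1))   # rises asymptotically toward c_max
```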
Environmental benefits vs. costs of geologic mapping
Bhagwat, S.B.; Berg, R.C.
1992-01-01
Boone and Winnebago Counties, Illinois, U.S.A., were selected for this study, required by the Illinois State Senate, because mapping and environmental interpretations were completed there in 1981. Costs of geologic mapping in these counties, in 1990 dollars, were $290,000. Two estimates of the cost of statewide mapping were made, one extrapolated from Boone and Winnebago Counties ($21 million), the other estimated on the basis of differences between the Boone/Winnebago program and the proposed mapping program for the State of Illinois ($55 million). Benefits of geologic information come in the form of future avoided costs for environmental cleanup. Only the quantifiable data, available from a few sites, were included. Data collected through 55 personal interviews in Boone and Winnebago Counties were grouped into four cumulative categories with increasing variability. Geologic maps alone cannot account for all avoided costs of future cleanup. Therefore, estimated benefits were reduced by 50, 75, and 90 percent in three scenarios. To account for delays in proper utilization of knowledge gained from a mapping program, a 10-yr delay in benefit realization was assumed. All benefits were converted to 1990 dollars. In benefit category 4, benefit-cost ratios for Boone/Winnebago Counties ranged between 5 and 55. Statewide projection of benefits was based on county areas and an aquifer contamination potential score for each county. The statewide benefit-cost ratio in benefit category 4 ranged from 1.2 to 14 (at $21 million mapping cost) and from 0.5 to 5.4 (at $55 million mapping cost). © 1992 Springer-Verlag New York Inc.
Patients differ in their ability to self-monitor adherence to a low-sodium diet versus medication.
Chung, Misook L; Lennie, Terry A; de Jong, Marla; Wu, Jia-Rong; Riegel, Barbara; Moser, Debra K
2008-03-01
Poor adherence to a low-sodium diet (LSD) and prescribed medications increases rehospitalization risk in patients with heart failure (HF). Clinicians have difficulty assessing adherence objectively, so they depend on patients' self-report. The degree to which self-reported adherence reflects actual adherence is unclear. We examined patients' ability to self-monitor adherence to an LSD and medications by comparing self-reported adherence with objective evidence of adherence. A total of 133 patients with HF (male 71%; ejection fraction 35% +/- 14%) completed the Medical Outcomes Study Specific Adherence Scale. Adherence to the LSD and medication were assessed objectively using 24-hour urinary sodium excretion and dose counting with an electronic monitoring device, respectively. On the basis of self-report, patients were divided into adherent and non-adherent groups and evaluated for differences according to objective adherence. There were no differences in urinary sodium levels between the self-reported LSD adherent and non-adherent groups (4560 mg vs. 4333 mg; P = .59). Self-reported adherent and non-adherent medication groups took 92.4% and 80.4% of prescribed doses, respectively (P < .001). Patients were able to accurately estimate adherence to medication, but they failed to estimate LSD adherence. This finding suggests that we need to improve our means of evaluating adherence to the LSD and of educating patients more thoroughly about following the LSD. We speculated that the inability to estimate LSD adherence may be the result of gaps in patients' knowledge that preclude accurate self-assessment.
Sellers, Benjamin D; James, Natalie C; Gobbi, Alberto
2017-06-26
Reducing internal strain energy in small molecules is critical for designing potent drugs. Quantum mechanical (QM) and molecular mechanical (MM) methods are often used to estimate these energies. In an effort to determine which methods offer an optimal balance in accuracy and performance, we have carried out torsion scan analyses on 62 fragments. We compared nine QM and four MM methods to reference energies calculated at a higher level of theory: CCSD(T)/CBS single point energies (coupled cluster with single, double, and perturbative triple excitations at the complete basis set limit) calculated on optimized geometries using MP2/6-311+G**. The results show that both the more recent MP2.X perturbation method as well as MP2/CBS perform quite well. In addition, combining a Hartree-Fock geometry optimization with a MP2/CBS single point energy calculation offers a fast and accurate compromise when dispersion is not a key energy component. Among MM methods, the OPLS3 force field accurately reproduces CCSD(T)/CBS torsion energies on more test cases than the MMFF94s or Amber12:EHT force fields, which struggle with aryl-amide and aryl-aryl torsions. Using experimental conformations from the Cambridge Structural Database, we highlight three example structures for which OPLS3 significantly overestimates the strain. The energies and conformations presented should enable scientists to estimate the expected error for the methods described and we hope will spur further research into QM and MM methods.
Herbert A. Knight; Joe P. McClure
1968-01-01
This report presents the principal findings of the fourth Forest Survey of South Carolina's timber resource. The survey was started in August 1966 and completed in July 1968. Findings of the three previous surveys, completed in 1936, 1947, and 1958, provide the basis for measuring changes that have occurred and trends that have developed over the past 32 years....
USDA-ARS?s Scientific Manuscript database
Tung tree (Vernicia fordii) is an economically important plant widely cultivated for industrial oil production in China. To better understand the molecular basis of tung tree chloroplasts, we sequenced and characterized the complete chloroplast genome. The chloroplast genome was 161,524 bp in length...
Forest statistics for the Piedmont of South Carolina
William H.B. Haines
1967-01-01
This report presents the principal findings of the fourth Forest Survey in the Piedmont of South Carolina, completed in February 1967. Findings of the three earlier surveys, completed in 1936, 1947, and 1958, provide the basis for measuring the changes that have occurred and the trends that have developed over the past 30 years.
Herbert A. Knight; Joe P. McClure
1966-01-01
This report presents the principal findings of the third Forest Survey of Virginia's timber resource. The resurvey was started in November 1964 and completed in August 1966. Findings of the two previous surveys, completed in 1940 and 1957, provide the basis for measuring the changes that have occurred and the trends that have developed during the past 26...
Herbert A. Knight; Joe P. McClure
1970-01-01
This report presents the principal findings of the fourth Forest Survey of Florida's timber resource. The survey was started in July 1968 and completed in June 1970. Findings of the three previous surveys, completed in 1936, 1949, and 1959, provide the basis for measuring changes that have occurred and trends that have developed over the past 34 years. In this...
Psychiatric Comorbidity in Learning Disorder: Analysis of Family Variables
ERIC Educational Resources Information Center
Capozzi, Flavia; Casini, Maria Pia; Romani, Maria; De Gennaro, Luigi; Nicolais, Giampaolo; Solano, Luigi
2008-01-01
Objective: This study aimed to evaluate the role of parental relational styles on the development of psychopathological disturbances in children with Learning Disability (LD). Method: Fifty-six children aged 7-12 diagnosed with LD were evaluated on the basis of the Children Behaviour Check List (CBCL) completed by parents. Parents completed an…
Wang, Qing; Wang, Jiaonan; He, Mike Z; Kinney, Patrick L; Li, Tiantian
2018-01-01
Ambient fine particulate matter (PM2.5) pollution is currently a serious environmental problem in China, but evidence of health effects with higher resolution and spatial coverage is insufficient. This study aims to provide a better overall understanding of the long-term mortality effects of PM2.5 pollution in China and a county-level spatial map for estimating PM2.5-related premature deaths for the entire country. Using four sets of satellite-derived PM2.5 concentration data and the integrated exposure-response model which has been employed by the Global Burden of Disease (GBD) to estimate global mortality of ambient and household air pollution in 2010, we estimated PM2.5-related premature mortality for five endpoints across China in 2010. Premature deaths attributed to PM2.5 nationwide amounted to 1.27 million in total: 119,167, 83,976, 390,266, and 670,906 for adult chronic obstructive pulmonary disease, lung cancer, ischemic heart disease, and stroke, respectively, and 3995 deaths for acute lower respiratory infections in children under the age of 5. About half of the premature deaths were from counties with annual average PM2.5 concentrations above 63.61 μg/m3, which cover 16.97% of the Chinese territory. These counties were largely located in the Beijing-Tianjin-Hebei region and the North China Plain. High population density and high pollution areas exhibited the highest health risks attributed to air pollution. On a per capita basis, the highest values were mostly located in heavily polluted industrial regions. PM2.5-attributable health risk is closely associated with high population density and high levels of pollution in China. Further estimates using long-term historical exposure data and concentration-response (C-R) relationships should be completed in the future to investigate longer-term trends in the effects of PM2.5. Copyright © 2017 Elsevier Ltd. All rights reserved.
The hydrolysis of proteins by microwave energy
Margolis, Sam A.; Jassie, Lois; Kingston, H. M.
1991-01-01
Microwave energy, at manually adjusted partial power settings, has been used to hydrolyse bovine serum albumin at 125 °C. Hydrolysis was complete within 2 h, except for valine and isoleucine, which were completely liberated within 4 h. The amino-acid destruction was less than that observed under similar hydrolysis conditions with other methods, and complete hydrolysis was achieved more rapidly. These results provide a basis for automating the process of amino-acid hydrolysis. PMID:18924889
Size Estimation in Schizophrenic and Nonschizophrenic Subjects
ERIC Educational Resources Information Center
Kopfstein, Joan Held; Neale, John M.
1971-01-01
The results of this study showed no significant differences in the size estimation levels of acute and chronic schizophrenic and nonschizophrenic psychiatric patients. Also, there were no significant differences when these groups were subdivided on the basis of both premorbid adjustment and paranoid status. (Author/CG)
Fast function-on-scalar regression with penalized basis expansions.
Reiss, Philip T; Huang, Lei; Mennes, Maarten
2010-01-01
Regression models for functional responses and scalar predictors are often fitted by means of basis functions, with quadratic roughness penalties applied to avoid overfitting. The fitting approach described by Ramsay and Silverman in the 1990s amounts to a penalized ordinary least squares (P-OLS) estimator of the coefficient functions. We recast this estimator as a generalized ridge regression estimator, and present a penalized generalized least squares (P-GLS) alternative. We describe algorithms by which both estimators can be implemented, with automatic selection of optimal smoothing parameters, in a more computationally efficient manner than has heretofore been available. We discuss pointwise confidence intervals for the coefficient functions, simultaneous inference by permutation tests, and model selection, including a novel notion of pointwise model selection. P-OLS and P-GLS are compared in a simulation study. Our methods are illustrated with an analysis of age effects in a functional magnetic resonance imaging data set, as well as a reanalysis of a now-classic Canadian weather data set. An R package implementing the methods is publicly available.
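The generalized-ridge view of the P-OLS estimator is compact: for the objective ||Y − BC||² + λ tr(C'PC), the closed form is C = (B'B + λP)^{-1} B'Y. Below is a schematic sketch with a random design and a second-difference roughness penalty; the dimensions, penalty choice, and λ are arbitrary illustrations rather than the paper's implementation.

```python
import numpy as np

def pols_fit(B, Y, P, lam):
    """Penalized OLS as a generalized ridge estimator: minimize
    ||Y - B C||^2 + lam * tr(C' P C), giving (B'B + lam*P)^{-1} B'Y.
    B: n x p design, Y: n x m responses on a grid, P: p x p penalty."""
    return np.linalg.solve(B.T @ B + lam * P, B.T @ Y)

# Toy example: responses observed on a 30-point grid
rng = np.random.default_rng(7)
n, p, m = 50, 8, 30
B = rng.normal(size=(n, p))              # basis-expanded design
D2 = np.diff(np.eye(p), n=2, axis=0)     # second-difference operator
P = D2.T @ D2                            # quadratic roughness penalty
Y = B @ rng.normal(size=(p, m)) + rng.normal(0, 0.1, (n, m))
C_hat = pols_fit(B, Y, P, lam=1.0)
print(C_hat.shape)  # (8, 30): one coefficient curve over the grid per column of B
```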
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
Evaluation of the metabolic rate based on the recording of the heart rate.
Malchaire, Jacques; d'AMBROSIO Alfano, Francesca Romana; Palella, Boris Igor
2017-06-08
The assessment of harsh working conditions requires a correct evaluation of the metabolic rate. This paper revises the basis described in the ISO 8996 standard for evaluating the metabolic rate at a work station from the recording of the heart rate of a worker during a representative period of time. From a review of the literature, formulas different from those given in the standard are proposed to estimate the maximum working capacity, the maximum heart rate, the heart rate and metabolic rate at rest, and the heart rate versus metabolic rate (HR vs. M) relation underlying the estimation of the equivalent metabolic rate, as functions of the age, height and weight of the person. A Monte Carlo simulation is used to determine, from the approximations of these parameters and formulas, the imprecision of the estimated equivalent metabolic rate. The results show that the standard deviation of this estimate varies from 10 to 15%.
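A minimal Monte Carlo sketch of this kind of uncertainty propagation (the formulas and spreads below are illustrative assumptions, not those of ISO 8996 or the paper): perturb the personal parameters, recompute the equivalent metabolic rate through an assumed linear HR-M relation, and read off the spread of the estimate.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    hr_work = 110.0  # observed working heart rate, bpm

    # assumed subject (~40 y) with plausible parameter noise
    age = rng.normal(40, 2, n)
    hr_max = 208 - 0.7 * age + rng.normal(0, 8, n)  # Tanaka-type HRmax formula
    hr_rest = rng.normal(65, 5, n)
    m_rest = rng.normal(55, 5, n)    # W/m2, resting metabolic rate
    m_max = rng.normal(400, 40, n)   # W/m2, maximum working capacity

    # assumed linear HR-M relation between rest and maximum
    frac = (hr_work - hr_rest) / (hr_max - hr_rest)
    m_eq = m_rest + frac * (m_max - m_rest)

    print(f"mean = {m_eq.mean():.0f} W/m2, CV = {100 * m_eq.std() / m_eq.mean():.1f}%")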
Jankowska, Marzena; Kupka, Teobald; Stobiński, Leszek; Faber, Rasmus; Lacerda, Evanildo G; Sauer, Stephan P A
2016-02-05
Hartree-Fock and density functional theory with the hybrid B3LYP and general gradient KT2 exchange-correlation functionals were used for nonrelativistic and relativistic nuclear magnetic shielding calculations of helium, neon, argon, krypton, and xenon dimers and free atoms. Relativistic corrections were calculated with the scalar and spin-orbit zeroth-order regular approximation Hamiltonian in combination with the large Slater-type basis set QZ4P, as well as with the four-component Dirac-Coulomb Hamiltonian using Dyall's acv4z basis sets. The relativistic corrections to the nuclear magnetic shieldings and chemical shifts are combined with nonrelativistic coupled cluster singles and doubles with noniterative triple excitations [CCSD(T)] calculations using the very large polarization-consistent basis sets aug-pcSseg-4 for He, Ne and Ar, aug-pcSseg-3 for Kr, and the AQZP basis set for Xe. For the dimers, zero-point vibrational (ZPV) corrections obtained at the CCSD(T) level with the same basis sets were also added. Best estimates of the dimer chemical shifts are generated from these nuclear magnetic shieldings, and the relative importance of electron correlation, ZPV, and relativistic corrections for the shieldings and chemical shifts is analyzed. © 2015 Wiley Periodicals, Inc.
Stinchcomb, A L
2013-01-01
The central theme of Annette Bunge and her research group's work has been mathematical modeling of the dermal absorption process. Most of the research focus has been on estimating dermal absorption for the purpose of risk assessment, for exposure scenarios in the environment and in the occupational setting. Her work is the basis for the United States Environmental Protection Agency's estimates of dermal absorption from contaminated water. It is also the basis of the dermal absorption estimates used in determining whether chemicals should be assigned a 'skin notation' for potential systemic toxicity following occupational skin exposure. The work is truly translational in that it started with mathematical theory, was validated with preclinical and human experiments, and is now used in guidelines to protect human health. Her valued research has also extended into the field of topical drug bioavailability and bioequivalence assessment.
NASA Astrophysics Data System (ADS)
Matias, J.; Mescia, F.; Ramon, M.; Virto, J.
2012-04-01
We present a complete and optimal set of observables for the exclusive four-body $\overline{B}$ meson decay $\overline{B}_d \to \overline{K}^{*0}(\to K\pi)\,\ell^+\ell^-$ in the low dilepton mass region, that contains a maximal number of clean observables. This basis of observables is built in a systematic way. We show that all the previously defined observables, and any observable that one can construct, can be expressed as a function of this basis. This set of observables contains all the information that can be extracted from the angular distribution in the cleanest possible way. We provide explicit expressions for the full and the uniangular distributions in terms of this basis. The conclusions presented here can be easily extended to the large-$q^2$ region. We study the sensitivity of the observables to right-handed currents and scalars. Finally, we present for the first time all the symmetries of the full distribution including massive terms and scalar contributions.
Functional Parallel Factor Analysis for Functions of One- and Two-dimensional Arguments.
Choi, Ji Yeh; Hwang, Heungsun; Timmerman, Marieke E
2018-03-01
Parallel factor analysis (PARAFAC) is a useful multivariate method for decomposing three-way data that consist of three different types of entities simultaneously. This method estimates trilinear components, each of which is a low-dimensional representation of a set of entities, often called a mode, to explain the maximum variance of the data. Functional PARAFAC permits the entities in different modes to be smooth functions or curves, varying over a continuum, rather than a collection of unconnected responses. The existing functional PARAFAC methods handle functions of a one-dimensional argument (e.g., time) only. In this paper, we propose a new extension of functional PARAFAC for handling three-way data whose responses are sequenced along both a two-dimensional domain (e.g., a plane with x- and y-axis coordinates) and a one-dimensional argument. Technically, the proposed method combines PARAFAC with basis function expansion approximations, using a set of piecewise quadratic finite element basis functions for estimating two-dimensional smooth functions and a set of one-dimensional basis functions for estimating one-dimensional smooth functions. In a simulation study, the proposed method appeared to outperform the conventional PARAFAC. We apply the method to EEG data to demonstrate its empirical usefulness.
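For concreteness, here is a minimal alternating-least-squares sketch of the conventional CP/PARAFAC decomposition that the functional method builds on (this is the plain three-way model, not the paper's functional extension; all names are illustrative):

    import numpy as np

    def kr(U, V):
        """Khatri-Rao product matching C-order flattening of (U-axis, V-axis)."""
        return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

    def cp_als(T, rank, n_iter=200, seed=0):
        """Minimal CP/PARAFAC of a 3-way array by alternating least squares.
        Returns A, B, C with T[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
        I, J, K = T.shape
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
        for _ in range(n_iter):
            # each factor update is an ordinary least-squares solve
            A = np.linalg.lstsq(kr(B, C), T.reshape(I, -1).T, rcond=None)[0].T
            B = np.linalg.lstsq(kr(A, C), np.moveaxis(T, 1, 0).reshape(J, -1).T, rcond=None)[0].T
            C = np.linalg.lstsq(kr(A, B), np.moveaxis(T, 2, 0).reshape(K, -1).T, rcond=None)[0].T
        return A, B, C

    # demo: recover a synthetic rank-2 tensor
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (6, 5, 4))
    T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
    A, B, C = cp_als(T, rank=2)
    print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))  # near zero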
Land use, forest density, soil mapping, erosion, drainage, salinity limitations
NASA Technical Reports Server (NTRS)
Yassoglou, N. J. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The results of the analyses show that it is possible to obtain information of practical significance as follows: (1) A quick and accurate estimate of the proper use of valuable land can be made on the basis of temporal and spectral characteristics of the land features. (2) A rather accurate delineation of the major forest formations in the test areas was achieved on the basis of spatial and spectral characteristics of the studied areas. The forest stands were separated into two density classes: dense forest and broken forest. On the basis of ERTS-1 data and existing ground truth information, a rather accurate mapping of the major vegetational forms of the mountain ranges can be made. (3) Major soil formations are mappable from ERTS-1 data: recent alluvial soils; soils on Quaternary deposits; severely eroded soils and lithosols; and wet soils. (4) An estimation of cost benefits cannot be made accurately at this stage of the investigation. However, a rough estimate of the ratio of the cost of obtaining the same amount of information from ERTS-1 data and from conventional operations would be approximately 1:6 to 1:10, in favor of ERTS-1.
Feller, David; Vasiliu, Monica; Grant, Daniel J; Dixon, David A
2011-12-29
Structures, vibrational frequencies, atomization energies at 0 K, and heats of formation at 0 and 298 K are predicted for the compounds As(2), AsH, AsH(2), AsH(3), AsF, AsF(2), and AsF(3) from frozen core coupled cluster theory calculations performed with large correlation consistent basis sets, up through augmented sextuple zeta quality. The coupled cluster calculations involved up through quadruple excitations. For As(2) and the hydrides, it was also possible to examine the impact of full configuration interaction on some of the properties. In addition, adjustments were incorporated to account for extrapolation to the frozen core complete basis set limit, core/valence correlation, scalar relativistic effects, the diagonal Born-Oppenheimer correction, and atomic spin orbit corrections. Based on our best theoretical D(0)(As(2)) and the experimental heat of formation of As(2), we propose a revised 0 K arsenic atomic heat of formation of 68.86 ± 0.8 kcal/mol. While generally good agreement was found between theory and experiment, the heat of formation of AsF(3) was an exception. Our best estimate is more than 7 kcal/mol more negative than the single available experimental value, which argues for a re-examination of that measurement. © 2011 American Chemical Society
NASA Astrophysics Data System (ADS)
Ghomi, M.; Aamouche, A.; Cadioli, B.; Berthier, G.; Grajcar, L.; Baron, M. H.
1997-06-01
A complete set of vibrational spectra, obtained from several spectroscopic techniques, i.e. neutron inelastic scattering (NIS), Raman scattering and infrared absorption (IR), has been used to assign the vibrational modes of the pyrimidine bases (uracil, thymine, cytosine) and their N-deuterated species. The spectra of solid and aqueous samples allowed us to analyse the effects of hydrogen bonding in crystal and in solution. In a first step, to assign the observed vibrational modes, we resorted to a harmonic quantum mechanical force field, calculated at the SCF + MP2 level using double-zeta 6-31G and D95V basis sets with non-standard exponents for d-orbital polarisation functions. To improve the agreement between the experimental results obtained in condensed phases and the calculated results based on isolated molecules, the molecular force field was scaled. In a second step, to estimate the effect of intermolecular interactions on the vibrational dynamics of the pyrimidine bases, we undertook additional calculations with the density functional theory (DFT) method using B3LYP functionals and polarised 6-31G basis sets. Two theoretical models were considered: (1) uracil embedded in a dielectric continuum (ε = 78), and (2) uracil H-bonded to two water molecules (through the N1 and N3 atoms).
Federal policy documentation and geothermal water consumption: Policy gaps and needs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, J. N.; Harto, C. B.; Clark, C. E.
With U.S. geothermal power production expected to more than triple by 2040, and the majority of this growth expected to occur in arid and water-constrained areas, it is imperative that decision-makers understand the potential long-term limitations to and tradeoffs of geothermal development due to water availability. To this end, water consumption data, including documentation triggered by the National Environmental Policy Act (NEPA) of 1969, production and injection data, and water permit data, were collected from state and federal environmental policy sources in an effort to determine water consumption across the lifecycle of geothermal power plants. Values extracted from these sources were analyzed to estimate water usage during well drilling; to identify sourcing of water for well drilling, well stimulation, and plant operations; and to estimate operational water usage at the plant level. Nevada data were also compared on a facility-by-facility basis with other publicly available water consumption data, to create a complete picture of water usage and consumption at these facilities. This analysis represents a unique method of capturing project-level water data for geothermal projects; however, a lack of statutory and legal requirements for such data and data quality result in significant data gaps, which are also explored.
Systems Engineering Provides Successful High Temperature Steam Electrolysis Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles V. Park; Emmanuel Ohene Opare, Jr.
2011-06-01
This paper describes two Systems Engineering Studies completed at the Idaho National Laboratory (INL) to support development of the High Temperature Steam Electrolysis (HTSE) process. HTSE produces hydrogen from water using nuclear power and was selected by the Department of Energy (DOE) for integration with the Next Generation Nuclear Plant (NGNP). The first study was a reliability, availability and maintainability (RAM) analysis to identify critical areas for technology development based on available information regarding expected component performance. An HTSE process baseline flowsheet at commercial scale was used as a basis. The NGNP project also established a process and capability to perform future RAM analyses. The analysis identified which components had the greatest impact on HTSE process availability and indicated that the HTSE process could achieve over 90% availability. The second study developed a series of life-cycle cost estimates for the various scale-ups required to demonstrate the HTSE process. Both studies were useful in identifying near- and long-term efforts necessary for successful HTSE process deployment. The size of demonstrations to support scale-up was refined, which is essential to estimating near- and long-term cost and schedule. The life-cycle funding profile, with high-level allocations, was identified as the program transitions from experiment-scale R&D to engineering-scale demonstration.
[Microbiological Surveillance of Measles and Rubella in Spain. Laboratory Network].
Echevarría, Juan Emilio; Fernández García, Aurora; de Ory, Fernando
2015-01-01
The laboratory is a fundamental component of the surveillance of measles and rubella. Cases need to be properly confirmed to ensure an accurate estimation of the incidence. Strains should be genetically characterized to determine the transmission pattern of these viruses; frequently, outbreaks and transmission chains can be fully discriminated only after such characterization. Finally, the susceptibility of the population is estimated on the basis of sero-prevalence surveys. Detection of the specific IgM response is the basis of the laboratory diagnosis of these diseases. It should be complemented with genomic detection by RT-PCR to reach optimal efficiency, especially when sampling is performed early in the course of the disease. Genotyping is performed by genomic sequencing according to the reference protocols of the WHO. Laboratory surveillance of measles and rubella in Spain is organized as a network of regional laboratories with different capabilities. The National Center of Microbiology, as National Reference Laboratory (NRL), supports the regional laboratories, ensuring the availability of all required techniques in the whole country and monitoring the quality of the results. The NRL is currently working on the implementation of new molecular techniques, based on the analysis of genomic hypervariable regions, for strain characterization at sub-genotypic levels and on their use in surveillance.
Shi, Jiajia; Liu, Yuhai; Guo, Ran; Li, Xiaopei; He, Anqi; Gao, Yunlong; Wei, Yongju; Liu, Cuige; Zhao, Ying; Xu, Yizhuang; Noda, Isao; Wu, Jinguang
2015-11-01
A new concentration series is proposed for the construction of a two-dimensional (2D) synchronous spectrum for orthogonal sample design analysis to probe intermolecular interaction between solutes dissolved in the same solutions. The obtained 2D synchronous spectrum possesses the following two properties: (1) cross peaks in the 2D synchronous spectra can be used to reflect intermolecular interaction reliably, since interference portions that have nothing to do with intermolecular interaction are completely removed, and (2) the two-dimensional synchronous spectrum produced can effectively avoid accidental collinearity. Hence, the correct number of nonzero eigenvalues can be obtained so that the number of chemical reactions can be estimated. In a real chemical system, noise present in one-dimensional spectra may also produce nonzero eigenvalues. To get the correct number of chemical reactions, we classified nonzero eigenvalues into significant nonzero eigenvalues and insignificant nonzero eigenvalues. Significant nonzero eigenvalues can be identified by inspecting the pattern of the corresponding eigenvector with help of the Durbin-Watson statistic. As a result, the correct number of chemical reactions can be obtained from significant nonzero eigenvalues. This approach provides a solid basis to obtain insight into subtle spectral variations caused by intermolecular interaction.
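A small sketch of how eigenvector smoothness can separate significant from noise-generated eigenvalues via the Durbin-Watson statistic (the threshold and names are illustrative assumptions): a smooth, structured eigenvector gives a DW value well below 2, while a noise-like eigenvector gives a value near 2.

    import numpy as np

    def durbin_watson(v):
        """DW statistic: sum of squared successive differences over sum of squares."""
        return np.sum(np.diff(v) ** 2) / np.sum(v ** 2)

    def significant_eigenvalues(S, dw_threshold=1.0):
        """Eigen-decompose a synchronous spectrum S and keep eigenvalues whose
        eigenvectors look structured (low DW) rather than noise-like."""
        w, V = np.linalg.eigh(S)
        order = np.argsort(np.abs(w))[::-1]
        return [w[i] for i in order
                if np.abs(w[i]) > 1e-12 and durbin_watson(V[:, i]) < dw_threshold]

    rng = np.random.default_rng(0)
    smooth = np.sin(np.linspace(0, np.pi, 100))
    noise = rng.standard_normal(100)
    print(durbin_watson(smooth), durbin_watson(noise))  # tiny vs approximately 2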
Heat and mass transport during a groundwater replenishment trial in a highly heterogeneous aquifer
NASA Astrophysics Data System (ADS)
Seibert, Simone; Prommer, Henning; Siade, Adam; Harris, Brett; Trefry, Mike; Martin, Michael
2014-12-01
Changes in subsurface temperature distribution resulting from the injection of fluids into aquifers may impact physicochemical and microbial processes as well as basin resource management strategies. We have completed a 2-year field trial in a hydrogeologically and geochemically heterogeneous aquifer below Perth, Western Australia, in which highly treated wastewater was injected for large-scale groundwater replenishment. During the trial, chloride and temperature data were collected from conventional monitoring wells and by time-lapse temperature logging. We used a joint inversion of these solute tracer and temperature data to parameterize a numerical flow and multispecies transport model and to analyze the solute and heat propagation characteristics that prevailed during the trial. The simulation results illustrate that while solute transport is largely confined to the most permeable lithological units, heat transport was also affected by heat exchange with lithological units that have a much lower hydraulic conductivity. Heat transfer by heat conduction was found to significantly influence the complex temporal and spatial temperature distribution, especially with growing radial distance and in aquifer sequences with a heterogeneous hydraulic conductivity distribution. We attempted to estimate spatially varying thermal transport parameters during the data inversion to illustrate the anticipated correlations of these parameters with lithological heterogeneities, but estimates could not be uniquely determined on the basis of the collected data.
Oil shale resources of the Uinta Basin, Utah and Colorado
2010-01-01
The U.S. Geological Survey (USGS) recently completed a comprehensive assessment of in-place oil in oil shales of the Eocene Green River Formation of the Uinta Basin of eastern Utah and western Colorado. The oil shale interval was subdivided into eighteen roughly time-stratigraphic intervals, and each interval was assessed for variations in gallons per ton, barrels per acre, and total barrels in each township. The Radial Basis Function extrapolation method was used to generate isopach and isoresource maps, and to calculate resources. The total in-place resource for the Uinta Basin is estimated at 1.32 trillion barrels. This is only slightly lower than the estimated 1.53 trillion barrels for the adjacent Piceance Basin, Colorado, to the east, which is thought to be the richest oil shale deposit in the world. However, the area underlain by oil shale in the Uinta Basin is much larger than that of the Piceance Basin, and the average gallons per ton and barrels per acre values for each of the assessed oil shale zones are significantly lower in the depocenter in the Uinta Basin when compared to the Piceance Basin. These relations indicate that the oil shale resources in the Uinta Basin are of lower grade and are more dispersed than the oil shale resources of the Piceance Basin.
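As an illustration of the kind of radial basis function interpolation used to build such isoresource maps, here is a minimal scipy sketch (the sample locations and richness values are invented for the example):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # hypothetical sample locations (km) and oil yield (gal/ton)
    points = np.array([[0.0, 0.0], [10.0, 2.0], [3.0, 8.0], [7.0, 7.0], [1.0, 5.0]])
    values = np.array([18.0, 25.0, 31.0, 22.0, 27.0])

    rbf = RBFInterpolator(points, values)  # thin-plate spline kernel by default

    # evaluate on a regular grid to produce an isoresource surface
    xg, yg = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
    grid = np.column_stack([xg.ravel(), yg.ravel()])
    surface = rbf(grid).reshape(xg.shape)
    print(surface.mean())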
Methods for estimating streamflow at mountain fronts in southern New Mexico
Waltemeyer, S.D.
1994-01-01
The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage area and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
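A minimal sketch of the kind of log-linear regression used to relate mean annual streamflow to drainage area and precipitation (the data and fitted coefficients below are invented for illustration; the report's actual equations differ):

    import numpy as np

    # hypothetical gaged basins: drainage area (mi2), mean annual precipitation (in),
    # and mean annual streamflow (acre-ft)
    area = np.array([12.0, 45.0, 8.0, 110.0, 30.0, 60.0])
    precip = np.array([14.0, 18.0, 12.0, 22.0, 16.0, 20.0])
    flow = np.array([900.0, 5200.0, 450.0, 21000.0, 2600.0, 8800.0])

    # fit log10(Q) = b0 + b1*log10(A) + b2*log10(P) by ordinary least squares
    X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(precip)])
    b, *_ = np.linalg.lstsq(X, np.log10(flow), rcond=None)
    print(f"Q ~= {10**b[0]:.3g} * A^{b[1]:.2f} * P^{b[2]:.2f}")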
Fiber Orientation Estimation Guided by a Deep Network.
Ye, Chuyang; Prince, Jerry L
2017-09-01
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for noninvasively imaging the brain's white matter tracts. The fiber orientation (FO) is a key feature computed from dMRI for tract reconstruction. Because the number of FOs in a voxel is usually small, dictionary-based sparse reconstruction has been used to estimate FOs. However, accurate estimation of complex FO configurations in the presence of noise can still be challenging. In this work we explore the use of a deep network for FO estimation in a dictionary-based framework and propose an algorithm named Fiber Orientation Reconstruction guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a smaller dictionary encoding coarse basis FOs to represent diffusion signals. To estimate the mixture fractions of the dictionary atoms, a deep network is designed to solve the sparse reconstruction problem. Second, the coarse FOs inform the final FO estimation, where a larger dictionary encoding a dense basis of FOs is used and a weighted ℓ1-norm regularized least squares problem is solved to encourage FOs that are consistent with the network output. FORDN was evaluated and compared with state-of-the-art algorithms that estimate FOs using sparse reconstruction on simulated and typical clinical dMRI data. The results demonstrate the benefit of using a deep network for FO estimation.
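The second FORDN step is a weighted ℓ1-regularized least-squares problem. Below is a generic proximal-gradient (ISTA) sketch for such a problem; the dictionary, weights, and step size are placeholders, not the paper's implementation:

    import numpy as np

    def weighted_l1_ls(D, y, w, lam=0.1, n_iter=500):
        """Solve min_x 0.5*||D x - y||^2 + lam * sum_i w_i |x_i| by iterative
        soft thresholding. Smaller w_i (e.g., for atoms consistent with the
        network output) are penalized less."""
        L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
        x = np.zeros(D.shape[1])
        for _ in range(n_iter):
            z = x - D.T @ (D @ x - y) / L        # gradient step
            thresh = lam * w / L
            x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # prox step
        return x

    rng = np.random.default_rng(0)
    D = rng.standard_normal((30, 60))
    x_true = np.zeros(60)
    x_true[[3, 17]] = [1.0, -0.8]
    x = weighted_l1_ls(D, D @ x_true, np.ones(60), lam=0.05)
    print(np.flatnonzero(np.abs(x) > 0.1))  # recovered support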
Near Hartree-Fock quality GTO basis sets for the second-row atoms
NASA Technical Reports Server (NTRS)
Partridge, Harry
1987-01-01
Energy optimized, near Hartree-Fock quality Gaussian basis sets ranging in size from (17s12p) to (20s15p) are presented for the ground states of the second-row atoms for Na(2P), Na(+), Na(-), Mg(3P), P(-), S(-), and Cl(-). In addition, optimized supplementary functions are given for the ground state basis sets to describe the negative ions, and the excited Na(2P) and Mg(3P) atomic states. The ratios of successive orbital exponents describing the inner part of the 1s and 2p orbitals are found to be nearly independent of both nuclear charge and basis set size. This provides a method of obtaining good starting estimates for other basis set optimizations.
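The near-constant ratio of successive exponents is the idea behind even-tempered expansions. A small sketch of how one might exploit it to seed a larger basis-set optimization (the exponent values are hypothetical placeholders, not the paper's optimized sets):

    import numpy as np

    def even_tempered(alpha0, beta, n):
        """Generate n Gaussian exponents in a geometric progression
        alpha_k = alpha0 * beta**k (an even-tempered sequence)."""
        return alpha0 * beta ** np.arange(n)

    # suppose the two tightest optimized s exponents of a smaller set are known;
    # their ratio seeds starting guesses for a larger set
    tight = np.array([1.2e6, 1.8e5])  # hypothetical exponents
    beta = tight[1] / tight[0]
    guess = even_tempered(tight[0], beta, 20)
    print(guess[:4])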
Growth and decay losses in Colorado aspen
Thomas E. Wengert Hinds
1977-01-01
Decay in Colorado aspen, Populus tremuloides Michx., was extensively surveyed in 1954-56, but volume estimates were presented on a cubic-foot basis. This paper reanalyzes the data on a board-foot (Scribner) basis. Tree growth and gross and net volumes per acre expected on commercial aspen sites are given. Decay volumes are correlated with site class...
The physical basis for estimating wave-energy spectra with the radar ocean-wave spectrometer
NASA Technical Reports Server (NTRS)
Jackson, Frederick C.
1987-01-01
The derivation of the reflectivity modulation spectrum of the sea surface for near-nadir-viewing microwave radars using geometrical optics is described. The equations required for the derivation are presented. The derived reflectivity modulation spectrum provides data on the physical basis of the radar ocean-wave spectrometer measurements of ocean-wave directional spectra.
Grigor'eva, L A
2012-01-01
Some criteria for the estimation of the biological and calendar age by the fat storage in midgut cells of Ixodes persulcatus males were established on the basis of examination of ticks from the laboratory culture.
EPA Assessment of Risks from Radon in Homes
This 2003 document will serve as a technical basis for EPA’s estimates of risk from radon in homes. It provides estimates of the risk per unit exposure and projects the number of fatal lung cancers occurring in the U.S. population each year due to radon.
Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin
2003-01-01
A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
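A minimal sketch of such a logistic regeneration-probability model (the variables and coefficients are illustrative assumptions, not the fitted model from the paper):

    import numpy as np

    def regen_probability(density, site_index, b=(2.0, -0.003, 0.05)):
        """Probability of obtaining regeneration at a specified density
        (trees/ha), from a logistic equation with assumed coefficients;
        the probability of reaching higher densities is lower."""
        b0, b1, b2 = b
        eta = b0 + b1 * density + b2 * site_index
        return 1.0 / (1.0 + np.exp(-eta))

    print(regen_probability(density=1000.0, site_index=20.0))  # 0.5 for this toy fit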
Antti T. Kaartinen; Jeremy S. Fried; Paul A. Dunham
2002-01-01
Three Landsat TM-based GIS layers were evaluated as alternatives to conventional, photointerpretation-based stratification of FIA field plots. Estimates for timberland area, timber volume, and volume of down wood were calculated for California's North Coast Survey Unit of 2.5 million hectares. The estimates were compared on the basis of standard errors,...
Code of Federal Regulations, 2010 CFR
2010-10-01
Upon what basis may an organization responsible for the supervision of a national service participant certify that the individual successfully completed a term of service? Section 2526.15, Public Welfare Regulations Relating to Public Welfare (Continued), CORPORATION FOR NATIONAL AND COMMUNIT...
Saturated-unsaturated flow to a well with storage in a compressible unconfined aquifer
NASA Astrophysics Data System (ADS)
Mishra, Phoolendra Kumar; Neuman, Shlomo P.
2011-05-01
Mishra and Neuman (2010) developed an analytical solution for flow to a partially penetrating well of zero radius in a compressible unconfined aquifer that allows inferring its saturated and unsaturated hydraulic properties from responses recorded in the saturated and/or unsaturated zones. Their solution accounts for horizontal as well as vertical flow in each zone. It represents unsaturated zone constitutive properties in a manner that is at once mathematically tractable and sufficiently flexible to provide much improved fits to standard constitutive models. In this paper we extend the solution of Mishra and Neuman (2010) to the case of a finite-diameter pumping well with storage; investigate the effects of storage in the pumping well and delayed piezometer response on drawdowns in the saturated and unsaturated zones as functions of position and time; validate our solution against numerical simulations of drawdown in a synthetic aquifer having unsaturated properties described by the van Genuchten (1980)-Mualem (1976) model; use our solution to analyze 11 transducer-measured drawdown records from a seven-day pumping test conducted by University of Waterloo researchers at the Canadian Forces Base Borden in Ontario, Canada; and validate our parameter estimates against manually measured drawdown records in 14 other piezometers at Borden. We compare our estimates of aquifer parameters with those obtained on the basis of all these records by Moench (2008) and on the basis of the 11 transducer-measured drawdown records by Endres et al. (2007), and we compare our estimates of van Genuchten-Mualem parameters with those obtained on the basis of laboratory drainage data from the site by Akindunni and Gillham (1992); finally, we compare our corresponding prediction of how effective saturation varies with elevation above the initial water table under static conditions with a profile based on water contents measured in a neutron access tube at a radial distance of about 5 m from the center of the pumping well.
Measuring diet cost at the individual level: a comparison of three methods
Monsivais, P; Perrigue, M M; Adams, S L; Drewnowski, A
2013-01-01
Background/objectives: Household-level food spending data are not suitable for population-based studies of the economics of nutrition. This study compared three methods of deriving diet cost at the individual level. Subjects/methods: Adult men and women (n=164) completed 4-day diet diaries and a food frequency questionnaire (FFQ). Food expenditures over 4 weeks and supermarket prices for 384 foods were obtained. Diet costs (US$/day) were estimated using: (1) diet diaries and expenditures; (2) diet diaries and supermarket prices; and (3) FFQs and supermarket prices. Agreement between the three methods was assessed on the basis of Pearson correlations and limits of agreement. Income-related differences in diet costs were estimated using general linear models. Results: Diet diaries yielded mean (s.d.) diet costs of $10.04 (4.27) based on Method 1 and $8.28 (2.32) based on Method 2. FFQs yielded mean diet costs of $7.66 (2.72) based on Method 3. Correlations between energy intakes and costs were highest for Method 3 (r2=0.66), lower for Method 2 (r2=0.24) and lowest for Method 1 (r2=0.06). Cost estimates were significantly associated with household incomes. Conclusion: The weak association between food expenditures and food intake using Method 1 makes it least suitable for diet and health research. However, merging supermarket food prices with standard dietary assessment tools can provide estimates of individual diet cost that are more closely associated with food consumed. The derivation of individual diet cost can provide insights into some of the economic determinants of food choice, diet quality and health. PMID:24045791
Code of Federal Regulations, 2010 CFR
2010-07-01
... and determination on application of criteria. 1956.23 Section 1956.23 Labor Regulations Relating to... Approval Procedures § 1956.23 Procedures for certification of completion of development and determination... basis of actual operations, the criteria set forth in §§ 1956.10 and 1956.11 of this part are being...
Forest statistics for Northeast Florida 1970
Joe P. McClure
1970-01-01
This report highlights the principal findings of the fourth Forest Survey of the timber resource in Northeast Florida. The survey was started in February 1969 and completed in November 1969. Findings of the three previous surveys, completed in 1934, 1949, and 1959, provide the basis for measuring changes that have occurred and trends that have developed over the...
Forest statistics for Northwest Florida, 1969
Herbert A. Knight
1969-01-01
This report highlights the principal findings of the fourth Forest Survey of the timber resource in Northwest Florida. The survey was started in July 1968 and completed in March 1969. Findings of the three previous surveys, completed in 1934, 1949, and 1959, provide the basis for measuring changes that have occurred and trends that have developed over the past 35...
Forest statistics for South Florida, 1970
Thomas R. Bellamy; Herbert A. Knight
1970-01-01
This report highlights the principal findings of the fourth Forest Survey of the timber resource in South Florida. The survey was started in February 1970 and completed in March 1970. Findings of the three previous surveys, completed in 1936, 1949, and 1959, provide the basis for measuring changes that have occurred and trends that have developed over the past 34...
76 FR 55719 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-08
...-4K paper notice, an Electronic Data Interchange (EDI) version of the Form Letter ID-4K notice, or an... completed ID-4K back to the RRB, or electronically via EDI or ERS. Completion is voluntary. No changes are...-4E paper notice and the EDI and Internet equivalent versions are transmitted on a daily basis...
Forest statistics for the Northern Coastal Plain of South Carolina 1968
Richard L. Welch
1968-01-01
This report presents the principal findings of the fourth Forest Survey in the Northern Coastal Plain of South Carolina, completed in February 1967. Findings of the three earlier surveys, completed in 1936, 1947, and 1958, provide the basis for measuring the changes that have occurred and the trends that have developed over the past 30 years.
Forest statistics for the Southern Coastal Plain of South Carolina
Noel D. Cost
1968-01-01
This report presents the principal findings of the fourth Forest Survey in the Southern Coastal Plain of South Carolina, completed in February 1967. Findings of the three earlier surveys, completed in 1936, 1947, and 1958, provide the basis for measuring the changes that have occurred and the trends that have developed over the past 30 years.
38 CFR 4.87a - Schedule of ratings-other sense organs.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ratings—other sense organs. Rating: 6275 Sense of smell, complete loss: 10. 6276 Sense of taste, complete loss: 10. Note: Evaluation will be assigned under diagnostic codes 6275 or 6276 only if there is an anatomical or pathological basis for the condition. (Authority: 38 U.S.C. 1155) [64 FR 25210, May 11, 1999...
Measurements of methane emissions at natural gas production sites in the United States.
Allen, David T; Torres, Vincent M; Thomas, James; Sullivan, David W; Harrison, Matthew; Hendler, Al; Herndon, Scott C; Kolb, Charles E; Fraser, Matthew P; Hill, A Daniel; Lamb, Brian K; Miskimins, Jennifer; Sawyer, Robert F; Seinfeld, John H
2013-10-29
Engineering estimates of methane emissions from natural gas production have led to varied projections of national emissions. This work reports direct measurements of methane emissions at 190 onshore natural gas sites in the United States (150 production sites, 27 well completion flowbacks, 9 well unloadings, and 4 workovers). For well completion flowbacks, which clear fractured wells of liquid to allow gas production, methane emissions ranged from 0.01 Mg to 17 Mg (mean = 1.7 Mg; 95% confidence bounds of 0.67-3.3 Mg), compared with an average of 81 Mg per event in the 2011 EPA national emission inventory from April 2013. Emission factors for pneumatic pumps and controllers as well as equipment leaks were both comparable to and higher than estimates in the national inventory. Overall, if emission factors from this work for completion flowbacks, equipment leaks, and pneumatic pumps and controllers are assumed to be representative of national populations and are used to estimate national emissions, total annual emissions from these source categories are calculated to be 957 Gg of methane (with sampling and measurement uncertainties estimated at ± 200 Gg). The estimate for comparable source categories in the EPA national inventory is ~1,200 Gg. Additional measurements of unloadings and workovers are needed to produce national emission estimates for these source categories. The 957 Gg in emissions for completion flowbacks, pneumatics, and equipment leaks, coupled with EPA national inventory estimates for other categories, leads to an estimated 2,300 Gg of methane emissions from natural gas production (0.42% of gross gas production).
A rotor-aerodynamics-based wind estimation method using a quadrotor
NASA Astrophysics Data System (ADS)
Song, Yao; Luo, Bing; Meng, Qing-Hao
2018-02-01
Approaches to estimating the horizontal wind with a quadrotor are reviewed. Wind estimates are obtained from changes in the quadrotor's thrust caused by the wind's effect on the rotors. The basis of the wind estimation method is the aerodynamic formula for rotor thrust, which is verified and calibrated by experiments. A hardware-in-the-loop simulation (HILS) system was built as a testbed; its dynamic model and control structure are demonstrated. Verification experiments on the HILS system showed that the wind estimation method is effective.
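A toy sketch of the inversion idea (the calibration data and quadratic form stand in for the paper's calibrated rotor-thrust aerodynamics; all numbers are assumptions): measure the thrust change at a fixed hover condition, then invert a fitted thrust-versus-wind calibration curve to recover the wind speed.

    import numpy as np

    # hypothetical calibration: wind speed (m/s) vs measured extra thrust (N)
    wind_cal = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    dthrust_cal = np.array([0.00, 0.06, 0.21, 0.48, 0.83, 1.30])

    # fit a quadratic calibration curve dT = a*v^2 + b*v (momentum-theory-like)
    A = np.column_stack([wind_cal ** 2, wind_cal])
    a, b = np.linalg.lstsq(A, dthrust_cal, rcond=None)[0]

    def estimate_wind(dthrust):
        """Invert the fitted quadratic for the positive root."""
        return (-b + np.sqrt(b ** 2 + 4 * a * dthrust)) / (2 * a)

    print(f"{estimate_wind(0.5):.2f} m/s")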
Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie
2016-12-01
To determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores alone compare with values derived from all community F&B providers. F&B turnover quantity and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data from all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from stores alone (plus only the primary store in multiple-store communities) were expressed as a proportion of complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and from the primary store alone closely aligned with complete provider estimates (≤0.9% absolute). Food types were similar using combined stores, primary store or complete provider turnover. Evaluating combined stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Kun; Zhao Hongmei; Wang Caixia
Bromoiodomethane photodissociation in the low-lying excited states has been characterized using unrestricted Hartree-Fock, configuration-interaction-singles, and complete active space self-consistent field calculations with the SDB-aug-cc-pVTZ, aug-cc-pVTZ, and 3-21G** basis sets. According to the results for the vertical excitation energies and oscillator strengths of these low-lying excited states, bond selectivity is predicted. Subsequently, the minimum energy paths of the first excited singlet state and the third excited state for the dissociation reactions were calculated using the complete active space self-consistent field method with the 3-21G** basis set. Good agreement is found between the calculations and experimental data. The relationships between excitations, the electronic structures at Franck-Condon points, and bond selectivity are discussed.
Wielgosz, Andreas; Robinson, Christopher; Mao, Yang; Jiang, Ying; Campbell, Norm R C; Muthuri, Stella; Morrison, Howard
2016-06-01
The standard for population-based surveillance of dietary sodium intake is 24-hour urine testing; however, this may be affected by incomplete urine collection. The impact of different indirect methods of assessing completeness of collection on estimated sodium ingestion has not been established. The authors enlisted 507 participants from an existing community study in 2009 to collect 24-hour urine samples. Several methods of assessing completeness of urine collection were tested. Mean sodium intake varied between 3648 mg/24 h and 7210 mg/24 h depending on the method used. Excluding urine samples collected for longer or shorter than 24 hours increased the estimated urine sodium excretion, even when corrections for the variation in timed collections were applied. Until an accurate method of indirectly assessing completeness of urine collection is identified, the gold standard of administering para-aminobenzoic acid is recommended. Efforts to ensure participants collect complete urine samples are also warranted. ©2015 Wiley Periodicals, Inc.
Buckwalter, T.F.; Squillace, P.J.
1995-01-01
Hydrologic data were evaluated from four areas of western Pennsylvania to estimate the minimum depth of well surface casing needed to prevent contamination of most of the fresh ground-water resources by oil and gas wells. The areas are representative of the different types of oil and gas activities and of the ground-water hydrology of most sections of the Appalachian Plateaus Physiographic Province in western Pennsylvania. Approximate delineation of the base of the fresh ground-water system was attempted by interpreting the following hydrologic data: (1) reports of freshwater and saltwater in oil and gas well-completion reports, (2) water well-completion reports, (3) geophysical logs, and (4) chemical analyses of well water. Because of the poor quality and scarcity of ground-water data, the altitude of the base of the fresh ground-water system in the four study areas cannot be accurately delineated. Consequently, minimum surface-casing depths for oil and gas wells cannot be estimated with confidence. Conscientious and reliable reporting of freshwater and saltwater during drilling of oil and gas wells would expand the existing data base. Reporting of field specific conductance of ground water would greatly enhance the value of the reports of ground water in oil and gas well-completion records. Water-bearing zones in bedrock are controlled mostly by the presence of secondary openings. The vertical and horizontal discontinuity of secondary openings may be responsible, in part, for large differences in altitudes of freshwater zones noted on completion records of adjacent oil and gas wells. In upland and hilltop topographies, maximum depths of fresh ground water are reported from several hundred feet below land surface to slightly more than 1,000 feet, but the few deep reports are not substantiated by results of laboratory analyses of dissolved-solids concentrations. Past and present drillers for shallow oil and gas wells commonly install surface casing to below the base of readily observed fresh ground water. Casing depths are selected generally to maximize drilling efficiency and to stop freshwater from entering the well and subsequently interfering with hydrocarbon recovery. The depths of surface casing generally are not selected with ground-water protection in mind. However, on the basis of existing hydrologic data, most freshwater aquifers generally are protected with current casing depths. Minimum surface-casing depths for deep gas wells are prescribed by Pennsylvania Department of Environmental Resources regulations and appear to be adequate to prevent ground-water contamination, in most respects, for the only study area with deep gas fields examined in Crawford County.
Rossi, Carla
2013-06-01
The size of the illicit drug market is an important indicator for assessing the impact on society of a significant part of the illegal economy and for evaluating drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods based on indirect measures and on data from various sources, such as administrative data sets and surveys. The combined use of several methodologies and data sets makes it possible to reduce the biases and inaccuracies of estimates obtained from each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data sets. First, the retail dealer prevalence was estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost were estimated, allowing the size of the drug market to be estimated. The estimates were obtained using both a supply-side approach and a demand-side approach, and the two were compared. These results are in turn used to estimate the interception rate for the different substances in terms of the value of the substance seized relative to the total value of the substance to be sold at retail prices.
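A stylized sketch of the multiplier logic described above (every number below is an invented placeholder): scale a known count by the estimated probability of appearing in the data, then combine prevalence with average quantities and unit prices.

    # hypothetical inputs for a single substance
    dealers_known = 12_000        # dealers appearing in administrative data
    p_dealer_observed = 0.25      # assumed probability a dealer is recorded
    users_per_dealer = 30.0       # assumed ratio from survey data

    dealers_total = dealers_known / p_dealer_observed
    users_total = dealers_total * users_per_dealer

    grams_per_user_year = 40.0    # assumed average annual consumption
    retail_price_per_gram = 60.0  # assumed unit cost (EUR)

    market_value = users_total * grams_per_user_year * retail_price_per_gram
    print(f"estimated retail market: EUR {market_value / 1e9:.2f} billion")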
The South African Tuberculosis Care Cascade: Estimated Losses and Methodological Challenges
Naidoo, Pren; Theron, Grant; Rangaka, Molebogeng X; Chihota, Violet N; Vaughan, Louise; Brey, Zameer O; Pillay, Yogan
2017-01-01
Background While tuberculosis incidence and mortality are declining in South Africa, meeting the goals of the End TB Strategy requires an invigorated programmatic response informed by accurate data. Enumerating the losses at each step in the care cascade enables appropriate targeting of interventions and resources. Methods We estimated the tuberculosis burden; the number and proportion of individuals with tuberculosis who accessed tests, had tuberculosis diagnosed, initiated treatment, and successfully completed treatment for all tuberculosis cases, for those with drug-susceptible tuberculosis (including human immunodeficiency virus (HIV)–coinfected cases) and rifampicin-resistant tuberculosis. Estimates were derived from national electronic tuberculosis register data, laboratory data, and published studies. Results The overall tuberculosis burden was estimated to be 532,005 cases (range, 333,760–764,480 cases), with successful completion of treatment in 53% of cases. Losses occurred at multiple steps: 5% at test access, 13% at diagnosis, 12% at treatment initiation, and 17% at successful treatment completion. Overall losses were similar among all drug-susceptible cases and those with HIV coinfection (54% and 52%, respectively, successfully completed treatment). Losses were substantially higher among rifampicin-resistant cases, with only 22% successfully completing treatment. Conclusion Although the vast majority of individuals with tuberculosis engaged the public health system, just over half were successfully treated. Urgent efforts are required to improve implementation of existing policies and protocols to close gaps in tuberculosis diagnosis, treatment initiation, and successful treatment completion. PMID:29117342
Allometric scaling theory applied to FIA biomass estimation
David C. Chojnacky
2002-01-01
Tree biomass estimates in the Forest Inventory and Analysis (FIA) database are derived from numerous methodologies whose abundance and complexity raise questions about consistent results throughout the U.S. A new model based on allometric scaling theory ("WBE") offers simplified methodology and a theoretically sound basis for improving the reliability and...
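For context, a minimal sketch of the allometric form involved: WBE theory predicts aboveground biomass scaling with stem diameter to the 8/3 power, with the intercept calibrated to data (the value of a below is an illustrative placeholder):

    import numpy as np

    def wbe_biomass(dbh_cm, a=0.1):
        """Allometric scaling biomass model M = a * D^(8/3), with D the
        diameter at breast height; a must be fitted to inventory data."""
        return a * dbh_cm ** (8.0 / 3.0)

    for d in (10.0, 20.0, 40.0):
        print(d, round(wbe_biomass(d), 1))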
High-throughput exposure modeling to support prioritization of chemicals in personal care products
We demonstrate the application of a high-throughput modeling framework to estimate exposure to chemicals used in personal care products (PCPs). As a basis for estimating exposure, we use the product intake fraction (PiF), defined as the mass of chemical taken by an individual or ...
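A stylized arithmetic sketch of how a product intake fraction converts product use into chemical intake (all values are invented placeholders, not the framework's parameters):

    # illustrative product-intake-fraction calculation
    pif = 0.15                   # assumed fraction of chemical in product taken in
    chem_mass_per_use_mg = 2.0   # assumed chemical mass per product application
    uses_per_day = 1.5           # assumed use frequency

    daily_intake_mg = pif * chem_mass_per_use_mg * uses_per_day
    print(daily_intake_mg)  # 0.45 mg/day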
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Adah S.; Ostrom, Quinn T.; Kruchko, Carol
2016-12-29
Complete prevalence proportions illustrate the burden of disease in a population. Here, this study estimates the 2010 complete prevalence of malignant primary brain tumors overall and by Central Brain Tumor Registry of the United States (CBTRUS) histology groups, and compares the brain tumor prevalence estimates to the complete prevalence of other common cancers as determined by the Surveillance, Epidemiology, and End Results Program (SEER) by age at prevalence (2010): children (0–14 y), adolescent and young adult (AYA) (15–39 y), and adult (40+ y).
Hu, Yu; Chen, Yaping
2017-07-11
Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics. ZJIIS has become an alternative way to quickly assess vaccination coverage. To assess the current completeness and accuracy of the vaccination coverage estimates derived from ZJIIS, we compared the estimates from ZJIIS with the estimates from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the 2772 children enrolled in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding estimates obtained through the survey, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the date recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from it. Therefore, periodic coverage surveys currently provide more complete and reliable results than estimates based on ZJIIS alone. However, further improvement of the completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutta, S.; Saha, J. K.; Chandra, R.
The Rayleigh-Ritz variational technique with a Hylleraas basis set is tested for the first time to estimate the structural modifications of a lithium atom embedded in a weakly coupled plasma environment. The Debye-Hückel potential is used to mimic the weakly coupled plasma environment. The wave functions for both the helium-like lithium ion and the lithium atom are expanded in the explicitly correlated Hylleraas-type basis set, which fully accounts for the electron-electron correlation effect. Due to continuum lowering under the plasma environment, the ionization potential of the system gradually decreases, leading to destabilization of the atom. The excited states destabilize at a lower value of the plasma density. The estimated ionization potential agrees fairly well with the few available theoretical estimates. The variation of one- and two-particle moments, dielectric susceptibility and magnetic shielding constant with respect to plasma density is also discussed in detail.
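For reference, a small sketch of the screened (Debye-Hückel) potential used to mimic the plasma, in atomic units (the Debye length value is an arbitrary example):

    import numpy as np

    def debye_huckel(r, Z=3.0, lambda_d=10.0):
        """Screened Coulomb potential V(r) = -Z * exp(-r/lambda_d) / r in
        atomic units; lambda_d is the Debye screening length. As
        lambda_d -> infinity the bare Coulomb potential is recovered."""
        return -Z * np.exp(-r / lambda_d) / r

    r = np.linspace(0.1, 20.0, 5)
    print(debye_huckel(r))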
Li, Meina; Kwak, Keun-Chang; Kim, Youn Tae
2016-01-01
Conventionally, indirect calorimetry has been used to estimate oxygen consumption in an effort to accurately measure human body energy expenditure. However, calorimetry requires the subject to wear a mask that is neither convenient nor comfortable. The purpose of our study is to develop a patch-type sensor module with an embedded incremental radial basis function neural network (RBFNN) for estimating the energy expenditure. The sensor module contains one ECG electrode and a three-axis accelerometer, and can perform real-time heart rate (HR) and movement index (MI) monitoring. The embedded incremental network includes linear regression (LR) and RBFNN based on context-based fuzzy c-means (CFCM) clustering. This incremental network is constructed by building a collection of information granules through CFCM clustering that is guided by the distribution of error of the linear part of the LR model. PMID:27669249
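A minimal Gaussian RBF-network regression sketch in the spirit of the module described (centers here come from crude random selection rather than context-based fuzzy c-means, and all data are synthetic placeholders):

    import numpy as np

    def rbf_features(X, centers, sigma):
        """Gaussian radial basis functions evaluated at the rows of X."""
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    rng = np.random.default_rng(1)
    # synthetic (heart rate, movement index) -> energy expenditure data
    X = rng.uniform([60, 0.0], [160, 1.0], size=(200, 2))
    y = 0.05 * X[:, 0] + 4.0 * X[:, 1] + rng.normal(0, 0.2, 200)

    centers = X[rng.choice(len(X), 10, replace=False)]  # crude center choice
    Phi = np.column_stack([rbf_features(X, centers, sigma=20.0),
                           X, np.ones(len(X))])         # RBF plus linear part
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    print(np.abs(Phi @ w - y).mean())                   # training error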
Development of the hybrid sulfur cycle for use with concentrated solar heat. I. Conceptual design
Gorensek, Maximilian B.; Corgnale, Claudio; Summers, William A.
2017-07-27
We propose a detailed conceptual design of a solar hybrid sulfur (HyS) cycle. Numerous design tradeoffs, including process operating conditions and strategies, methods of integration with solar energy sources, and solar design options were considered. A baseline design was selected, and process flowsheets were developed. Pinch analyses were performed to establish the limiting energy efficiency. Detailed material and energy balances were completed, and a full stream table prepared. Design assumptions include: location in the southwest US desert, a falling-particle concentrated solar receiver, indirect heat transfer via pressurized helium, continuous operation with thermal energy storage, a liquid-fed electrolyzer with PBI membrane, and a bayonet-type acid decomposer. Thermochemical cycle efficiency for the HyS process was estimated to be 35.0%, LHV basis. The solar-to-hydrogen (STH) energy conversion ratio was 16.9%. This exceeds the Year 2015 DOE STCH target of STH >10% and shows promise for meeting the Year 2020 target of 20%.
NASA Astrophysics Data System (ADS)
Bazlov, A. I.; Tsarkov, A. A.; Ketov, S. V.; Suryanarayana, C.; Louzguine-Luzgin, D. V.
2018-02-01
The effect of multiple alloying elements on the glass-forming ability, thermal stability, and crystallization behavior of Zr-based glass-forming alloys was studied in the present work. We investigated the effect of complete or partial substitution of Ti and Ni with similar early and late transition metals, respectively, on the glass-forming ability and crystallization behavior of the Zr50Ti10Cu20Ni10Al10 alloy. Poor correlation was observed between the various parameters said to indicate glass-forming ability and the critical size of the obtained glassy samples. The importance of the width of the crystallization interval is emphasized. The kinetics of primary crystallization, i.e., the rates of nucleation and growth of the nuclei of primary crystals, is very different from that of the eutectic alloys. Thus, it is difficult to estimate the glass-forming ability only on the basis of empirical parameters that do not take into account the crystallization behavior and the crystallization interval.
Data dependent systems approach to modal analysis Part 1: Theory
NASA Astrophysics Data System (ADS)
Pandit, S. M.; Mehta, N. P.
1988-05-01
The concept of Data Dependent Systems (DDS) and its applicability in the context of modal vibration analysis is presented. The ability of the DDS difference equation models to provide a complete representation of a linear dynamic system from its sampled response data forms the basis of the approach. The models are decomposed into deterministic and stochastic components so that system characteristics are isolated from noise effects. The modelling strategy is outlined, and the method of analysis associated with modal parameter identification is described in detail. Advantages and special features of the DDS methodology are discussed. Since the correlated noise is appropriately and automatically modelled by the DDS, the modal parameters are shown to be estimated very accurately and hence no preprocessing of the data is needed. Complex mode shapes and non-classical damping are as easily analyzed as the classical normal mode analysis. These features are illustrated by using simulated data in this Part I and real data on a disc-brake rotor in Part II.
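The core DDS idea, fitting a difference-equation model to sampled response data and reading modal parameters off its characteristic roots, can be sketched as follows. This is a bare least-squares AR fit on synthetic single-mode data; the full DDS models also carry the moving-average (noise) part that the paper credits for making preprocessing unnecessary.

# Identify frequency and damping of a 5 Hz, 2%-damped mode from noisy samples.
import numpy as np

dt = 0.01                                       # sample interval (s)
wn, zeta = 2*np.pi*5.0, 0.02                    # true modal parameters
t = np.arange(0, 10, dt)
y = np.exp(-zeta*wn*t) * np.sin(wn*np.sqrt(1 - zeta**2)*t)
y += np.random.default_rng(1).normal(0, 1e-3, t.size)

p = 4                                           # AR model order
A = np.column_stack([y[p - k - 1:-k - 1] for k in range(p)])
a, *_ = np.linalg.lstsq(A, y[p:], rcond=None)   # fit y[n] = sum_k a_k y[n-k]

roots = np.roots(np.r_[1.0, -a])                # discrete characteristic roots
s = np.log(roots[np.abs(roots.imag) > 1e-8]) / dt   # map z-plane to s-plane
freq, damp = np.abs(s)/(2*np.pi), -s.real/np.abs(s)
print("modes (Hz, damping):", sorted(set(zip(freq.round(2), damp.round(3)))))

The printed pair should land near (5.0, 0.02); in this toy setting any spurious computational roots show up with implausible damping and are discarded by inspection.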
Dynamic simulation of 10 kW Brayton cryocooler for HTS cable
NASA Astrophysics Data System (ADS)
Chang, Ho-Myung; Park, Chan Woo; Yang, Hyung Suk; Hwang, Si Dole
2014-01-01
Dynamic simulation of a Brayton cryocooler is presented as part of a Korean governmental project to develop 1-3 km HTS cable systems at transmission level on Jeju Island. Thermodynamic design of a 10 kW Brayton cryocooler was completed, and prototype construction is underway on the basis of steady-state operation. This study is the next step, investigating the transient behavior of the cryocooler for two purposes. The first is to simulate and design the cool-down process after a scheduled or unscheduled stoppage. The second is to predict the transient behavior following the variation of external conditions such as cryogenic load or outdoor temperature. The detailed specifications of key components, including plate-fin heat exchangers and cryogenic turbo-expanders, are incorporated into commercial software (Aspen HYSYS) to estimate the temporal change of temperature and flow rate over the cryocooler. An initial cool-down scenario and some examples of the daily variation of the cryocooler are presented and discussed, aiming at stable control schemes for a long cable system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azadi, Sam, E-mail: s.azadi@ucl.ac.uk; Cohen, R. E.
We report an accurate study of interactions between benzene molecules using variational quantum Monte Carlo (VMC) and diffusion quantum Monte Carlo (DMC) methods. We compare these results with density functional theory using different van der Waals functionals. In our quantum Monte Carlo (QMC) calculations, we use accurate correlated trial wave functions including three-body Jastrow factors and backflow transformations. We consider two benzene molecules in the parallel displaced geometry, and find that by highly optimizing the wave function and introducing more dynamical correlation into the wave function, we compute the weak chemical binding energy between aromatic rings accurately. We find optimal VMC and DMC binding energies of −2.3(4) and −2.7(3) kcal/mol, respectively. The best estimate from coupled-cluster theory with perturbative triples at the complete basis set limit is −2.65(2) kcal/mol [Miliordos et al., J. Phys. Chem. A 118, 7568 (2014)]. Our results indicate that QMC methods give chemical accuracy for weakly bound van der Waals molecular interactions, comparable to results from the best quantum chemistry methods.
Spectroscopic studies of Wolf-Rayet stars. III - The WC subclass
NASA Technical Reports Server (NTRS)
Torres, A. V.; Conti, P. S.; Massey, P.
1986-01-01
Wolf-Rayet (W-R) stars, which are the descendants of massive O-type stars, can be subdivided into three groups depending on their spectral appearance: the nitrogen class (WN), the carbon class (WC), and the oxygen class (WO). The present paper is concerned with the WC stars. The assignment of WC subtypes has been based on visual inspections of photographic plates, and one aim of this study is to quantify those visual estimates. The measured ratios of equivalent widths and the FWHM of the 4650 A line for Galactic and LMC stars are presented, and the reclassification of some stars is proposed on this basis. In particular, it is shown that the majority of the LMC WC stars should logically be classified WC4 instead of WC5. Comments on individual stars are provided, and terminal velocities are discussed. A complete overview of the most important spectroscopic features of the WC stars in the optical region is attempted.
Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method
Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni
2017-01-01
Accurate real-time measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. This paper, on the basis of a systematic analysis of the sources of geomagnetic-field measurement error, builds a complete measurement model into which the previously unconsidered geomagnetic daily-variation field is introduced. An extended Kalman-filter based compensation method is proposed, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and that the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
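A heavily simplified stand-in for the compensation scheme is sketched below: a Kalman recursion estimating only a constant three-axis bias against an assumed reference field. The paper's model additionally carries scale, misalignment, and daily-variation terms, and its measurement model is nonlinear, which is what motivates the extended filter; everything here is synthetic.

# Kalman estimation of a constant magnetometer bias (nT), linear toy case.
import numpy as np

rng = np.random.default_rng(2)
true_bias = np.array([120.0, -80.0, 45.0])
x = np.zeros(3)                                  # bias estimate
P = np.eye(3) * 1e4                              # state covariance
R = np.eye(3) * 25.0                             # measurement noise, (5 nT)^2

for _ in range(200):
    ref = rng.normal(0, 30000, 3)                # assumed reference field (nT)
    z = ref + true_bias + rng.normal(0, 5.0, 3)  # raw sensor reading
    y = (z - ref) - x                            # innovation; H = I here, so
    K = P @ np.linalg.inv(P + R)                 # the EKF Jacobian is trivial
    x, P = x + K @ y, (np.eye(3) - K) @ P

print("estimated bias (nT):", x.round(1), " true:", true_bias)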
76 FR 62098 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-06
... public comment on new or revised data collections, the Railroad Retirement Board (RRB) will publish... G-346. The RRB estimates that 4,830 G-346's will be completed annually at an estimated completion... Form G-346sum, which will mirror the information collected on Form G-346, will be used when an employee...
The Analysis of Completely Randomized Factorial Experiments When Observations Are Lost at Random.
ERIC Educational Resources Information Center
Hummel, Thomas J.
An investigation was conducted of the characteristics of two estimation procedures and corresponding test statistics used in the analysis of completely randomized factorial experiments when observations are lost at random. For one estimator, contrast coefficients for cell means did not involve the cell frequencies. For the other, contrast…
25 CFR 700.121 - Statement of the basis for the determination of fair market value.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Statement of the basis for the determination of fair market value. At the time of the initiation of... that such amount— (1) Is the full amount believed by the Commission to be just compensation for the... apportionment of the total estimated just compensation for the partial acquisition will be made. In the event...
25 CFR 700.121 - Statement of the basis for the determination of fair market value.
Code of Federal Regulations, 2011 CFR
2011-04-01
... apportionment of the total estimated just compensation for the partial acquisition will be made. In the event... Statement of the basis for the determination of fair market value. At the time of the initiation of... that such amount— (1) Is the full amount believed by the Commission to be just compensation for the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Jong, Wibe A.; Harrison, Robert J.; Dixon, David A.
A parallel implementation of the spin-free one-electron Douglas-Kroll(-Hess) Hamiltonian (DKH) in NWChem is discussed. An efficient and accurate method to calculate DKH gradients is introduced. It is shown that the use of standard (non-relativistic) contracted basis sets can produce erroneous results for elements beyond the first row. The generation of DKH-contracted cc-pVXZ (X = D, T, Q, 5) basis sets for H, He, B - Ne, Al - Ar, and Ga - Br will be discussed.
DeFelice, Nicholas B; Johnston, Jill E; Gibson, Jacqueline MacDonald
2015-08-18
The magnitude and spatial variability of acute gastrointestinal illness (AGI) cases attributable to microbial contamination of U.S. community drinking water systems are not well characterized. We compared three approaches (drinking water attributable risk, quantitative microbial risk assessment, and population intervention model) to estimate the annual number of emergency department visits for AGI attributable to microorganisms in North Carolina community water systems. All three methods used 2007-2013 water monitoring and emergency department data obtained from state agencies. The drinking water attributable risk method, which was the basis for previous U.S. Environmental Protection Agency national risk assessments, estimated that 7.9% of annual emergency department visits for AGI are attributable to microbial contamination of community water systems. However, the other methods' estimates were more than 2 orders of magnitude lower, each attributing 0.047% of annual emergency department visits for AGI to community water system contamination. The differences in results between the drinking water attributable risk method, which has been the main basis for previous national risk estimates, and the other two approaches highlight the need to improve methods for estimating endemic waterborne disease risks, in order to prioritize investments to improve community drinking water systems.
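The divergence between the three methods comes down to the attributable fraction each assigns; with the fractions quoted above and a hypothetical annual visit count, the reported gap is easy to reproduce:

ed_visits_agi = 100_000          # hypothetical annual ED visits for AGI
frac_dwar = 0.079                # drinking water attributable risk method
frac_other = 0.00047             # QMRA and population intervention model

print("DWAR method:  ", round(ed_visits_agi * frac_dwar), "attributable visits")
print("other methods:", round(ed_visits_agi * frac_other), "attributable visits")
# ~7900 vs ~47: the more-than-two-orders-of-magnitude gap noted in the study.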
J-adaptive estimation with estimated noise statistics
NASA Technical Reports Server (NTRS)
Jazwinski, A. H.; Hipkins, C.
1973-01-01
The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.
The silent mass extinction of insect herbivores in biodiversity hotspots.
Fonseca, Carlos Roberto
2009-12-01
Habitat loss is silently leading numerous insects to extinction. Conservation efforts, however, have not been designed specifically to protect these organisms, despite their ecological and evolutionary significance. On the basis of species-host area equations, parameterized with data from the literature and interviews with botanical experts, I estimated the number of specialized plant-feeding insects (i.e., monophages) that live in 34 biodiversity hotspots and the number committed to extinction because of habitat loss. I estimated that 795,971-1,602,423 monophagous insect species live in biodiversity hotspots on 150,371 endemic plant species, which is 5.3-10.6 monophages per plant species. I calculated that 213,830-547,500 monophagous species are committed to extinction in biodiversity hotspots because of reduction of the geographic range size of their endemic hosts. I provided rankings of biodiversity hotspots on the basis of estimated richness of monophagous insects and on estimated number of extinctions of monophagous species. Extinction rates were predicted to be higher in biodiversity hotspots located along strong environmental gradients and on archipelagos, where high spatial turnover of monophagous species along the geographic distribution of their endemic plants is likely. The results strongly support the overall strategy of selecting priority conservation areas worldwide primarily on the basis of richness of endemic plants. To face the global decline of insect herbivores, one must expand the coverage of the network of protected areas and improve the richness of native plants on private lands.
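The species-host logic can be sketched with a standard species-area curve standing in for the paper's species-host area equations; the exponent z and the habitat-loss fraction below are invented, whereas the plant count and monophage-per-plant range come from the abstract.

# Monophages committed to extinction via host-plant loss (illustrative).
endemic_plants = 150_371
monophages_per_plant = (5.3, 10.6)        # range estimated in the study

z = 0.25                                  # assumed species-area exponent
area_remaining = 0.12                     # assumed fraction of habitat left
plants_committed = endemic_plants * (1 - area_remaining**z)

for m in monophages_per_plant:
    print(f"{m} monophages/plant -> ~{plants_committed * m:,.0f} "
          "monophagous species committed to extinction")

With these made-up parameters the output falls in the same few-hundred-thousand range as the study's 213,830-547,500 estimate, which is the point of the exercise rather than a validation of it.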
NASA Astrophysics Data System (ADS)
Shevyrin, A. A.; Pogosov, A. G.; Budantsev, M. V.; Bakarov, A. K.; Toropov, A. I.; Ishutkin, S. V.; Shesterikov, E. V.; Kozhukhov, A. S.; Kosolobov, S. S.; Gavrilova, T. A.
2012-12-01
Mechanical stresses are investigated in suspended nanowires made on the basis of GaAs/AlGaAs heterostructures. Though there are no intentionally introduced stressor layers in the heterostructure, the nanowires are subject to Euler buckling instability. In the wide nanowires, out-of-plane buckling is observed at lengths significantly (3 times) smaller than the theoretically estimated critical value, while in the narrow nanowires, the experimentally measured critical length of the in-plane buckling coincides with the theoretical estimate. Possible reasons for this discrepancy are considered. The observed peculiarities should be taken into account in the fabrication of nanomechanical and nanoelectromechanical systems.
An algorithm for the basis of the finite Fourier transform
NASA Technical Reports Server (NTRS)
Santhanam, Thalanayar S.
1995-01-01
The Finite Fourier Transformation matrix (F.F.T.) plays a central role in the formulation of quantum mechanics in a finite-dimensional space, studied by the author over the past couple of decades. An outstanding problem that still remains open is to find a complete basis for the F.F.T. In this paper we suggest a simple algorithm to find the eigenvectors of the F.F.T.
Klener, Pavel; Fronkova, Eva; Belada, David; Forsterova, Kristina; Pytlik, Robert; Kalinova, Marketa; Simkovic, Martin; Salek, David; Mocikova, Heidi; Prochazka, Vit; Blahovcova, Petra; Janikova, Andrea; Markova, Jana; Obr, Ales; Berkova, Adela; Kubinyi, Jozef; Vaskova, Martina; Mejstrikova, Ester; Campr, Vit; Jaksa, Radek; Kodet, Roman; Michalova, Kyra; Trka, Jan; Trneny, Marek
2018-02-01
Implementation of cytarabine into induction therapy became standard of care for younger patients with mantle cell lymphoma (MCL). On the basis of its beneficial impact, many centers incorporated cytarabine at lower doses also into first-line treatments of elderly patients. We conducted a multicenter observational study that prospectively analyzed safety and efficacy of alternating 3 + 3 cycles of R-CHOP and R-cytarabine for newly diagnosed transplant-ineligible MCL patients. A total of 73 patients were enrolled with median age 70 years. Most patients had intermediate (39.7%) and high-risk (50.7%) disease according to MCL international prognostic index. Rituximab maintenance was initiated in 58 patients. Overall response rate reached 89% by positron emission tomography-computed tomography, including 75.3% complete remissions. Two patients (2.7%) did not complete the induction therapy because of toxicity. Three patients (4.1%) were considered nonresponders, which led to therapy change before completion of induction. Estimated progression-free survival and overall survival were 51.3% and 68.6% at 4 years, respectively. Mantle cell lymphoma international prognostic index, bulky disease (≥ 5 cm), and achievement of positron emission tomography-negativity independently correlated with progression-free survival. Grade 3 to 4 hematologic and nonhematologic toxicity was documented in 48% and 20.5% patients, respectively. Alternation of R-CHOP and R-cytarabine represents feasible and very effective regimen for elderly/comorbid MCL patients. This study was registered at GovTrial (clinicaltrials.gov) NCT03054883. Copyright © 2017 John Wiley & Sons, Ltd.
Zhao, Chao; Zhang, Honghai; Liu, Guangshuai; Yang, Xiufeng; Zhang, Jin
2016-02-01
Canidae is a family of carnivores comprising about 36 extant species that have been defined as three distinct monophyletic groups based on multi-gene data sets. The Tibetan fox (Vulpes ferrilata) is a member of the family Canidae that is endemic to the Tibetan Plateau and has seldom been the focus of phylogenetic analyses. To clarify the phylogenetic relationship of V. ferrilata with other canids, we sequenced its mitochondrial genome and made the first attempt to clarify the relative phylogenetic position of V. ferrilata in canids using complete mitochondrial genome data. The mitochondrial genome of the Tibetan fox was 16,667 bp, including 37 genes (13 protein-coding genes, 2 rRNA, and 22 tRNA) and a control region. A comparative analysis of the sequenced canid data indicated that they share a similar arrangement, codon usage, and other features. A phylogenetic analysis on the basis of the nearly complete mtDNA genomes of canids agreed with three monophyletic clades, and the Tibetan fox was highly supported as a sister group of the corsac fox within Vulpes. The estimated divergence time suggested a recent split between the Tibetan fox and the corsac fox and rapid evolution in canids. There was no genetic evidence for positive selection related to high-altitude adaptation in the Tibetan fox mtDNA, and future studies should pay more attention to the detection of positive signals in nuclear genes involved in energy and oxygen metabolism. Copyright © 2015 Académie des sciences. Published by Elsevier SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, C.; Box, J. E.; Hock, R. M.; Cogley, J. G.
2011-12-01
Current estimates of global Mountain Glacier and Ice Caps (MG&IC) mass changes are subject to large uncertainties due to incomplete inventories and uncertainties in land surface classification. This presentation features mitigative efforts through the creation of a MODIS-dependent land ice classification system and its application to glacier inventory. Estimates of the total area of mountain glaciers and ice caps (including those in Greenland and Antarctica) vary by about 15%, that is, 680-785 x 10^3 sq. km [IPCC, 2007]. To date only an estimated 40% of glaciers (by area) is inventoried in the World Glacier Inventory (WGI) and made available through the World Glacier Monitoring System (WGMS) and the National Snow and Ice Data Center [NSIDC, 1999]. Cogley [2009] recently compiled a more complete version of the WGI, called WGI-XF, containing records for just over 131,000 glaciers and covering approximately half of the estimated global MG&IC area. The glaciers isolated from the conterminous Antarctic and Greenland ice sheets remain incompletely inventoried in WGI-XF but have been estimated to contribute 35% to the MG&IC sea-level equivalent during 1961-2004 [Hock et al., 2009]. Together with Arctic Canada and Alaska, these regions alone make up almost 90% of the area missing from the global WGI-XF inventory. Global mass balance projections tend to exclude ice masses in Greenland and Antarctica due to the paucity of basic inventory data such as area, number of glaciers, or size distributions. We address the need for an accurate Greenland and Antarctic Peninsula land surface classification with a novel glacier surface classification and inventory based on NASA Moderate Resolution Imaging Spectroradiometer (MODIS) data gridded at 250 m pixel resolution. The presentation includes a sensitivity analysis for surface mass balance as it depends on the land surface classification. Works Cited: Cogley, J. G. (2009), A more complete version of the World Glacier Inventory, Ann. Glaciol. 50(53). Hock, R., M. de Woul, V. Radić, and M. Dyurgerov (2009), Mountain glaciers and ice caps around Antarctica make a large sea-level rise contribution, Geophys. Res. Lett. 36, L07501, doi:10.1029/2008GL037020. IPCC (2007), Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (eds. Solomon, S., et al.), Cambridge University Press, Cambridge, UK.
Scott, Richard B; Eccles, Fiona; Lloyd, Andrew; Carpenter, Katherine
2008-01-01
Background: The neuropsychological arm of the International Subarachnoid Aneurysm Trial (N-ISAT) evaluated the cognitive outcome of 573 patients at 12 months following subarachnoid haemorrhage (SAH). The assessment included 29 psychometric measures, yielding a substantial and complex body of data. We have explored alternative and optimal methodologies for analysing and summarising these data to enable the estimation of a cognitive complication rate (CCR). Any differences in cognitive outcome between the two arms of the trial are not, however, reported here. Methods: All individual test scores were transformed into z-scores and a 5th percentile cut-off for impairment was established. A principal components analysis (PCA) was applied to these data to mathematically transform correlated test scores into a smaller number of uncorrelated principal components, or cognitive 'domains'. These domains formed the basis for grouping and weighting individual patients' impaired scores on individual measures. In order to increase the sample size, a series of methods for handling missing data were applied. Results: We estimated a 34.1% CCR in all those patients seen face-to-face, rising to 37.4% with the inclusion of patients who were unable to attend assessment for reasons related to the index SAH. This group demonstrated significantly more self- and carer/relative-rated disability on a Health Related Quality of Life questionnaire than patients classified as having no functionally significant cognitive deficits. Conclusion: Evaluating neuropsychological outcome in a large RCT involves unique methodological and organizational challenges. We have demonstrated how these problems may be addressed by re-classifying interval data from 29 measures into a dichotomous CCR. We have presented a 'sliding scale' of undifferentiated individual cognitive impairments and then, on the basis of PCA-derived cognitive 'domains', included consideration of the distribution of impairments in these terms. In order to maximize sample size we have suggested ways for patients who did not complete the entire protocol to be included in the overall CCR. ISAT trial registration ISRCTN49866681 PMID:18341689
Near-Earth-object survey progress and population of small near-Earth asteroids
NASA Astrophysics Data System (ADS)
Harris, A.
2014-07-01
Estimating the total population vs. size of NEAs and the completion of surveys are the same problem, since the total population is just the number discovered divided by the estimated completion. I review the method of completion estimation based on the ratio of re-detected objects to total detections (known plus new discoveries). The method is quite general and can be used for population estimations of all sorts, from wildlife to various classes of solar system bodies. Since 2001, I have been making estimates of population and survey progress approximately every two years; my latest estimate includes NEA discoveries up to August 2012, and I plan to present an update at the meeting. All asteroids of a given size are not equally easy to detect because of specific orbital geometries. Thus a model of the orbital distribution is necessary, and computer simulations using those orbits need to establish the relation between the raw re-detection ratio and the actual completion fraction. This can be done for any sub-group population, allowing one to estimate the population of a subgroup and the expected current completion. Once a reliable survey computer model has been developed and "calibrated" with respect to actual survey re-detections versus size, it can be extrapolated to smaller sizes to estimate completion even at very small sizes where re-detections are rare or even zero. I have recently investigated the subgroup of extremely low encounter velocity NEAs, the class of interest for the Asteroid Redirect Mission (ARM) recently proposed by NASA. I found that asteroids of diameter ~10 m with encounter velocity relative to the Earth lower than 2.5 km/sec are detected by current surveys nearly 1,000 times more efficiently than the general background of NEAs of that size. Thus the current completion of these slow relative velocity objects may be around 1%, compared to 10^{-6} for objects of that size in the general velocity distribution. Current surveys are nowhere near complete, but there may be fewer such objects than have been suggested. This conclusion is reinforced by the fact that at least a couple of such discovered objects are known to be not real asteroids but spent rocket bodies in heliocentric orbit, of which there are only of the order of a hundred. Brown et al. (Nature 503, 238-241, 2013) recently suggested that the population of small NEAs in the size range from roughly 5 to 50 meters in diameter may have been substantially under-estimated. To be sure, the greatest uncertainty in population estimates is in that range, since there are very few bolide events to use for estimation, and the surveys are extremely incomplete in that size range, so a factor of 3 or so discrepancy is not significant. However, the population estimated from surveys, carried still smaller, where the bolide frequency becomes more secure, disagrees with the bolide estimate by even less than a factor of 3 and in fact intersects it at about 3 m diameter. On the other hand, the shallow-sloping size-frequency distribution derived from the sparse large bolide data diverges badly from the survey estimates, even by 100-200 m diameter, in sizes where the survey estimates become increasingly reliable.
It appears that the bolide data provide a good "anchor" for the population in the size range up to about 5 m diameter, but above that one might do better just connecting that population with a straight line (on a log-log plot) to the survey-determined population at larger sizes, 50-100 m diameter or so.
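The re-detection ratio method reduces to simple arithmetic once the counts are in hand; the numbers below are invented, and in practice a survey simulation calibrates the raw ratio against the orbital selection effects discussed above.

# Population from re-detection ratio (illustrative counts).
redetections = 180        # detections matching already-catalogued NEAs
new_discoveries = 60      # detections that turn out to be new objects
known_total = 950         # catalogued NEAs in this size bin

completion = redetections / (redetections + new_discoveries)    # ~0.75
population = known_total / completion
print(f"completion ~ {completion:.0%}, population estimate ~ {population:.0f}")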
NASA Technical Reports Server (NTRS)
Lindh, Roland; Lee, Timothy J.; Bernhardsson, Anders; Persson, B. Joakim; Karlstroem, Gunnar; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The autoaromatization of (Z)-hex-3-ene-1,5-diyne to the singlet biradical para-benzyne has been reinvestigated by state-of-the-art ab initio methods. Previous CCSD(T)/6-31G(d,p) and CASPT2[0]/ANO[C(5s4p2d1f)/H(3s2p)] calculations estimated the reaction heat at 298 K to be 8-10 and 4.9 ± 3.2 kcal/mol, respectively. Recent NO- and oxygen-dependent trapping experiments and collision-induced dissociation threshold energy experiments estimate the heat of reaction to be 8.5 ± 1.0 kcal/mol at 470 K (recomputed to 9.5 ± 1.0 at 298 K) and 8.4 ± 3.0 kcal/mol at 298 K, respectively. New theoretical estimates at 298 K predict the values at the basis set limit for the CCSD(T) and CASPT2[g1] methods to be 12.7 ± 2.0 and 5.4 ± 2.0 kcal/mol, respectively. The experimentally predicted electronic contribution to the heat of activation is 28.6 kcal/mol. This can be compared with 25.5 and 29.8 kcal/mol from the CASPT2[g1] and CCSD(T) methods, respectively. The new study has in particular improved on the one-particle basis set for the CCSD(T) method as compared to earlier studies. For the CASPT2 investigation the better-suited CASPT2[g1] approximation is utilized. The original CASPT2 method, CASPT2[0], systematically favors open-shell systems relative to closed-shell systems; this was previously corrected empirically. The study shows that the energy difference between CCSD(T) and CASPT2[g1] at the basis set limit is estimated to be 7 ± 2 kcal/mol. The study also demonstrates that the estimated heat of reaction is very sensitive to the quality of the basis set.
NASA Technical Reports Server (NTRS)
Boland, J. S., III
1975-01-01
A general simulation program (GSP) is presented involving nonlinear state estimation for space vehicle flight navigation systems. A complete explanation of the iterative guidance mode guidance law and derivations of the dynamics, coordinate frames, and state estimation routines are given so as to fully clarify the assumptions and approximations involved, so that simulation results can be placed in their proper perspective. A complete set of computer acronyms and their definitions, as well as explanations of the subroutines used in the GSP simulator, are included. To facilitate input/output, a complete set of compatible numbers, with units, is included to aid in data development. Format specifications, output data phrase meanings and purposes, and computer card data input are clearly spelled out. A large number of simulation and analytical studies were used to validate the simulator itself as well as various data runs.
WE-D-BRF-05: Quantitative Dual-Energy CT Imaging for Proton Stopping Power Computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, D; Williamson, J; Siebers, J
2014-06-15
Purpose: To extend the two-parameter separable basis-vector model (BVM) to estimation of proton stopping power from dual-energy CT (DECT) imaging. Methods: BVM assumes that the photon cross sections of any unknown material can be represented as a linear combination of the corresponding quantities for two bracketing basis materials. We show that both the electron density (ρe) and mean excitation energy (Iex) can be modeled by BVM, enabling stopping power to be estimated from the Bethe-Bloch equation. We have implemented an idealized post-processing dual-energy imaging (pDECT) simulation consisting of monoenergetic 45 keV and 80 keV scanning beams with polystyrene-water and water-CaCl2 solution basis pairs for soft tissues and bony tissues, respectively. The coefficients of 24 standard ICRU tissue compositions were estimated by pDECT. The corresponding ρe, Iex, and stopping power tables were evaluated via BVM and compared to tabulated ICRU 44 reference values. Results: BVM-based pDECT was found to estimate ρe and Iex with average and maximum errors of 0.5% and 2%, respectively, for the 24 tissues. Proton stopping power values at 175 MeV show average/maximum errors of 0.8%/1.4%. For adipose, muscle, and bone, these errors yield range prediction accuracies better than 1%. Conclusion: A new two-parameter separable DECT model (BVM) for estimating proton stopping power was developed. Compared to competing parametric-fit DECT models, BVM has comparable prediction accuracy without necessitating iterative solution of nonlinear equations or a sample-dependent empirical relationship between effective atomic number and Iex. Based on the proton BVM, an efficient iterative statistical DECT reconstruction model is under development.
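The BVM step itself is a small linear solve: express the measured attenuation at the two beam energies as a weighted sum of the two basis materials, then carry the weights over to electron density and mean excitation energy. The Python sketch below uses invented attenuation values and a plain weighted mean for Iex, which is a simplification of whatever interpolation rule the actual implementation uses.

# Two-material basis decomposition for one DECT voxel (illustrative numbers).
import numpy as np

# Rows: 45 keV, 80 keV; columns: basis 1 (polystyrene-like), basis 2 (water-like)
mu_basis = np.array([[0.220, 0.268],
                     [0.180, 0.184]])         # linear attenuation, 1/cm
rho_e_basis = np.array([3.24e23, 3.34e23])    # electron densities, 1/cm^3
I_basis = np.array([68.7, 75.0])              # mean excitation energies, eV

mu_meas = np.array([0.255, 0.183])            # measured attenuation pair
w = np.linalg.solve(mu_basis, mu_meas)        # BVM weights (w1, w2)

rho_e = w @ rho_e_basis
I_ex = w @ I_basis                            # simplified interpolation
print(f"weights = {w.round(3)}, rho_e = {rho_e:.3e} /cm^3, Iex = {I_ex:.1f} eV")
# rho_e and Iex then feed the Bethe-Bloch equation to give the stopping power.

Because the solve is a closed-form 2x2 inversion, the per-voxel cost is trivial, which is consistent with the abstract's claim that BVM avoids iterative solution of nonlinear equations.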
Genetic basis of between-individual and within-individual variance of docility.
Martin, J G A; Pirotta, E; Petelle, M B; Blumstein, D T
2017-04-01
Between-individual variation in phenotypes within a population is the basis of evolution. However, evolutionary and behavioural ecologists have mainly focused on estimating between-individual variance in mean traits and have neglected variation in within-individual variance, or the predictability of a trait. In fact, an important assumption of the mixed-effects models used to estimate between-individual variance in mean traits is that the within-individual residual variance (predictability) is identical across individuals. Individual heterogeneity in the predictability of behaviours is a potentially important effect but is rarely estimated and accounted for. We used 11,389 measures of docility behaviour from 1576 yellow-bellied marmots (Marmota flaviventris) to estimate between-individual variation in both mean docility and its predictability. We then implemented a double hierarchical animal model to decompose the variances of both the mean trait and its predictability into their environmental and genetic components. We found that individuals differed both in their docility and in the predictability of their docility, with a negative phenotypic covariance. We also found significant genetic variance for both mean docility and its predictability but no genetic covariance between the two. This analysis is one of the first to estimate the genetic basis of both the mean trait and the within-individual variance in a wild population. Our results indicate that equal within-individual variance should not be assumed. We demonstrate the evolutionary importance of variation in the predictability of docility and illustrate potential bias in models ignoring variation in predictability. We conclude that variability in the predictability of a trait should not be ignored, and present a coherent approach for its quantification. © 2017 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2017 European Society For Evolutionary Biology.
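The model structure being described can be written compactly (notation ours, a generic double hierarchical form rather than the authors' exact specification): both the mean of docility and the log of its residual variance receive additive genetic (a, a_v) and permanent-environment (p, p_v) random effects.

\[
y_{ij} = \mu + a_i + p_i + \varepsilon_{ij}, \qquad
\varepsilon_{ij} \sim N\!\bigl(0, \sigma^2_{\varepsilon,i}\bigr), \qquad
\log \sigma^2_{\varepsilon,i} = \mu_v + a_{v,i} + p_{v,i}
\]

Setting all individuals to a common \(\sigma^2_{\varepsilon}\) recovers the standard animal model, which is exactly the equal-predictability assumption the authors argue against.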
Link Between Deployment Factors and Parenting Stress in Navy Families
2016-04-11
…eligible participants completed an electronic survey which consisted of demographic information and eight validated psychosocial scales. Sample: The …military personnel and their families on a daily basis: nurses can identify families at risk and intervene early to prevent harm to the family. …variable was parenting stress.
Xiao, Sa; Paldurai, Anandan; Nayak, Baibaswata; Mirande, Armando; Collins, Peter L.
2013-01-01
The complete genome sequence was determined for a highly virulent Newcastle disease virus strain from vaccinated chicken farms in Mexico during outbreaks in 2010. On the basis of phylogenetic analysis, this strain was classified into genotype V of the class II cluster and is closely related to Mexican strains that appeared in 2004-2006. PMID:23409252
Cupp, Pamela K; Atwood, Katharine A; Byrnes, Hilary F; Miller, Brenda A; Fongkaew, Warunee; Chamratrithirong, Aphichat; Rhucharoenpornpanich, Orratai; Rosati, Michael J; Chookhare, Warunee
2013-01-01
This article reports on a combined family-based substance abuse and HIV-prevention intervention targeting families with 13-14-year-old children in Bangkok, Thailand. Families (n = 340) were randomly and proportionally selected from 7 districts in Bangkok with half randomly assigned to an experimental or control condition. Families in the intervention condition were exposed to 5 interactive booklets about adolescent substance use and risky sexual behavior. Trained health educators followed up by phone to encourage completion of each booklet. Primary outcomes reported in this article include whether the intervention increased the frequency of parent-child communication in general or about sexual risk taking in particular as well as whether the intervention reduced discomfort discussing sexual issues. The authors also tested to see whether booklet completion was associated with communication outcomes at the 6-month follow-up. Multivariate findings indicate that the intervention had a significant impact on the frequency of general parent-child communication on the basis of child reports. The intervention had a marginal impact on the frequency of parent-child communication about sexual issues on the basis of parent reports. Booklet completion was associated with reduced discomfort discussing sex and was marginally associated with frequency of parent-child discussion of sex on the basis of parent reports only. These findings indicate that a family-based program can influence communication patterns.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lelic, Muhidin; Avramovic, Bozidar; Jiang, Tony
The objective of this project was to demonstrate the functionality and performance of a Direct Non-iterative State Estimator (DNSE) integrated with NYPA's Energy Management System (EMS) and with the enhanced Real Time Dynamics Monitoring System (RTDMS) synchrophasor platform from Electric Power Group (EPG). DNSE is designed to overcome a major obstacle to the operational use of synchrophasor management systems (SPMS) by providing SPMS applications with a consistent and complete synchrophasor data foundation, in the same way that a traditional EMS State Estimator (SE) provides one to EMS applications. Specifically, DNSE is designed to use synchrophasor measurements collected by a central PDC, Supervisory Control and Data Acquisition (SCADA) measurements, and the EMS network model to obtain the complete state of the utility's operating model at rates that are close to the synchrophasor data rates. In this way, the system is comprehensive in that it covers not only the part of the network that is "visible" via synchrophasors, but also the part that is only "visible" through the SCADA measurements. Visualization needs associated with the use of DNSE results are fulfilled through a suitably enhanced RTDMS, with the enhancements implemented by EPG. This project had the following goals in mind: to advance the deployment of a commercial-grade DNSE software application that relies on synchrophasor and SCADA data; to apply DNSE at other utilities, addressing a generic and fundamental need for "clean" operational data for synchrophasor applications; to provide means for "live" estimated data access by control system operators; to enhance the potential for situational awareness through full system operational model coverage; and to achieve a sub-second execution rate of the Direct Non-iterative State Estimator, eventually at a near-phasor data rate execution speed, i.e., < 0.1 sec. Anticipated benefits from this project are: enhanced reliability and improvements in the economic efficiency of bulk power system planning and operations; provision of "clean" data to other synchrophasor applications; enhancement of situational awareness by providing the full operational model updated at a near-synchrophasor rate; a production-grade software tool that incorporates synchrophasor and SCADA data; and a basis for development of next-generation monitoring and control applications based on both SCADA and PMU data. The Quanta Technology (QT) team worked in collaboration with Electric Power Group (EPG), whose team enhanced its commercial RTDMS to accommodate the requirements posed by the DNSE application. EPG also provided its ePDC and Model-less Data Conditioning (PDVC) software for integration with DNSE+. QT developed the system requirements for DNSE, developed the system architecture, and defined the interfaces between internal DNSE components. The core DNSE algorithm with all surrounding interfaces was named DNSE+. Since the DNSE development was done in a simulated system environment, QT used its PMU simulator, which was enhanced during this project, for development and factory acceptance testing (FAT). SCADA data at this stage were simulated by the commercial PSS/e software. The output of DNSE consists of estimates of the system states in C37.118-2 format, sent to RTDMS for further processing and display. As the number of these states is large, it was necessary to extend the C37.118-2 standard to accommodate large data sets.
This enhancement was implemented in RTDMS. The demonstration of the pre-production DNSE technology was done at NYPA using streaming field data from NYPA PMUs and from its RTUs through their SCADA system. NYPA provided the ICCP interface as well as the Common Information Model (CIM). The relevance of the DNSE+ application is that it provides state estimation of the power system based on a hybrid set of data, consisting of both available PMU data and SCADA measurements. As this is a direct, non-iterative method of calculating the system states, it does not suffer from convergence issues, which are a potential problem for conventional state estimators. Also, it can take any available PMU measurements, so it does not need the high percentage of PMU coverage required by a Linear State Estimator. As the DNSE calculates synchrophasors of the system states (both phase and magnitude) at a sub-second rate, this application can provide a basis for development of the next generation of applications based on both SCADA and PMU data.
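The reason a phasor-based estimator can be direct is that PMU measurements are linear in the rectangular bus-voltage state, so one weighted least-squares solve replaces the Newton iterations of conventional SCADA-only estimation. The sketch below demonstrates this on an invented 3-bus case; it is generic linear estimation, not the DNSE+ algorithm itself, which additionally folds in SCADA data and the EMS network model.

# Direct (non-iterative) linear state estimation from phasor measurements.
import numpy as np

Y12, Y23 = 1/(0.01 + 0.05j), 1/(0.02 + 0.06j)    # line admittances (p.u.)

def stack(Hc):
    # Complex model z = Hc x -> real block form [[Re, -Im], [Im, Re]].
    return np.block([[Hc.real, -Hc.imag], [Hc.imag, Hc.real]])

# Measurements: V1, V2 phasors plus line currents I12 and I23.
Hc = np.array([[1, 0, 0],
               [0, 1, 0],
               [Y12, -Y12, 0],
               [0, Y23, -Y23]], dtype=complex)
H = stack(Hc)                                     # 8 real rows, 6 unknowns

x_true = np.r_[[1.00, 0.98, 0.97], [0.0, -0.02, -0.05]]   # Re then Im parts
z = H @ x_true + np.random.default_rng(3).normal(0, 1e-3, 8)

W = np.eye(8) / 1e-6                              # weights = 1/variance
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z) # single direct solve
print("estimated state:", x_hat.round(4))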
A Dynamical Model of Pitch Memory Provides an Improved Basis for Implied Harmony Estimation.
Kim, Ji Chul
2017-01-01
Tonal melody can imply vertical harmony through a sequence of tones. Current methods for automatic chord estimation commonly use chroma-based features extracted from audio signals. However, the implied harmony of unaccompanied melodies can be difficult to estimate on the basis of chroma content in the presence of frequent nonchord tones. Here we present a novel approach to automatic chord estimation based on the human perception of pitch sequences. We use cohesion and inhibition between pitches in auditory short-term memory to differentiate chord tones and nonchord tones in tonal melodies. We model short-term pitch memory as a gradient frequency neural network, which is a biologically realistic model of auditory neural processing. The model is a dynamical system consisting of a network of tonotopically tuned nonlinear oscillators driven by audio signals. The oscillators interact with each other through nonlinear resonance and lateral inhibition, and the pattern of oscillatory traces emerging from the interactions is taken as a measure of pitch salience. We test the model with a collection of unaccompanied tonal melodies to evaluate it as a feature extractor for chord estimation. We show that chord tones are selectively enhanced in the response of the model, thereby increasing the accuracy of implied harmony estimation. We also find that, like other existing features for chord estimation, the performance of the model can be improved by using segmented input signals. We discuss possible ways to expand the present model into a full chord estimation system within the dynamical systems framework.
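A toy version of the oscillator-bank front end is sketched below, using the canonical-model form dz/dt = z(alpha + i*2*pi*f + beta*|z|^2) + F(t) with an exponential step for the stiff oscillatory term. Lateral inhibition and the full model's resonance machinery are omitted, and all parameter values are illustrative rather than taken from the paper.

# Bank of tonotopically tuned nonlinear oscillators driven by two tones.
import numpy as np

freqs = np.linspace(100.0, 1000.0, 91)       # natural frequencies (Hz)
alpha, beta = -1.0, -10.0                    # linear damping, saturation
z = np.full(freqs.size, 1e-6, dtype=complex)
dt, T = 1e-4, 0.5

for n in range(int(T / dt)):
    t = n * dt
    stim = 0.1 * (np.sin(2*np.pi*220*t) + np.sin(2*np.pi*440*t))
    # Exact step for the diagonal (oscillator) term, Euler step for the input.
    z = z * np.exp(dt * (alpha + 1j*2*np.pi*freqs + beta*np.abs(z)**2)) + dt*stim

salience = np.abs(z)                         # sustained amplitude pattern
for f in freqs[np.argsort(salience)[-3:]][::-1]:
    print(f"salience peak near {f:.0f} Hz")

The amplitude profile peaks at the oscillators tuned to the driven frequencies, which is the "oscillatory trace as pitch salience" readout the abstract describes.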
Estimating carrying capacity with simultaneous nutritional constraints.
Thomas A. Hanley; James J. Rogers
1989-01-01
A new procedure is presented for estimating carrying capacity (the number of animals of a given species that can be supported per unit area of habitat) on the basis of two simultaneous nutritional constraints. It requires specifying the quantity (bio-mass) and quality (chemical composition or digestibility) of available food and the nutritional requirements of the...
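The "simultaneous nutritional constraints" idea can be cast as a small linear program: choose how much of each available food is eaten so that the energy and nitrogen requirements are both met for the largest possible number of animal-days. The forage values and requirements below are invented placeholders, not the parameterization used in the publication.

# Carrying capacity under two simultaneous nutritional constraints (LP sketch).
import numpy as np
from scipy.optimize import linprog

avail = np.array([800.0, 400.0, 150.0])     # food biomass available (kg/ha)
energy = np.array([8.0, 10.5, 12.0])        # digestible energy (MJ/kg)
nitro_g = np.array([8.0, 16.0, 28.0])       # nitrogen (g/kg dry matter)
E_req, N_req, intake_max = 25.0, 3.5, 2.5   # per animal-day: MJ, g, kg DM

# Variables: food eaten e1..e3 (kg/ha) and n (animal-days/ha); maximize n.
c = np.array([0.0, 0.0, 0.0, -1.0])
A_ub = np.array([
    [-energy[0], -energy[1], -energy[2], E_req],    # energy must cover n days
    [-nitro_g[0], -nitro_g[1], -nitro_g[2], N_req], # nitrogen must cover n days
    [1.0, 1.0, 1.0, -intake_max],                   # cannot exceed intake limit
])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(3),
              bounds=[(0, a) for a in avail] + [(0, None)])
print(f"carrying capacity ~ {res.x[-1]:.0f} animal-days per ha")

With these numbers the energy constraint binds first, which is the kind of diagnosis the simultaneous-constraint framing is meant to expose.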
Short term evaluation of harvesting systems for ecosystem management
Michael D. Erickson; Penn Peters; Curt Hassler
1995-01-01
Continuous time/motion studies have traditionally been the basis for productivity estimates of timber harvesting systems. The detailed data from such studies permits the researcher or analyst to develop mathematical relationships based on stand, system, and stem attributes for describing machine cycle times. The resulting equation(s) allow the analyst to estimate...
Designing Large-Scale Multisite and Cluster-Randomized Studies of Professional Development
ERIC Educational Resources Information Center
Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi
2017-01-01
We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…
NASA Technical Reports Server (NTRS)
Davis, P. A.; Penn, L. M. (Principal Investigator)
1981-01-01
A technique is developed for the estimation of total daily insolation on the basis of data derivable from operational polar-orbiting satellites. Although surface insolation and meteorological observations are used in the development, the algorithm is constrained in application by the infrequent daytime polar-orbiter coverage.
Assessing the cost of fuel reduction treatments: a critical review
Bob Rummer
2008-01-01
The basic costs of the operations for implementing fuel reduction treatments are used to evaluate treatment effectiveness, select among alternatives, estimate total project costs, and build national program strategies. However, a review of the literature indicates that there is a questionable basis for many of the general estimates used to date. Different approaches to...
Carrying Backpacks: Physical Effects
ERIC Educational Resources Information Center
Illinois State Board of Education, 2006
2006-01-01
It is estimated that more than 40 million U.S. youth carry school materials in backpacks, routinely carrying books, laptop computers, personal and other items used on a daily basis. The Consumer Product Safety Commission (CPSC) estimates that 7,277 emergency visits each year result from injuries related to backpacks. Injury can occur when a child…
Estimating postfire water production in the Pacific Northwest
Donald F. Potts; David L. Peterson; Hans R. Zuuring
1989-01-01
Two hydrologic models were adapted to estimate postfire changes in water yield in Pacific Northwest watersheds. The WRENSS version of the simulation model PROSPER is used for hydrologic regimes dominated by rainfall: it calculates water available for streamflow on the basis of seasonal precipitation and leaf area index. The WRENSS version of the simulation model WATBAL...
Market projections of cellulose nanomaterial-enabled products- Part 1: Applications
Jo Anne Shatkin; Theodore H. Wegner; E.M. (Ted) Bilek; John Cowie
2014-01-01
Nanocellulose provides a new materials platform for the sustainable production of high-performance nano-enabled products in an array of applications. In this paper, potential applications for cellulose nanomaterials are identified as the first step toward estimating market volume. The overall study, presented in two parts, estimates market volume on the basis of...
Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling
ERIC Educational Resources Information Center
Oort, Frans J.; Jak, Suzanne
2016-01-01
Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients that are reported by a number of independent studies. MASEM typically consist of two stages. The method that has been found to perform best in terms of statistical…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Reda, I.
Radiometric data with known and traceable uncertainty are essential for climate change studies to better understand cloud-radiation interactions and the earth radiation budget. Further, adopting a known and traceable method of estimating uncertainty with respect to SI ensures that the uncertainty quoted for radiometric measurements can be compared based on documented methods of derivation. Therefore, statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This poster provides guidelines and recommended procedures for estimating the uncertainty in calibrations and measurements from radiometers. The approach follows the Guide to the Expression of Uncertainty in Measurement (GUM).
NASA Astrophysics Data System (ADS)
Florian, Ehmele; Michael, Kunz
2016-04-01
Several major flood events have occurred in Germany in the past 15-20 years, especially in the eastern parts along the rivers Elbe and Danube. Examples include the major floods of 2002 and 2013, with an estimated loss of about 2 billion Euros each. The last major flood events in the State of Baden-Württemberg in southwest Germany occurred in the years 1978 and 1993/1994 along the rivers Rhine and Neckar, with an estimated total loss of about 150 million Euros (converted) each. Flood hazard originates from a combination of different meteorological, hydrological, and hydraulic processes. Currently there is no defined methodology available for evaluating and quantifying the flood hazard and related risk for larger areas or whole river catchments instead of single gauges. In order to estimate the probable maximum loss for higher return periods (e.g., 200 years, PML200), a stochastic model approach is designed, since observational data are limited in time and space. In our approach, precipitation is linearly composed of three elements: background precipitation, orographically-induced precipitation, and a convectively-driven part. We use the linear theory of orographic precipitation formation for the stochastic precipitation model (SPM), which is based on fundamental statistics of relevant atmospheric variables. For an adequate number of historic flood events, the corresponding atmospheric conditions and parameters are determined in order to calculate a probability density function (pdf) for each variable. This method covers all theoretically possible scenarios, including those that have not happened yet. This work is part of the FLORIS-SV (FLOod RISk Sparkassen Versicherung) project and establishes the first step of a complete flood-risk modelling chain. On the basis of the generated stochastic precipitation event set, hydrological and hydraulic simulations will be performed to estimate discharge and water level. The resulting stochastic flood event set will be used to quantify the flood risk and to estimate the probable maximum loss (e.g., PML200) for a given property portfolio (buildings, industry).
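The assumed decomposition can be written compactly (notation ours; the orographic term is shown in its simplest upslope moisture-flux form from linear theory, whereas the full SPM draws all three components from fitted statistical distributions):

\[
P(x, y) = P_{\mathrm{bg}} + P_{\mathrm{oro}}(x, y) + P_{\mathrm{conv}}(x, y),
\qquad
P_{\mathrm{oro}} \approx \rho_v \,\mathbf{U} \cdot \nabla h(x, y),
\]

where \(\rho_v\) is a column moisture measure, \(\mathbf{U}\) the background wind, and \(h\) the terrain height.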
Park, Ji In
2017-01-01
The Global Burden of Disease 2010 and the WHO Global Health Estimates of years lived with disability (YLDs) use disability weights obtained from lay health-state descriptions, which cannot fully reflect different disease manifestations according to severity, treatment, and environment. The aim of this study was to provide population-representative YLDs of noncommunicable diseases and injuries using a prevalence-based approach, with the disability weight measured in subjects with specific diseases or injuries. We included a total of 44,969 adults who completed the EQ-5D questionnaire as part of their participation in the Korea National Health and Nutrition Examination Survey 2007-2014. We estimated the prevalence of each of 40 conditions identified from the noncommunicable diseases and injuries in the WHO list. A modified condition-specific disability weight was determined from the adjusted mean difference of the EQ-5D index between the condition and reference groups. Condition-specific YLDs were calculated as the condition's prevalence multiplied by the condition's disability weight. All-cause YLDs, estimated as "number of population × (1 − mean score of EQ-5D)", were 2165 thousand in 39,044 thousand adults aged ≥20. The combined YLDs for all 40 conditions accounted for 67.6% of all-cause YLDs, and were 1604, 2126, 8749, and 12,847 per 100,000 young (age 20-59) males, young females, old (age ≥60) males, and old females, respectively. Back pain/osteoarthritis YLDs were exceptionally large (442/40, 864/146, 2037/836, and 4644/3039 per 100,000 young males, young females, old males, and old females, respectively). Back pain, osteoarthritis, depression, diabetes, periodontitis, and stroke accounted for 22.3%, 9.1%, 4.6%, 3.3%, 3.2%, and 2.9% of all-cause YLDs, respectively. In conclusion, this estimation of YLDs using prevalence rates and disability weights measured in a population-representative survey may form the basis for population-level strategies to prevent age-related worsening of disability. PMID:28196151
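Both YLD formulas in this abstract are simple products. The sketch below recovers the implied mean EQ-5D index from the reported all-cause numbers and shows the condition-level form; the prevalence and weight used in the condition-level line are hypothetical.

# YLD arithmetic from the abstract's two formulas.
adults = 39_044_000                    # adults aged >= 20
all_cause_ylds = 2_165_000             # reported all-cause YLDs
mean_eq5d = 1 - all_cause_ylds / adults
print(f"implied mean EQ-5D index: {mean_eq5d:.4f}")   # ~0.9446

prev = 0.15                            # illustrative condition prevalence
dw = 0.03                              # illustrative modified disability weight
print(f"condition YLDs: {100_000 * prev * dw:,.0f} per 100,000")   # 450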
Ilbäck, N-G; Alzin, M; Jahrl, S; Enghardt-Barbieri, H; Busk, L
2003-02-01
Few sweetener intake studies have been performed on the general population, and only one study has been specifically designed to investigate diabetics and children. This report describes a Swedish study of the estimated intake of the artificial sweeteners acesulfame-K, aspartame, cyclamate, and saccharin by children (0-15 years) and adult male and female diabetics (types I and II) of various ages (16-90 years). Altogether, 1120 participants were asked to complete a questionnaire about their sweetener intake. The response rate (71%, range 59-78%) was comparable across age and gender groups. The most consumed 'light' foodstuffs were diet soda, cider, fruit syrup, table powder, table tablets, table drops, ice cream, chewing gum, throat lozenges, sweets, yoghurt, and vitamin C. The major sources of sweetener intake were beverages and table powder. About 70% of the participants, distributed equally across all age groups, read the manufacturer's specifications of the food products' content. The estimated intakes showed that neither men nor women exceeded the ADI for acesulfame-K; however, using worst-case calculations, high intakes were found in young children (169% of the ADI). In general, the aspartame intake was low. Children had the highest estimated (worst-case) intake of cyclamate (317% of the ADI). Children's estimated intake of saccharin only slightly exceeded the ADI, at the 5% level, for fruit syrup. Children had an unexpectedly high intake of tabletop sweeteners, which in Sweden are normally based on cyclamate. The study was performed during two winter months, when the intake of sweeteners can be assumed to be lower than during warm summer months; thus, the present study probably underestimates the average intake on a yearly basis. However, our worst-case calculations based on maximum permitted levels were performed for each individual sweetener, although exposure is probably relatively evenly distributed among all the sweeteners, except for cyclamate-containing table sweeteners.
CO2 storage capacity estimation: Methodology and gaps
Bachu, S.; Bonijoly, D.; Bradshaw, J.; Burruss, R.; Holloway, S.; Christensen, N.P.; Mathiassen, O.M.
2007-01-01
Implementation of CO2 capture and geological storage (CCGS) technology at the scale needed to achieve a significant and meaningful reduction in CO2 emissions requires knowledge of the available CO2 storage capacity. CO2 storage capacity assessments may be conducted at various scales (in decreasing order of size and increasing order of resolution: country, basin, regional, local, and site-specific). Estimation of the CO2 storage capacity in depleted oil and gas reservoirs is straightforward and is based on recoverable reserves, reservoir properties, and in situ CO2 characteristics. In the case of CO2-EOR, the CO2 storage capacity can be roughly evaluated on the basis of worldwide field experience or more accurately through numerical simulations. Determination of the theoretical CO2 storage capacity in coal beds is based on coal thickness, CO2 adsorption isotherms, and recovery and completion factors. Evaluation of the CO2 storage capacity in deep saline aquifers is very complex because four trapping mechanisms that act at different rates are involved and, at times, all mechanisms may be operating simultaneously. The level of detail and resolution required in the data make reliable and accurate estimation of CO2 storage capacity in deep saline aquifers practical only at the local and site-specific scales. This paper follows a previous one on issues and development of standards for CO2 storage capacity estimation, and provides a clear set of definitions and methodologies for the assessment of CO2 storage capacity in geological media. Notwithstanding the defined methodologies suggested for estimating CO2 storage capacity, major challenges lie ahead because of a lack of data, particularly for coal beds and deep saline aquifers, a lack of knowledge about the coefficients that reduce storage capacity from theoretical to effective and to practical, and a lack of knowledge about the interplay between the various trapping mechanisms at work in deep saline aquifers. © 2007 Elsevier Ltd. All rights reserved.
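For the depleted-gas-reservoir case called straightforward above, a generic volumetric form conveys the idea (this is a common simplified expression, not the paper's full methodology, and every number below is a placeholder):

# Volumetric CO2 capacity of a depleted gas reservoir (hedged sketch).
gas_reserves_sm3 = 5.0e9     # recoverable gas, standard m^3 (placeholder)
bg = 0.004                   # gas formation volume factor, res m^3 per sm^3
rho_co2 = 650.0              # CO2 density at reservoir conditions, kg/m^3
c_eff = 0.9                  # coefficient from theoretical to effective capacity

pore_volume_vacated = gas_reserves_sm3 * bg            # reservoir m^3
capacity_mt = c_eff * pore_volume_vacated * rho_co2 / 1e9
print(f"storage capacity ~ {capacity_mt:.1f} Mt CO2")

The "coefficients that reduce storage capacity from theoretical to effective and to practical" flagged at the end of the abstract enter exactly where c_eff does here.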
Menendez, Mariano E; Baker, Dustin K; Oladeji, Lasun O; Fryberger, Charles T; McGwin, Gerald; Ponce, Brent A
2015-12-16
Shoulder disorders are a common cause of disability and pain. The Shoulder Pain and Disability Index (SPADI) is a frequently employed and previously validated measure of shoulder pain and disability. Although the SPADI has high reliability and construct validity, greater differences between individual patients are often observed than would be expected on the basis of diagnosis and pathophysiology alone. This study aims to determine how psychological factors (namely depression, catastrophic thinking, and self-efficacy) affect pain and perceived disability in the shoulder. A cohort of 139 patients completed a sociodemographic survey and elements from the SPADI, Pain Self-Efficacy Questionnaire (PSEQ), Pain Catastrophizing Scale (PCS), and Patient Health Questionnaire Depression Scale (PHQ-2). Bivariate and multivariate analyses were performed to determine the association of psychosocial factors, demographic characteristics, and specific diagnosis with shoulder pain and disability. The SPADI score showed medium correlation with the PCS (r = 0.43; p < 0.001), PHQ-2 (r = 0.39; p < 0.001), and PSEQ (r = -0.45; p < 0.001). Current work status (F = 4.35; p = 0.006) and body mass index (r = 0.27; p = 0.002) were also associated with the SPADI score. In the multivariate analysis, greater catastrophic thinking (estimate, 0.003; p = 0.029), lower self-efficacy (estimate, -0.005; p = 0.001), higher body mass index (estimate, 0.006; p = 0.048), and being disabled (estimate, 0.15; p = 0.017) or retired (estimate, 0.16; p < 0.001) compared with being employed were associated with worse SPADI scores. The primary diagnosis did not have a significant relationship (p > 0.05) with the SPADI. Catastrophic thinking and decreased self-efficacy are associated with greater shoulder pain and disability. Our data support the notion that patient-to-patient variation in symptom intensity and magnitude of disability is more strongly related to psychological distress than to the specific shoulder diagnosis. Copyright © 2015 by The Journal of Bone and Joint Surgery, Incorporated.
Living on the edge: roe deer (Capreolus capreolus) density in the margins of its geographical range.
Valente, Ana M; Fonseca, Carlos; Marques, Tiago A; Santos, João P; Rodrigues, Rogério; Torres, Rita Tinoco
2014-01-01
Over the last decades roe deer (Capreolus capreolus) populations have increased in number and distribution throughout Europe. Such increases have profound impacts on ecosystems, both positive and negative. Monitoring roe deer populations is therefore essential for the appropriate management of this species, in order to achieve a balance between conservation and mitigation of the negative impacts. Despite being required for an effective management plan, the study of roe deer ecology in Portugal is at an early stage, and hence there is still a complete lack of knowledge of roe deer density within its known range. Distance sampling of pellet groups, coupled with production and decay rates for pellet groups, provided density estimates for roe deer in northeastern Portugal (Lombada National Hunting Area--LNHA, Serra de Montesinho--SM and Serra da Nogueira--SN; LNHA and SM located in Montesinho Natural Park). The estimated roe deer density using a stratified detection function was 1.23/100 ha for LNHA, 4.87/100 ha for SM and 4.25/100 ha for SN, with 95% confidence intervals (CI) of 0.68 to 2.21, 3.08 to 7.71 and 2.25 to 8.03, respectively. For the entire area, the estimated density was about 3.51/100 ha (95% CI 2.26-5.45). This method can provide estimates of roe deer density, which will ultimately support management decisions. However, effective monitoring should be based on long-term studies that are able to detect population fluctuations. This study represents the initial phase of roe deer monitoring at the edge of its European range and intends to fill the gap in this species' ecology, as the gathering of similar data over a number of years will provide the basis for stronger inferences. Monitoring should be continued, although the study area should be increased to evaluate the accuracy of estimates and assess the impact of management actions.
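The pellet-group-to-density conversion underlying this kind of study can be sketched as a generic fecal standing crop calculation. The production and persistence rates below are hypothetical placeholders, not the authors' inputs.

    def deer_per_100ha(pg_density_per_ha, groups_per_deer_per_day, persistence_days):
        """Convert pellet-group density to animal density (fecal standing crop logic)."""
        deer_per_ha = pg_density_per_ha / (groups_per_deer_per_day * persistence_days)
        return 100.0 * deer_per_ha

    # Hypothetical inputs: 40 pellet groups/ha, 20 groups/deer/day, 50-day persistence.
    print(deer_per_100ha(40.0, 20.0, 50.0))  # -> 4.0 deer per 100 ha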
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Reported Energy and Cost Savings from the DOE ESPC Program: FY 2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Slattery, Bob S.
2015-03-01
The objective of this work was to determine the realization rate of energy and cost savings from the Department of Energy’s Energy Savings Performance Contract (ESPC) program based on information reported by the energy services companies (ESCOs) that are carrying out ESPC projects at federal sites. Information was extracted from 156 Measurement and Verification (M&V) reports to determine reported, estimated, and guaranteed cost savings and reported and estimated energy savings for the previous contract year. Because the quality of the reports varied, it was not possible to determine all of these parameters for each project. For all 156 projects, there was sufficient information to compare estimated, reported, and guaranteed cost savings. For this group, the total estimated cost savings for the reporting periods addressed were $210.6 million, total reported cost savings were $215.1 million, and total guaranteed cost savings were $204.5 million. This means that on average: ESPC contractors guaranteed 97% of the estimated cost savings; projects reported achieving 102% of the estimated cost savings; and projects reported achieving 105% of the guaranteed cost savings. For 155 of the projects examined, there was sufficient information to compare estimated and reported energy savings. On the basis of site energy, estimated savings for those projects for the previous year totaled 11.938 million MMBtu, and reported savings were 12.138 million MMBtu, 101.7% of the estimated energy savings. On the basis of source energy, total estimated energy savings for the 155 projects were 19.052 million MMBtu, and reported savings were 19.516 million MMBtu, 102.4% of the estimated energy savings.
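The realization rates quoted above follow directly from the reported totals; a quick, purely illustrative Python check:

    estimated, reported, guaranteed = 210.6, 215.1, 204.5   # $ millions, from the report
    print(f"guaranteed/estimated: {100 * guaranteed / estimated:.0f}%")   # ~97%
    print(f"reported/estimated:   {100 * reported / estimated:.0f}%")     # ~102%
    print(f"reported/guaranteed:  {100 * reported / guaranteed:.0f}%")    # ~105%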
Hydrocele repair - series (image)
Surgery usually completely corrects the defect, and the long-term prognosis is excellent. Hydrocele repair is done on an outpatient basis, and recovery is usually brief. Most children can return to ...
The South African Tuberculosis Care Cascade: Estimated Losses and Methodological Challenges.
Naidoo, Pren; Theron, Grant; Rangaka, Molebogeng X; Chihota, Violet N; Vaughan, Louise; Brey, Zameer O; Pillay, Yogan
2017-11-06
While tuberculosis incidence and mortality are declining in South Africa, meeting the goals of the End TB Strategy requires an invigorated programmatic response informed by accurate data. Enumerating the losses at each step in the care cascade enables appropriate targeting of interventions and resources. We estimated the tuberculosis burden; the number and proportion of individuals with tuberculosis who accessed tests, had tuberculosis diagnosed, initiated treatment, and successfully completed treatment for all tuberculosis cases, for those with drug-susceptible tuberculosis (including human immunodeficiency virus (HIV)-coinfected cases), and for those with rifampicin-resistant tuberculosis. Estimates were derived from national electronic tuberculosis register data, laboratory data, and published studies. The overall tuberculosis burden was estimated to be 532,005 cases (range, 333,760-764,480 cases), with successful completion of treatment in 53% of cases. Losses occurred at multiple steps: 5% at test access, 13% at diagnosis, 12% at treatment initiation, and 17% at successful treatment completion. Overall losses were similar among all drug-susceptible cases and those with HIV coinfection (54% and 52%, respectively, successfully completed treatment). Losses were substantially higher among rifampicin-resistant cases, with only 22% successfully completing treatment. Although the vast majority of individuals with tuberculosis engaged the public health system, just over half were successfully treated. Urgent efforts are required to improve implementation of existing policies and protocols to close gaps in tuberculosis diagnosis, treatment initiation, and successful treatment completion. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
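Read as percentage-point losses against the total burden, the cascade numbers above reconcile as follows (an illustrative check, not from the paper):

    burden = 532005                 # estimated cases
    losses_pct = {"test access": 5, "diagnosis": 13,
                  "treatment initiation": 12, "successful completion": 17}
    completed_pct = 100 - sum(losses_pct.values())          # -> 53
    print(completed_pct, round(burden * completed_pct / 100))  # -> 53, ~281,963 treated successfully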
A theoretical study of bond selective photochemistry in CH2BrI
NASA Astrophysics Data System (ADS)
Liu, Kun; Zhao, Hongmei; Wang, Caixia; Zhang, Aihua; Ma, Siyu; Li, Zonghe
2005-01-01
Bromoiodomethane photodissociation in the low-lying excited states has been characterized using unrestricted Hartree-Fock, configuration-interaction-singles, and complete active space self-consistent field calculations with the SDB-aug-cc-pVTZ, aug-cc-pVTZ, and 3-21G** basis sets. According to the results for the vertical excitation energies and oscillator strengths of these low-lying excited states, bond selectivity is predicted. Subsequently, the minimum energy paths of the first excited singlet state and the third excited state for the dissociation reactions were calculated using the complete active space self-consistent field method with the 3-21G** basis set. Good agreement is found between the calculations and experimental data. The relationships of excitations, the electronic structures at Franck-Condon points, and bond selectivity are discussed.
Analytic model of a multi-electron atom
NASA Astrophysics Data System (ADS)
Skoromnik, O. D.; Feranchuk, I. D.; Leonau, A. U.; Keitel, C. H.
2017-12-01
A fully analytical approximation for the observable characteristics of many-electron atoms is developed via a complete and orthonormal hydrogen-like basis with a single effective-charge parameter for all electrons of a given atom. The basis completeness allows us to employ the secondary-quantized representation for the construction of regular perturbation theory, which includes correlation effects in a natural way, converges fast, and enables an effective calculation of the subsequent corrections. The hydrogen-like basis set provides a possibility to perform all summations over intermediate states in closed form, including both the discrete and continuous spectra. This is achieved with the help of the decomposition of the multi-particle Green function into a convolution of single-electron Coulomb Green functions. We demonstrate that our fully analytical zeroth-order approximation describes the whole spectrum of the system and provides accuracy that is independent of the number of electrons, which is important for applications where the Thomas-Fermi model is still utilized. In addition, already in second-order perturbation theory our results become comparable with those of a multi-configuration Hartree-Fock approach.
Zhao, Chunyu; Burge, James H
2007-12-24
Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
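The paper's construction is analytic, but the orthonormalization step can be illustrated numerically. Below is a generic discrete Gram-Schmidt over sampled vector fields, a sketch of the general technique rather than the authors' closed-form functions; the sample points, weights, and gradient fields in the usage example are hypothetical.

    import numpy as np

    def gram_schmidt(fields, weights):
        """Orthonormalize sampled vector fields (each of shape (N, 2)) under the
        discrete inner product <U, V> = sum_i w_i * (U_i . V_i)."""
        ortho = []
        for v in fields:
            v = v.astype(float)
            for u in ortho:
                v = v - np.sum(weights[:, None] * u * v) * u   # subtract projection
            norm = np.sqrt(np.sum(weights[:, None] * v * v))
            ortho.append(v / norm)
        return ortho

    # Example: orthonormalize gradients of x and x*y sampled inside the unit disk.
    pts = np.random.rand(2000, 2) * 2 - 1
    pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]
    w = np.full(len(pts), 1.0 / len(pts))
    grads = [np.tile([1.0, 0.0], (len(pts), 1)),            # gradient of x
             np.stack([pts[:, 1], pts[:, 0]], axis=1)]      # gradient of x*y
    basis = gram_schmidt(grads, w)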
Ghosh, Jo Kay C.; Wilhelm, Michelle; Su, Jason; Goldberg, Daniel; Cockburn, Myles; Jerrett, Michael; Ritz, Beate
2012-01-01
Few studies have examined associations of birth outcomes with toxic air pollutants (air toxics) in traffic exhaust. This study included 8,181 term low birth weight (LBW) children and 370,922 term normal-weight children born between January 1, 1995, and December 31, 2006, to women residing within 5 miles (8 km) of an air toxics monitoring station in Los Angeles County, California. Additionally, land-use-based regression (LUR)-modeled estimates of levels of nitric oxide, nitrogen dioxide, and nitrogen oxides were used to assess the influence of small-area variations in traffic pollution. The authors examined associations with term LBW (≥37 weeks’ completed gestation and birth weight <2,500 g) using logistic regression adjusted for maternal age, race/ethnicity, education, parity, infant gestational age, and gestational age squared. Odds of term LBW increased 2%–5% (95% confidence intervals ranged from 1.00 to 1.09) per interquartile-range increase in LUR-modeled estimates and monitoring-based air toxics exposure estimates in the entire pregnancy, the third trimester, and the last month of pregnancy. Models stratified by monitoring station (to investigate air toxics associations based solely on temporal variations) resulted in 2%–5% increased odds per interquartile-range increase in third-trimester benzene, toluene, ethyl benzene, and xylene exposures, with some confidence intervals containing the null value. This analysis highlights the importance of both spatial and temporal contributions to air pollution in epidemiologic birth outcome studies. PMID:22586068
Targeting a Complex Transcriptome: The Construction of the Mouse Full-Length cDNA Encyclopedia
Carninci, Piero; Waki, Kazunori; Shiraki, Toshiyuki; Konno, Hideaki; Shibata, Kazuhiro; Itoh, Masayoshi; Aizawa, Katsunori; Arakawa, Takahiro; Ishii, Yoshiyuki; Sasaki, Daisuke; Bono, Hidemasa; Kondo, Shinji; Sugahara, Yuichi; Saito, Rintaro; Osato, Naoki; Fukuda, Shiro; Sato, Kenjiro; Watahiki, Akira; Hirozane-Kishikawa, Tomoko; Nakamura, Mari; Shibata, Yuko; Yasunishi, Ayako; Kikuchi, Noriko; Yoshiki, Atsushi; Kusakabe, Moriaki; Gustincich, Stefano; Beisel, Kirk; Pavan, William; Aidinis, Vassilis; Nakagawara, Akira; Held, William A.; Iwata, Hiroo; Kono, Tomohiro; Nakauchi, Hiromitsu; Lyons, Paul; Wells, Christine; Hume, David A.; Fagiolini, Michela; Hensch, Takao K.; Brinkmeier, Michelle; Camper, Sally; Hirota, Junji; Mombaerts, Peter; Muramatsu, Masami; Okazaki, Yasushi; Kawai, Jun; Hayashizaki, Yoshihide
2003-01-01
We report the construction of the mouse full-length cDNA encyclopedia, the most extensive view of a complex transcriptome, on the basis of preparing and sequencing 246 libraries. Before cloning, cDNAs were enriched in full-length by Cap-Trapper, and in most cases, aggressively subtracted/normalized. We have produced 1,442,236 successful 3′-end sequences clustered into 171,144 groups, from which 60,770 clones were fully sequenced cDNAs annotated in the FANTOM-2 annotation. We have also produced 547,149 5′-end reads, which clustered into 124,258 groups. Altogether, these cDNAs were further grouped in 70,000 transcriptional units (TU), which represent the best coverage of a transcriptome so far. By monitoring the extent of normalization/subtraction, we define the tentative equivalent coverage (TEC), which was estimated to be equivalent to >12,000,000 ESTs derived from standard libraries. High coverage explains discrepancies between the very large numbers of clusters (and TUs) of this project, which also include non-protein-coding RNAs, and the lower gene number estimation of genome annotations. Altogether, 5′-end clusters identify regions that are potential promoters for 8637 known genes and 5′-end clusters suggest the presence of almost 63,000 transcriptional starting points. An estimate of the frequency of polyadenylation signals suggests that at least half of the singletons in the EST set represent real mRNAs. Clones accounting for about half of the predicted TUs await further sequencing. The continued high-discovery rate suggests that the task of transcriptome discovery is not yet complete. PMID:12819125
A Reexamination of the Emergy Input to a System from the ...
The wind energy absorbed in the global boundary layer (GBL, 900 mb surface) is the basis for calculating the wind emergy input for any system on the Earth's surface. Estimates of the wind emergy input to a system depend on the amount of wind energy dissipated, which can have a range of magnitudes for a given velocity depending on surface drag and atmospheric stability at the location and time period under study. In this study, we develop a method to consider this complexity in estimating the emergy input to a system from the wind. A new calculation of the transformity of the wind energy dissipated in the GBL (900 mb surface) based on general models of atmospheric circulation in the planetary boundary layer (PBL, 100 mb surface) is presented and expressed on the 12.0E+24 seJ y-1 geobiosphere baseline to complete the information needed to calculate the emergy input from the wind to the GBL of any system. The average transformity of wind energy dissipated in the GBL (below 900 mb) was 1241 ± 650 seJ J-1. The analysis showed that the transformity of the wind varies over the course of a year, such that summer processes may require a different wind transformity than processes occurring with a winter or annual time boundary. This is a paper in the proceedings of Emergy Synthesis 9; thus it will be available online for those interested in this subject. The paper describes a new and more accurate way to estimate the wind energy input to any system. It also has a new cal
NASA Astrophysics Data System (ADS)
Yang, Wan; Kominz, Michelle A.
2003-01-01
The Cisco Group on the Eastern Shelf of the Midland Basin is composed of fluvial, deltaic, shelf, shelf-margin, and slope-to-basin carbonate and siliciclastic rocks. Sedimentologic and stratigraphic analyses of 181 meter-to-decimeter-scale depositional sequences exposed in the up-dip shelf indicated that the siliciclastic and carbonate parasequences in the transgressive systems tracts (TST) are thin and upward deepening, whereas those in highstand systems tracts (HST) are thick and upward shallowing. The sequences can be subdivided into five types on the basis of principal lithofacies, and exhibit variable magnitude of facies shift corresponding to variable extents of marine transgression and regression on the shelf. The sequence stacking patterns and their regional persistence suggest a three-level sequence hierarchy controlled by eustasy, whereas local and regional changes in lithology, thickness, and sequence type, magnitude, and absence were controlled by interplay of eustasy, differential shelf subsidence, depositional topography, and pattern of siliciclastic supply. The outcropping Cisco Group is highly incomplete with an estimated 6-11% stratigraphic completeness. The average duration of deposition of the major (third-order) sequences is estimated as 67-102 ka on the up-dip shelf and increases down dip, while the average duration of the major sequence boundaries (SB) is estimated as 831-1066 ka and decreases down dip. The nondepositional and erosional hiatus on the up-dip shelf was represented by lowstand deltaic systems in the basin and slope.
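The completeness figures can be reproduced from the durations quoted above by reading each major sequence plus its bounding hiatus as one cycle (my arithmetic, for illustration only):

    # Deposition duration vs. hiatus duration for the major (third-order) cycles.
    for seq_ka, gap_ka in [(67, 831), (102, 1066)]:
        completeness = 100.0 * seq_ka / (seq_ka + gap_ka)
        print(f"{completeness:.1f}%")   # ~7.5% and ~8.7%, inside the 6-11% range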
Rodda, Gordon H.; Dean-Bradley, Kathryn; Campbell, Earl W.; Fritts, Thomas H.; Lardner, Bjorn; Yackel Adams, Amy A.; Reed, Robert N.
2015-01-01
To obtain quantitative information about population dynamics from counts of animals, the per capita detectabilities of each species must remain constant over the course of monitoring. We characterized lizard detection constancy for four species over 17 yr from a single site in northern Guam, a relatively benign situation because detection was relatively easy and we were able to hold constant the site, habitat type, species, season, and sampling method. We monitored two species of diurnal terrestrial skinks (Carlia ailanpalai [Curious Skink], Emoia caeruleocauda [Pacific Bluetailed Skink]) using glueboards placed on the ground in the shade for 3 h on rainless mornings, yielding 10,286 skink captures. We additionally monitored two species of nocturnal arboreal geckos (Hemidactylus frenatus [Common House Gecko]; Lepidodactylus lugubris [Mourning Gecko]) on the basis of 15,212 sightings. We compared these count samples to a series of complete censuses we conducted from four or more total removal plots (everything removed to mineral soil) totaling 400 m² (about 1% of the study site) in each of the years 1995, 1999, and 2012, providing time-stamped quantification of detectability for each species. Unfortunately, the actual population trajectories taken by the four species were masked by unexplained variation in detectability. This observation of debilitating latent variability in lizard detectability under nearly ideal conditions undercuts our trust in population estimation techniques that fail to quantify venue-specific detectability, rely on pooled detection probability estimates, or assume that modulation in predefined environmental covariates suffices for estimating detectability.
Determining bioavailability of food folates in a controlled intervention study.
Hannon-Fletcher, Mary P; Armstrong, Nicola C; Scott, John M; Pentieva, Kristina; Bradbury, Ian; Ward, Mary; Strain, J J; Dunn, Adele A; Molloy, Anne M; Kerr, Maeve A; McNulty, Helene
2004-10-01
The concept of dietary folate equivalents (DFEs) in the United States recognizes the differences in bioavailability between natural food folates and the synthetic vitamin, folic acid. However, many published reports on folate bioavailability are problematic because of several confounding factors. We compared the bioavailability of food folates with that of folic acid under controlled conditions. To broadly represent the extent to which natural folates are conjugated in foods, we used 2 natural sources of folate, spinach (50% polyglutamyl folate) and yeast (100% polyglutamyl folate). Ninety-six men were randomly assigned according to their screening plasma homocysteine (tHcy) concentration to 1 of 4 treatment groups for an intervention period of 30 d. Each subject received (daily under supervision) either a folate-depleted "carrier" meal or a drink plus 1) placebo tablet, 2) 200 microg folic acid in a tablet, 3) 200 microg natural folate provided as spinach, or 4) 200 microg natural folate provided as yeast. Among the subjects who completed the intervention, responses (increase in serum folate, lowering of tHcy) relative to those in the placebo group (n = 18) were significant in the folic acid group (n = 18) but not in the yeast folate (n = 19) or the spinach folate (n = 18) groups. Both natural sources of folate were significantly less bioavailable than was folic acid. Overall estimations of folate bioavailability relative to that of folic acid were found to be between 30% (spinach) and 59% (yeast). Relative bioavailability estimates were consistent with the estimates from the metabolic study that were used as a basis to derive the US DFE value.
A new parallel algorithm of MP2 energy calculations.
Ishimura, Kazuya; Pulay, Peter; Nagase, Shigeru
2006-03-01
A new parallel algorithm has been developed for second-order Møller-Plesset perturbation theory (MP2) energy calculations. Its main projected applications are for large molecules, for instance, for the calculation of dispersion interaction. Tests on a moderate number of processors (2-16) show that the program has high CPU and parallel efficiency. Timings are presented for two relatively large molecules, taxol (C47H51NO14) and luciferin (C11H8N2O3S2), the former with the 6-31G* and 6-311G** basis sets (1,032 and 1,484 basis functions, 164 correlated orbitals), and the latter with the aug-cc-pVDZ and aug-cc-pVTZ basis sets (530 and 1,198 basis functions, 46 correlated orbitals). An MP2 energy calculation on C130H10 (1,970 basis functions, 265 correlated orbitals) completed in less than 2 h on 128 processors.
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
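The following sketch implements one reading of the projection step described above: eigendecompose the candidate matrix, then zero out negative eigenvalues while spreading the deficit over the remaining ones. Details may differ from the published pseudocode, and the example matrix is hypothetical.

    import numpy as np

    def nearest_density_matrix(mu):
        # mu: Hermitian candidate with trace ~1 (may have negative eigenvalues).
        vals, vecs = np.linalg.eigh(mu)        # eigenvalues in ascending order
        lam = vals.astype(float)
        d = len(lam)
        acc, i = 0.0, 0
        # Clip negative eigenvalues, redistributing the deficit over the rest.
        while lam[i] + acc / (d - i) < 0:
            acc += lam[i]
            lam[i] = 0.0
            i += 1
        lam[i:] += acc / (d - i)
        return (vecs * lam) @ vecs.conj().T    # reassemble rho with the clipped spectrum

    # Hypothetical unit-trace candidate with one negative eigenvalue:
    mu = np.array([[1.1, 0.5], [0.5, -0.1]])
    rho = nearest_density_matrix(mu)
    print(np.linalg.eigvalsh(rho))             # -> approximately [0., 1.]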
Estimating the probability of survival of individual shortleaf pine (Pinus echinata mill.) trees
Sudip Shrestha; Thomas B. Lynch; Difei Zhang; James M. Guldin
2012-01-01
A survival model is needed in a forest growth system that predicts the survival of trees on an individual basis or on a stand basis (Gertner, 1989). An individual-tree modeling approach is one of the better methods available for predicting growth and yield, as it provides essential information about particular tree species: tree size, tree quality and tree present status...
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
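For orientation, the core NISP idea (projecting a response onto an orthogonal polynomial basis via quadrature) can be shown in a one-dimensional toy; this is not the adaptive sparse-grid FANISP algorithm, just the underlying projection, using numpy's probabilists' Hermite module.

    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    def pce_coefficients(f, order, n_quad=40):
        """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
        x, w = hermegauss(n_quad)            # nodes/weights for weight exp(-x^2/2)
        w = w / np.sqrt(2.0 * np.pi)         # re-normalize to the standard normal pdf
        fx = f(x)
        coeffs = []
        for k in range(order + 1):
            basis = np.zeros(k + 1)
            basis[k] = 1.0                   # select He_k
            hk = hermeval(x, basis)
            coeffs.append(np.sum(w * fx * hk) / math.factorial(k))  # <f, He_k> / k!
        return np.array(coeffs)

    # Sanity check: for f(x) = exp(x), the exact coefficients are exp(1/2) / k!.
    print(pce_coefficients(np.exp, 3))       # ~[1.649, 1.649, 0.824, 0.275]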
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Wolfgang, E-mail: Wagner@thermo.rub.de; Thol, Monika
2015-12-15
Over the past several years, considerable scientific and technical interest has been focused on accurate thermodynamic properties of fluid water covering part of the subcooled (metastable) region and the stable liquid from the melting line up to about 300 K and pressures up to several hundred MPa. Between 2000 and 2010, experimental density data were published whose accuracy was not completely clear. The scientific standard equation of state for fluid water, the IAPWS-95 formulation, was developed on the basis of experimental data for thermodynamic properties that were available by 1995. In this work, it is examined how IAPWS-95 behaves with respect to the experimental data published after 1995. This investigation is carried out for temperatures from 250 to 300 K and pressures up to 400 MPa. The starting point is the assessment of the current data situation. This was mainly performed on the basis of data for the density, expansivity, compressibility, and isobaric heat capacity, which were derived in 2015 from very accurate speed-of-sound data. Apart from experimental data and these derived data, property values calculated from the recently published equation of state for this region of Holten et al. (2014) were also used. As a result, the unclear data situation could be clarified, and uncertainty values could be estimated for the investigated properties. In the region described above, detailed comparisons show that IAPWS-95 is able to represent the latest experimental data for the density, expansivity, compressibility, speed of sound, and isobaric heat capacity to within the uncertainties given in the release on IAPWS-95. Since the release does not contain uncertainty estimates for expansivities and compressibilities, the statement relates to the error propagation of the given uncertainty in density. Due to the lack of experimental data for the isobaric heat capacity for pressures above 100 MPa, no uncertainty estimates are given in the release for this pressure range. Results of the investigation of IAPWS-95 concerning its behavior with regard to the isobaric heat capacity in the high-pressure low-temperature region are also presented. Comparisons with very accurate speed-of-sound data published in 2012 showed that the uncertainty estimates of IAPWS-95 in speed of sound could be decreased for temperatures from 283 to 473 K and pressures up to 400 MPa.
Cost-effectiveness of a Ceramide-Infused Skin Barrier Versus a Standard Barrier
Berger, Ariel; Inglese, Gary; Skountrianos, George; Karlsmark, Tonny; Oguz, Mustafa
2018-01-01
PURPOSE: To assess the cost-effectiveness of a ceramide-infused skin barrier (CIB) versus other skin barriers (standard of care) among patients who have undergone ostomy creation. DESIGN: Cost-effectiveness analysis, based on a decision-analytic model that was estimated using data from the ADVOCATE (A Study Determining Variances in Ostomy Skin Conditions And The Economic Impact) trial, which investigated stoma-related healthcare costs over 12 weeks among patients who recently underwent fecal ostomy, and from other sources. SUBJECTS AND SETTING: Analysis was based on a hypothetical cohort of 1000 patients who recently underwent fecal ostomy; over a 1-year period, 500 patients were assumed to use CIB and 500 were assumed to use standard of care. METHODS: We adapted a previous economic model to estimate expected 1-year costs and outcomes among persons with a new ostomy assumed to use CIB versus standard of care. Outcomes of interest included peristomal skin complications (PSCs) (up to 2 during the 1-year period of interest) and quality-adjusted life days (QALDs); QALDs vary from 1, indicating a day of perfect health, to 0, indicating a day with the lowest possible health (deceased). Subjects were assigned QALDs on a daily basis, with the value of the QALD on any given day based on whether the patient was experiencing a PSC. Costs included those related to skin barriers, ostomy accessories, and care of PSCs. The incremental cost-effectiveness of CIB versus standard of care was estimated as the incremental cost per PSC averted and QALD gained, respectively; the net monetary benefit of CIB was also estimated. All analyses were run from the perspective of an Australian payer. RESULTS: On a per-patient basis, use of CIB was expected over a 1-year period to result in 0.16 fewer PSCs, an additional 0.35 QALDs, and a savings of A$180 (Australian dollars; US $137) in healthcare costs, all versus standard of care. Management with CIB provided a net monetary benefit (calculated as the product of the maximum willingness to pay for 1 QALD times the additional QALDs with CIB, less the incremental cost of CIB) of A$228 (US $174). Probabilistic sensitivity analysis was also completed; it revealed that 97% of model runs resulted in fewer expected PSCs with CIB; 92% of these runs resulted in lower expected costs with CIB. CONCLUSIONS: Findings suggest that the CIB is a cost-effective skin barrier for persons living with an ostomy. PMID:29438140
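As a numeric illustration of the net monetary benefit formula stated in the results (not taken from the paper): with the reported 0.35 additional QALDs and A$180 cost savings, a willingness-to-pay of roughly A$137 per QALD reproduces the A$228 figure. That willingness-to-pay value is my inference and should be treated as hypothetical.

    delta_qald = 0.35      # additional quality-adjusted life days with CIB (reported)
    delta_cost = -180.0    # incremental cost in A$; negative because CIB saves money (reported)
    wtp_per_qald = 137.0   # hypothetical A$/QALD, inferred to match the reported NMB
    nmb = wtp_per_qald * delta_qald - delta_cost
    print(round(nmb))      # -> 228, matching the reported net monetary benefit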
Atoms in molecules, an axiomatic approach. I. Maximum transferability
NASA Astrophysics Data System (ADS)
Ayers, Paul W.
2000-12-01
Central to chemistry is the concept of transferability: the idea that atoms and functional groups retain certain characteristic properties in a wide variety of environments. Providing a completely satisfactory mathematical basis for the concept of atoms in molecules, however, has proved difficult. The present article pursues an axiomatic basis for the concept of an atom within a molecule, with particular emphasis devoted to the definition of transferability and the atomic description of Hirshfeld.
Hu, Yu; Chen, Yaping
2017-01-01
Vaccination coverage in Zhejiang province, east China, is evaluated through repeated coverage surveys. The Zhejiang provincial immunization information system (ZJIIS) was established in 2004 with links to all immunization clinics. ZJIIS has become an alternative means to quickly assess vaccination coverage. To assess the current completeness and accuracy of the vaccination coverage estimates derived from ZJIIS, we compared the estimates from ZJIIS with those from the most recent provincial coverage survey in 2014, which combined interview data with verified data from ZJIIS. Of the 2772 children enrolled in the 2014 provincial survey, the proportions of children with vaccination cards and registered in ZJIIS were 94.0% and 87.4%, respectively. Coverage estimates from ZJIIS were systematically higher than the corresponding estimates obtained through the survey, with a mean difference of 4.5%. Of the vaccination doses registered in ZJIIS, 16.7% differed from the date recorded in the corresponding vaccination cards. Under-registration in ZJIIS significantly influenced the coverage estimates derived from ZJIIS. Therefore, periodic coverage surveys currently provide more complete and reliable results than estimates based on ZJIIS alone. However, further improvement of the completeness and accuracy of ZJIIS will likely allow more reliable and timely estimates in the future. PMID:28696387
Molecular basis for the Kallmann syndrome-linked fibroblast growth factor receptor mutation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thurman, Ryan D.; Kathir, Karuppanan Muthusamy; Rajalingam, Dakshinamurthy
Highlights: • The structural basis of the Kallmann syndrome is elucidated. • The Kallmann syndrome mutation (A168S) induces a subtle conformational change(s). • Structural interactions mediated by beta-sheet G are most perturbed. • Ligand (FGF)-receptor interaction(s) is completely abolished by the Kallmann mutation. • The Kallmann mutation directly affects the FGF signaling process. -- Abstract: Kallmann syndrome (KS) is a developmental disease that expresses in patients as hypogonadotropic hypogonadism and anosmia. KS is commonly associated with mutations in the extracellular D2 domain of the fibroblast growth factor receptor (FGFR). In this study, for the first time, the molecular basis for the FGFR-associated KS mutation (A168S) is elucidated using a variety of biophysical experiments, including multidimensional NMR spectroscopy. Secondary and tertiary structural analysis using far-UV circular dichroism, fluorescence, and limited trypsin digestion assays suggests that the KS mutation induces a subtle tertiary structure change in the D2 domain of FGFR. Results of isothermal titration calorimetry experiments show that the KS mutation causes a 10-fold decrease in heparin binding affinity and also a complete loss in ligand (FGF-1) binding. ¹H-¹⁵N chemical shift perturbation data suggest that the complete loss in ligand (FGF) binding affinity is triggered by a subtle conformational change that disrupts crucial structural interactions in both the heparin and the FGF binding sites in the D2 domain of FGFR. The novel findings reported in this study are expected to provide valuable clues toward a complete understanding of the other genetic diseases linked to mutations in the FGFR.
Milivojevic, Ana; Corovic, Marija; Carevic, Milica; ...
2017-09-23
Solubility and stability of flavonoid glycosides, valuable natural constituents of cosmetics and pharmaceuticals, could be improved by lipase-catalyzed acylation. The focus of this study was on the development of an eco-friendly process for the production of flavonoid acetates. By using phloridzin as a model compound and triacetin as acetyl donor and solvent, 100% conversion and high productivity (23.32 g l⁻¹ day⁻¹) were accomplished. Complete conversions of two other glycosylated flavonoids, naringin and esculin, were achieved in the solvent-free system as well. A comprehensive kinetic mechanism based on two consecutive mono-substrate reactions was established, in which the first reaction represents the formation of the flavonoid monoacetate and in the second the diacetate is produced from the monoacetate. Both steps were regarded as reversible Michaelis-Menten reactions without inhibition. Apparent kinetic parameters for the two consecutive reactions (Vm constants for substrates and products and Km constants for forward and reverse reactions) were estimated for the three examined acetyl acceptors, and excellent fitting of experimental data (R² > 0.97) was achieved. The obtained results showed that the derived kinetic model could be applicable to solvent-free esterifications of different flavonoid glycosides. As a result, it was valid for the entire transesterification course (72 h of reaction), which, combined with complete conversions and the green character of the synthesis, represents a firm basis for further process development.
Self-perception of competencies in adolescents with autism spectrum disorders.
Furlano, Rosaria; Kelley, Elizabeth A; Hall, Layla; Wilson, Daryl E
2015-12-01
Research has demonstrated that, despite difficulties in multiple domains, children with autism spectrum disorders (ASD) show a lack of awareness of these difficulties. A misunderstanding of poor competencies may make it difficult for individuals to adjust their behaviour in accordance with feedback and may lead to greater impairments over time. This study examined self-perceptions of adolescents with ASD (n = 19) and typically developing (TD) mental-age-matched controls (n = 22) using actual performance on objective academic tasks as the basis for ratings. Before completing the tasks, participants were asked how well they thought they would do (pre-task prediction). After completing each task, they were asked how well they thought they did (immediate post-performance) and how well they would do in the future (hypothetical future post-performance). Adolescents with ASD had more positively biased self-perceptions of competence than TD controls. The ASD group tended to overestimate their performance on all ratings of self-perceptions (pre-task prediction, immediate, and hypothetical future post-performance). In contrast, while the TD group was quite accurate at estimating their performance immediately before and after performing the task, they showed some tendency to overestimate their future performance. Future investigation is needed to systematically examine possible mechanisms that may be contributing to these biased self-perceptions. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
McLendon, Michael K.; Tuchmayer, Jeremy B.; Park, Toby J.
2010-01-01
This article reports the findings of an exploratory analysis of state policy climates for college student persistence and completion. We performed an analysis of more than 100 documents collected from 8 states chosen largely on the basis of their performance on past "Measuring Up" reports. Our analysis of governors' state-of-the-state…
Theoretical study of the XP3 (X = Al, B, Ga) clusters
NASA Astrophysics Data System (ADS)
Ueno, Leonardo T.; Lopes, Cinara; Malaspina, Thaciana; Roberto-Neto, Orlando; Canuto, Sylvio; Machado, Francisco B. C.
2012-05-01
The lowest singlet and triplet states of AlP3, GaP3 and BP3 molecules with Cs, C2v and C3v symmetries were characterized using the B3LYP functional and the aug-cc-pVTZ and aug-cc-pVQZ correlation-consistent basis sets. Geometrical parameters and vibrational frequencies were calculated and compared to existing experimental and theoretical data. Relative energies were obtained with single-point CCSD(T) calculations using the aug-cc-pVTZ, aug-cc-pVQZ and aug-cc-pV5Z basis sets, and then extrapolated to the complete basis set (CBS) limit.
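The abstract does not say which extrapolation formula was used; a common two-point scheme (Helgaker-style inverse-cube dependence on the basis-set cardinal number X) looks like this, with hypothetical correlation energies in the example:

    def cbs_two_point(e_x, x, e_y, y):
        """Two-point CBS extrapolation, assuming E_X = E_CBS + A / X**3
        for cardinal numbers X (T=3, Q=4, 5=5); requires x > y."""
        return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

    # Example with a hypothetical Q/T pair of correlation energies (hartree):
    print(cbs_two_point(-0.512340, 4, -0.509876, 3))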
Community Health Risk Assessment of Primary Aluminum Smelter Emissions
Larivière, Claude
2014-01-01
Objective: Primary aluminum production is an industrial process with high potential health risk for workers. We consider in this article how to assess community health risks associated with primary aluminum smelter emissions. Methods: We reviewed the literature on health effects, community exposure data, and dose–response relationships of the principal hazardous agents emitted. Results: On the basis of representative measured community exposure levels, we were able to make rough estimates on health risks associated with specific agents and categorize these as none, low, medium, or high. Conclusions: It is possible to undertake a rough-estimate community Health Risk Assessment for individual smelters on the basis of information available in the epidemiological literature and local community exposure data. PMID:24806724
A complex valued radial basis function network for equalization of fast time varying channels.
Gan, Q; Saratchandran, P; Sundararajan, N; Subramanian, K R
1999-01-01
This paper presents a complex-valued radial basis function (RBF) network for equalization of fast time-varying channels. A new method for calculating the centers of the RBF network is given. The method allows fixing the number of RBF centers even as the equalizer order is increased, so that good performance is obtained by a high-order RBF equalizer with a small number of centers. Simulations are performed on time-varying channels using a Rayleigh fading channel model to compare the performance of our RBF equalizer with an adaptive maximum-likelihood sequence estimator (MLSE) consisting of a channel estimator and an MLSE implemented by the Viterbi algorithm. The results show that the RBF equalizer produces superior performance with less computational complexity.
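For orientation, a generic complex-valued RBF forward pass can be sketched as below. This is not the paper's center-selection method, and all values in the example are hypothetical.

    import numpy as np

    def rbf_equalizer_output(x, centers, weights, sigma):
        """Complex-valued RBF network: Gaussian hidden units on the complex
        received-signal vector x, combined with complex output weights."""
        # Squared complex Euclidean distance |x - c|^2 for each center.
        d2 = np.array([np.sum(np.abs(x - c) ** 2) for c in centers])
        phi = np.exp(-d2 / (2.0 * sigma ** 2))   # real-valued activations
        return np.dot(weights, phi)              # complex linear combination

    # Hypothetical two-center example on a length-2 received vector:
    x = np.array([1 + 1j, -1 + 0.5j])
    centers = [np.array([1 + 1j, -1 + 1j]), np.array([-1 - 1j, 1 - 1j])]
    weights = np.array([1 + 0j, -1 + 0j])
    print(rbf_equalizer_output(x, centers, weights, sigma=1.0))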
Systematics of ground state multiplets of atomic nuclei in the delta-interaction approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imasheva, L. T.; Ishkhanov, B. S.; Stepanov, M. E., E-mail: stepanov@depni.sinp.msu.ru
2015-12-15
Pairing forces between nucleons in an atomic nucleus strongly influence its structure. One of the manifestations of pair interaction is the ground state multiplet (GSM) formation in the spectrum of low-lying excited states of even-even nuclei. The value of GSM splitting is determined by the strength of the pair interaction of nucleons; for each isotope, it can be estimated on the basis of experimental nuclear masses. The quality of this estimate is characterized by the degree of reproduction of GSM levels in the nucleus. The GSM systematics in even-even nuclei with a pair of identical nucleons in addition to the filled nuclear core is considered on the basis of the delta-interaction approach.
40 CFR 71.4 - Program implementation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... effective date. (3) Any complete permit application containing an early reduction demonstration under... under this paragraph (m) and of the approval or rejection of such petition and the basis for such action...
40 CFR 71.4 - Program implementation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... effective date. (3) Any complete permit application containing an early reduction demonstration under... under this paragraph (m) and of the approval or rejection of such petition and the basis for such action...
O'Shea, Thomas J.; Langtimm, Catherine A.; O'Shea, Thomas J.; Ackerman, B.B.; Percival, H. Franklin
1995-01-01
We applied Cormack-Jolly-Seber open population models to manatee (Trichechus manatus latirostris) photo-identification databases to estimate adult survival probabilities. The computer programs JOLLY and RECAPCO were used to estimate survival of 677 individuals in three study areas: Crystal River (winters 1977-78 to 1990-91), Blue Spring (winters 1977-78 to 1990-91), and the Atlantic Coast (winters 1984-85 to 1990-91). We also estimated annual survival from observations of 111 manatees tagged for studies with radiotelemetry. Survival estimated from observations with telemetry had broader confidence intervals than survival estimated with the Cormack-Jolly-Seber models. Annual probabilities of capture based on photo-identification records were generally high. The mean annual adult survival estimated from sighting-resighting records was 0.959-0.962 in the Crystal River and 0.936-0.948 at Blue Spring and may be high enough to permit population growth, given the values of other life-history parameters. On the Atlantic Coast, the estimated annual adult survival (range of means = 0.877-0.885) may signify a declining population. However, for several reasons, interpretation of data from the latter study group should be tempered with caution. Adult survivorship seems to be constant with age in all three study groups. No strong differences were apparent between adult survival of males and females in the Crystal River or at Blue Spring; the basis of significant differences between sexes on the Atlantic Coast is unclear. Future research into estimating survival with photo-identification and the Cormack-Jolly-Seber models should be vigorously pursued. Estimates of annual survival can provide an additional indication of Florida manatee population status with a stronger statistical basis than aerial counts and carcass totals.
Griscom, Bronson W; Ellis, Peter W; Baccini, Alessandro; Marthinus, Delon; Evans, Jeffrey S; Ruslandi
2016-01-01
Forest conservation efforts are increasingly being implemented at the scale of sub-national jurisdictions in order to mitigate global climate change and provide other ecosystem services. We see an urgent need for robust estimates of historic forest carbon emissions at this scale, as the basis for credible measures of climate and other benefits achieved. Despite the arrival of a new generation of global datasets on forest area change and biomass, confusion remains about how to produce credible jurisdictional estimates of forest emissions. We demonstrate a method for estimating the relevant historic forest carbon fluxes within the Regency of Berau in eastern Borneo, Indonesia. Our method integrates best available global and local datasets, and includes a comprehensive analysis of uncertainty at the regency scale. We find that Berau generated 8.91 ± 1.99 million tonnes of net CO2 emissions per year during 2000-2010. Berau is an early frontier landscape where gross emissions are 12 times higher than gross sequestration. Yet most (85%) of Berau's original forests are still standing. The majority of net emissions were due to conversion of native forests to unspecified agriculture (43% of total), oil palm (28%), and fiber plantations (9%). Most of the remainder was due to legal commercial selective logging (17%). Our overall uncertainty estimate offers an independent basis for assessing three other estimates for Berau. Two other estimates were above the upper end of our uncertainty range. We emphasize the importance of including an uncertainty range for all parameters of the emissions equation to generate a comprehensive uncertainty estimate, which has not been done before. We believe comprehensive estimates of carbon flux uncertainty are increasingly important as national and international institutions are challenged with comparing alternative estimates and identifying a credible range of historic emissions values.
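One consistent reading of the reported ratio (gross emissions 12 times gross sequestration, with a net flux of 8.91 Mt CO2/yr) is checked below; this is my arithmetic, not the paper's:

    net = 8.91                                   # Mt CO2 per year, from the abstract
    ratio = 12.0                                 # gross emissions = 12 x gross sequestration
    gross_emissions = net * ratio / (ratio - 1)  # ~9.72 Mt/yr
    gross_sequestration = gross_emissions / ratio  # ~0.81 Mt/yr
    print(gross_emissions - gross_sequestration)   # recovers the 8.91 net figure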
2013 Cost of Wind Energy Review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mone, C.; Smith, A.; Maples, B.
2015-02-01
This report uses representative project types to estimate the levelized cost of wind energy (LCOE) in the United States for 2013. Scheduled to be published on an annual basis, it relies on both market and modeled data to maintain a current understanding of wind generation cost trends and drivers. It is intended to provide insight into current component-level costs and a basis for understanding variability in the LCOE across the industry. Data and tools developed from this analysis are used to inform wind technology cost projections, goals, and improvement opportunities.
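The report itself details the inputs; as a hedged sketch, a common simplified annualized form of an LCOE calculation is shown below, with entirely hypothetical parameter values:

    def lcoe_usd_per_mwh(capex_usd_per_kw, fcr, opex_usd_per_kw_yr, net_cf):
        """Simplified annualized LCOE: (CapEx * fixed charge rate + annual OpEx)
        divided by annual energy production, all per kW of capacity."""
        annual_mwh_per_kw = 8760 * net_cf / 1000.0
        return (capex_usd_per_kw * fcr + opex_usd_per_kw_yr) / annual_mwh_per_kw

    # Hypothetical project: $1,700/kW CapEx, 10% FCR, $40/kW-yr OpEx, 40% net capacity factor.
    print(round(lcoe_usd_per_mwh(1700.0, 0.10, 40.0, 0.40)))  # -> ~$60/MWh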
Methods of Predicting Solid Waste Characteristics.
ERIC Educational Resources Information Center
Boyd, Gail B.; Hawkins, Myron B.
The project summarized by this report involved a preliminary design of a model for estimating and predicting the quantity and composition of solid waste and a determination of its feasibility. The novelty of the prediction model is that it estimates and predicts on the basis of knowledge of materials and quantities before they become a part of the…
Comparisons of modeled height predictions to ocular height estimates
W.A. Bechtold; S.J. Zarnoch; W.G. Burkman
1998-01-01
Equations used by USDA Forest Service Forest Inventory and Analysis projects to predict individual tree heights on the basis of species and d.b.h. were improved by the addition of mean overstory height. However, ocular estimates of total height by field crews were more accurate than the statistically improved models, especially for hardwood species. Height predictions...
Optimum Selection Age for Wood Density in Loblolly Pine
D.P. Gwaze; K.J. Harding; R.C. Purnell; Floyd E. Brigwater
2002-01-01
Genetic and phenotypic parameters for core wood density of Pinus taeda L. were estimated for ages ranging from 5 to 25 years at two sites in the southern United States. Heritability estimates on an individual-tree basis for core density were lower than expected (0.20-0.31). Age-age genetic correlations were higher than phenotypic correlations,...
The Instinct Fallacy: The Metacognition of Answering and Revising during College Exams
ERIC Educational Resources Information Center
Couchman, Justin J.; Miller, Noelle E.; Zmuda, Shaun J.; Feather, Kathryn; Schwartzmeyer, Tina
2016-01-01
Students often gauge their performance before and after an exam, usually in the form of rough grade estimates or general feelings. Are these estimates accurate? Should they form the basis for decisions about study time, test-taking strategies, revisions, subject mastery, or even general competence? In two studies, undergraduates took a real…
Characterizing Sources of Uncertainty in Item Response Theory Scale Scores
ERIC Educational Resources Information Center
Yang, Ji Seung; Hansen, Mark; Cai, Li
2012-01-01
Traditional estimators of item response theory scale scores ignore uncertainty carried over from the item calibration process, which can lead to incorrect estimates of the standard errors of measurement (SEMs). Here, the authors review a variety of approaches that have been applied to this problem and compare them on the basis of their statistical…
18 CFR 4.305 - Enforcement.
Code of Federal Regulations, 2014 CFR
2014-04-01
... PROJECT COSTS Fees Under Section 30(e) of the Act § 4.305 Enforcement. (a) The Commission may take any... distributed to the agencies on a pro-rata basis except if an agency's cost statement is greater than its most recent estimate to the applicant under § 4.301(b), then the difference between the estimate and the cost...
12 CFR Appendix A to Subpart A of... - Appendix A to Subpart A of Part 327
Code of Federal Regulations, 2010 CFR
2010-01-01
... pricing multipliers are derived from: • A model (the Statistical Model) that estimates the probability..., which is four basis points higher than the minimum rate. II. The Statistical Model The Statistical Model... to 1997. As a result, and as described in Table A.1, the Statistical Model is estimated using a...
Joseph K. O. Amoah; Devendra M. Amatya; Soronnadi Nnaji
2012-01-01
Hydrologic models often require correct estimates of surface macro-depressional storage to accurately simulate rainfall-runoff processes. Traditionally, depression storage is determined through model calibration, lumped with soil storage components, or set on an ad hoc basis. This paper investigates a holistic approach for estimating surface depressional storage capacity...
ERIC Educational Resources Information Center
Meeks, Glenn E.; Fisher, Ricki; Loveless, Warren
Personnel involved in planning or developing schools lack the costing tools that will enable them to determine educational technology costs. This report presents an overview of the technology costing process and the general costs used in estimating educational technology systems on a macro-budget basis, along with simple cost estimates for…
Planning and Decision Making for Medical Education: An Analysis of Costs and Benefits.
ERIC Educational Resources Information Center
Wing, Paul
This paper clarifies the role of medical education in the larger health care system, estimates the resources required to carry on medical education programs and the benefits that accrue from medical education, and answers a few fundamental policy questions. Cost estimates are developed on a program-by-program basis, using empirical economic…
Diagnostics of the power oil-filled transformer equipment of thermal power plants
NASA Astrophysics Data System (ADS)
Eltyshev, D. K.; Khoroshev, N. I.
2016-08-01
This paper considers ways to improve the diagnostic efficiency of electrical facilities and the operation of generation and distribution systems, taking as its example the power oil-filled transformers that are critical elements of the electrical part of thermal power plants (TPPs). The work is based on a fuzzy logic system that can use both statistical data and the expert knowledge accumulated during operation of oil-filled transformer facilities. A diagnostic algorithm for various types of transformers was developed, built on an intelligent model that estimates thermal state from key diagnostic parameters through a fuzzy inference hierarchy. Criteria were formulated for taking measures to prevent emergencies in electric power systems. A fuzzy hierarchical model for state assessment of 110 kV power oil-filled transformers was developed; it has a high degree of credibility and imposes fairly strict limits on the equipment's diagnostic parameters. Testing the model on a real object showed that the most frequent defects of standard transformer elements are associated with degraded insulation properties and instrumentation faults. The results may be used both for express diagnostics of transformer state without disconnection from the power line and for more detailed analysis of defect causes using an extended list of diagnostic parameters, some of which can be obtained only after complete or partial disconnection.
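To make the fuzzy inference idea concrete, here is a minimal sketch of a fuzzy "overheating" rule over two diagnostic parameters; the membership breakpoints, parameter choices, and alarm threshold are invented for illustration and are not the paper's model.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def thermal_state_score(top_oil_c, hotspot_c):
    """Fuzzy 'overheating' degree from two illustrative diagnostic parameters."""
    # Membership in 'high' for each parameter (breakpoints are placeholders).
    oil_high = tri(top_oil_c, 60, 85, 110)
    hotspot_high = tri(hotspot_c, 80, 110, 140)
    # Simple min-rule: overheating if BOTH oil and hotspot temperatures are high.
    return min(oil_high, hotspot_high)

score = thermal_state_score(top_oil_c=78, hotspot_c=105)
print("alarm" if score > 0.5 else "normal", f"(degree {score:.2f})")
```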
The benefits of improved technologies in agricultural aviation
NASA Technical Reports Server (NTRS)
Lietzke, K.; Abram, P.; Braen, C.; Givens, S.; Hazelrigg, G. A., Jr.; Fish, R.; Clyne, F.; Sand, F.
1977-01-01
Results are presented for a study of the economic benefits attributed to a variety of potential technological improvements in agricultural aviation. Part 1 gives a general description of the ag-air industry and discusses the information used in the data base to estimate the potential benefits from technological improvements. Part 2 presents the benefit estimates and provides a quantitative basis for the estimates in each area studied. Part 3 is a bibliography of references relating to this study.
Optimum nonparametric estimation of population density based on ordered distances
Patil, S.A.; Kovner, J.L.; Burnham, Kenneth P.
1982-01-01
The asymptotic mean and error mean square are determined for the nonparametric estimator of plant density by distance sampling proposed by Patil, Burnham and Kovner (1979, Biometrics 35, 597-604). On the basis of these formulae, a bias-reduced version of this estimator is given, and the specific form that gives minimum mean square error is determined under varying assumptions about the true probability density function of the sampled data. Extension is given to line-transect sampling.
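For context, the sketch below implements the classical ordered-distance density estimator (Pollard 1971) that work of this kind builds on, not the bias-reduced estimator derived in the paper; the distances are invented.

```python
import math

def pollard_density(distances_k, k):
    """Classical ordered-distance density estimator (Pollard 1971):
    D = (n*k - 1) / (pi * sum r_ik^2), where r_ik is the distance from
    random point i to its k-th nearest plant. Assumes complete spatial
    randomness; the paper's estimator is a bias-reduced refinement.
    """
    n = len(distances_k)
    return (n * k - 1) / (math.pi * sum(r * r for r in distances_k))

# Illustrative: 3rd-nearest-plant distances (m) from 5 sample points.
print(f"{pollard_density([1.8, 2.3, 1.5, 2.9, 2.1], k=3):.3f} plants/m^2")
```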
Mooney, Stephen J; Magee, Caroline; Dang, Kolena; Leonard, Julie C; Yang, Jingzhen; Rivara, Frederick P; Ebel, Beth E; Rowhani-Rahbar, Ali; Quistberg, D Alex
2018-05-14
"Complete Streets" policies require transportation engineers to make provisions for pedestrians, cyclists and transit users. These policies may make bicycling safer for individual cyclists while increasing overall bicycle fatalities if more individuals cycle due to improved infrastructure. We merged county-level records of Complete Streets policies with Fatality Analysis Reporting System counts of cyclist fatalities occurring between January 2000 and December 2015. Because comprehensive county cycling estimates were not available, we used bicycle commute estimates from the American Community Survey and US Census as a proxy for the cycling population, and limited analysis to 183 counties (accounting for over half the US population) for which cycle commute estimates were consistently non-zero. We used G-computation to estimate the effect of policies on overall cyclist fatalities while also accounting for potential policy effects on the size of the cycling population. Over 16 years, 5,254 cyclists died in these counties, representing 34 fatalities per 100,000 cyclist-years. We estimated that Complete Streets policies made cycling safer, averting 0.6 fatalities per 100,000 cyclist-years (95% CI: 0.3, 1.0) by encouraging a 2.4% increase in cycling and a 0.7% increase in cyclist fatalities. G-computation is a useful tool for understanding policy impact on risk and exposure.
Estimating the greenhouse gas benefits of forestry projects: A Costa Rican Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, Christopher; Sathaye, Jayant; Sanchez Azofeifa, G. Arturo
If the Clean Development Mechanism proposed under the Kyoto Protocol is to serve as an effective means for combating global climate change, it will depend upon reliable estimates of greenhouse gas benefits. This paper sketches the theoretical basis for estimating the greenhouse gas benefits of forestry projects and suggests lessons learned based on a case study of Costa Rica's Protected Areas Project, a 500,000 hectare effort to reduce deforestation and enhance reforestation. The Protected Areas Project in many senses advances the state of the art for Clean Development Mechanism-type forestry projects, as does the third-party verification work of SGS International Certification Services on the project. Nonetheless, sensitivity analysis shows that carbon benefit estimates for the project vary widely with the deforestation rate imputed to the baseline scenario, i.e., the deforestation rate expected if the project were not implemented. This, along with a newly available national dataset that confirms other research showing a slower rate of deforestation in Costa Rica, suggests that the original use of the 1979-1992 forest cover data as the basis for estimating carbon savings should be reconsidered. When the newly available data are substituted, carbon savings amount to 8.9 Mt (million tonnes) of carbon, down from the original estimate of 15.7 Mt. The primary general conclusion is that project developers should give more attention to forecasting the land use and land cover change scenarios underlying estimates of greenhouse gas benefits.
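The baseline sensitivity the authors describe can be illustrated with a deliberately simplified avoided-deforestation identity; all inputs below are placeholders, not the Protected Areas Project parameters or its certified methodology.

```python
def carbon_savings_mt(area_ha, baseline_defor_rate, project_defor_rate,
                      carbon_t_per_ha, years):
    """Avoided-deforestation carbon benefit (Mt C) as a function of the
    assumed baseline deforestation rate. A simplified accounting identity."""
    avoided_ha = area_ha * (baseline_defor_rate - project_defor_rate) * years
    return avoided_ha * carbon_t_per_ha / 1e6

# The benefit scales linearly with the imputed baseline rate:
for baseline in (0.010, 0.020, 0.030):  # assumed annual deforestation rates
    mt = carbon_savings_mt(500_000, baseline, 0.0, 120, 20)
    print(f"baseline {baseline:.1%}/yr -> {mt:.0f} Mt C")
```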
River runoff estimates based on remotely sensed surface velocities
NASA Astrophysics Data System (ADS)
Grünler, Steffen; Stammer, Detlef; Romeiser, Roland
2010-05-01
One promising technique for river runoff estimation from space is the retrieval of surface currents by synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, launched in June 2007, permits ATI measurements in an experimental mode. Based on numerical simulations, we present findings of a research project that evaluates the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics, and we develop a sampling strategy for river runoff estimates. We address the achievable accuracy and limitations of such estimates for different local flow conditions at a selected test site. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model are used as the reference data set and as input for simulations of a variety of possible measuring and data interpretation strategies. To address the problem of aliasing, we removed tidal signals from the sampled data. Discharge estimates based on measured surface current fields and river widths from TerraSAR-X are successfully simulated; the resulting net discharge estimates differ by 30-55% for a required continuous observation period of one year. We discuss the applicability of the measuring strategies to a number of major rivers, and we show runoff estimates derived from surface current fields retrieved from real TerraSAR-X ATI (AS mode) data for the Elbe river study area.
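As a back-of-envelope version of the discharge step, the sketch below converts a remotely sensed surface velocity to discharge with an assumed index coefficient; the width, depth, and velocity values are illustrative, and the study's simulation-based strategy is far more elaborate (depth, in particular, is not observable by SAR).

```python
def discharge_m3s(width_m, mean_depth_m, surface_velocity_ms, alpha=0.85):
    """Estimate river discharge Q = A * v_mean, converting a remotely sensed
    surface velocity to a depth-averaged velocity with an index coefficient
    alpha (~0.85 is a common assumption for a logarithmic velocity profile)."""
    cross_section_m2 = width_m * mean_depth_m
    return cross_section_m2 * alpha * surface_velocity_ms

# Illustrative values loosely in the range of a large lowland river:
print(f"{discharge_m3s(width_m=400, mean_depth_m=5.0, surface_velocity_ms=1.0):.0f} m^3/s")
```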
44 CFR 152.6 - Application review and award process.
Code of Federal Regulations, 2012 CFR
2012-10-01
... value of the proposed activities. We will provide the panelists the complete application content for... basis, and then analyze the type of fire department (paid, volunteer, or combination fire departments...
Steckling, Nadine; Devleesschauwer, Brecht; Winkelnkemper, Julia; Fischer, Florian; Ericson, Bret; Krämer, Alexander; Hornberg, Claudia; Fuller, Richard; Plass, Dietrich; Bose-O'Reilly, Stephan
2017-01-10
In artisanal small-scale gold mining, mercury is used for gold-extraction, putting miners and nearby residents at risk of chronic metallic mercury vapor intoxication (CMMVI). Burden of disease (BoD) analyses allow the estimation of the public health relevance of CMMVI, but until now there have been no specific CMMVI disability weights (DWs). The objective is to derive DWs for moderate and severe CMMVI. Disease-specific and generic health state descriptions of 18 diseases were used in a pairwise comparison survey. Mercury and BoD experts were invited to participate in an online survey. Data were analyzed using probit regression. Local regression was used to make the DWs comparable to the Global Burden of Disease (GBD) study. Alternative survey (visual analogue scale) and data analyses approaches (linear interpolation) were evaluated in scenario analyses. A total of 105 participants completed the questionnaire. DWs for moderate and severe CMMVI were 0.368 (0.261-0.484) and 0.588 (0.193-0.907), respectively. Scenario analyses resulted in higher mean values. The results are limited by the sample size, group of interviewees, questionnaire extent, and lack of generally accepted health state descriptions. DWs were derived to improve the data basis of mercury-related BoD estimates, providing useful information for policy-making. Integration of the results into the GBD DWs enhances comparability.
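A minimal sketch of how pairwise health-state comparisons can be turned into latent severities by probit regression, on simulated data; the separate anchoring step that maps latent severities onto 0-1 disability weights (the paper uses local regression against GBD weights) is only noted in a comment.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical pairwise comparisons among 4 health states.
rng = np.random.default_rng(1)
true_severity = np.array([0.0, 0.5, 1.0, 1.6])   # latent 'worseness'
rows, y = [], []
for _ in range(2000):
    a, b = rng.choice(4, size=2, replace=False)
    x = np.zeros(4)
    x[a], x[b] = 1.0, -1.0                        # difference coding
    rows.append(x)
    y.append(rng.random() < norm.cdf(true_severity[a] - true_severity[b]))

# Probit regression recovers severities on a latent scale; state 0 is
# fixed at 0 for identifiability. A separate anchoring step would then
# map these latent values onto the 0-1 disability weight scale.
X = np.array(rows)[:, 1:]
fit = sm.GLM(np.array(y, dtype=float), X,
             family=sm.families.Binomial(link=sm.families.links.Probit())).fit()
print(fit.params)   # approx. [0.5, 1.0, 1.6]
```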
Moore, Michael D; Shi, Zhenqi; Wildfong, Peter L D
2010-12-01
To develop a method for drawing statistical inferences from differences between multiple experimental pair distribution function (PDF) transforms of powder X-ray diffraction (PXRD) data. The appropriate treatment of initial PXRD error estimates using traditional error propagation algorithms was tested using Monte Carlo simulations on amorphous ketoconazole. An amorphous felodipine:polyvinyl pyrrolidone:vinyl acetate (PVPva) physical mixture was prepared to define an error threshold. Co-solidified products of felodipine:PVPva and terfenadine:PVPva were prepared using a melt-quench method and subsequently analyzed using PXRD and PDF. Differential scanning calorimetry (DSC) was used as an additional characterization method. The appropriate manipulation of initial PXRD error estimates through the PDF transform was confirmed using the Monte Carlo simulations for amorphous ketoconazole. PDF analysis of the felodipine:PVPva physical mixture established ±3σ as an appropriate error threshold. Using the PDF and error propagation principles, the felodipine:PVPva co-solidified product was determined to be completely miscible, and the terfenadine:PVPva co-solidified product, despite having the appearance of an amorphous molecular solid dispersion by DSC, was determined to be phase-separated. Statistically based inferences were successfully drawn from PDF transforms of PXRD patterns obtained from composite systems. The principles applied herein may be universally adapted to many different systems and provide a fundamentally sound basis for drawing structural conclusions from PDF studies.
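The Monte Carlo check described for ketoconazole can be sketched generically: perturb a pattern with its assumed error estimate many times, push each realization through the transform, and read off the pointwise spread. The transform below is a toy sine-Fourier stand-in, not a full PDF reduction, and all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

def pdf_transform(intensities, q, r):
    """Toy sine-Fourier transform standing in for the PXRD -> PDF step;
    a real reduced PDF G(r) requires normalization and corrections."""
    return np.trapz(intensities[None, :] * np.sin(np.outer(r, q)), q, axis=1)

q = np.linspace(0.5, 20, 400)          # scattering vector (1/angstrom)
r = np.linspace(0.5, 10, 200)          # real-space distance (angstrom)
pattern = np.exp(-0.5 * (q - 6) ** 2)  # fake diffraction pattern
sigma = 0.02                           # assumed counting-error estimate

# Propagate the PXRD errors by transforming many noise realizations and
# taking the pointwise spread in PDF space.
draws = np.array([pdf_transform(pattern + rng.normal(0, sigma, q.size), q, r)
                  for _ in range(500)])
print("max +/-3 sigma band in PDF space:", 3 * draws.std(axis=0).max())
```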
Prospect evaluation of shallow I-35 reservoir of NE Malay Basin offshore, Terengganu, Malaysia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Janjua, Osama Akhtar, E-mail: janjua945@hotmail.com; Wahid, Ali, E-mail: ali.wahid@live.com; Salim, Ahmed Mohamed Ahmed, E-mail: mohamed.salim@petronas.com.my
2016-02-01
A prospect is a potential hydrocarbon accumulation that defines a significant and plausible drilling target. Prospect evaluation rests on estimating the probability of success and the range of potentially recoverable quantities, assuming hydrocarbons are discovered and developed under a commercial program. The objective was to find new shallow prospects in the reservoir sandstone of the I-Formation in the Malay basin. The prospects in the study area consist mostly of faulted structures and stratigraphic channels. The methodology comprised seismic interpretation and mapping, attribute analysis, and evaluation of nearby well data based on well-log correlation. Petrophysical parameters analogous to those of nearby wells were used as input for the volumetric assessment. Based on the analysis of presence and effectiveness, the prospect has a complete petroleum system. Two wells, the O-1 and O-2 prospects, have been proposed near the major fault and the stratigraphic channel in the I-35 reservoir. The probability of geological success is 35% for O-1 and 24% for O-2. Finally, hydrocarbon-in-place volumes were calculated: the best-estimate oil volume is 4.99 MMSTB for O-1 and 28.70 MMSTB for O-2, with gas volumes of 29.27 BSCF and 25.59 BSCF, respectively.
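The volumetric assessment mentioned here conventionally uses the oil-in-place identity sketched below; the inputs are placeholders, not the O-1/O-2 reservoir parameters.

```python
def ooip_mmstb(area_acres, net_pay_ft, porosity, water_saturation, bo):
    """Volumetric original oil in place (MMSTB):
    N = 7758 * A * h * phi * (1 - Sw) / Bo,
    where 7758 converts acre-ft to barrels and Bo is the formation
    volume factor (reservoir bbl per stock-tank bbl)."""
    return 7758 * area_acres * net_pay_ft * porosity * (1 - water_saturation) / bo / 1e6

# Illustrative inputs (placeholders):
n = ooip_mmstb(area_acres=800, net_pay_ft=40, porosity=0.22,
               water_saturation=0.35, bo=1.2)
print(f"best-estimate OOIP: {n:.1f} MMSTB")
```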
Thermodynamic properties of gaseous ruthenium species.
Miradji, Faoulat; Souvi, Sidi; Cantrel, Laurent; Louis, Florent; Vallet, Valérie
2015-05-21
The review of thermodynamic data of ruthenium oxides reveals large uncertainties in some of the standard enthalpies of formation, motivating the use of high-level relativistic correlated quantum chemical methods to reduce the level of discrepancies. The reaction energies leading to the formation of the ruthenium oxides RuO, RuO2, RuO3, and RuO4 have been calculated for a series of reactions. Different quantum chemical methods [DFT, CASSCF, MRCI, CASPT2, CCSD(T)] have been investigated in combination to predict the geometrical parameters and the energetics, including electron correlation and spin-orbit coupling. The most suitable approach for ruthenium compounds is TPSSh-5%HF for geometry optimization, followed by CCSD(T) with complete basis set (CBS) extrapolation for the total electronic energies. SO-CASSCF appears accurate enough to estimate spin-orbit coupling contributions to the ground-state electronic energies. This methodology yields very accurate standard enthalpies of formation for all species, which are either in excellent agreement with the most reliable experimental data or provide an improved estimate for the others. These new data will be implemented in the thermodynamic databases used by the ASTEC code (accident source term evaluation code) to build models of ruthenium chemistry behavior under severe nuclear accident conditions. The paper also discusses the nature of the chemical bonds from both molecular orbital and topological viewpoints.
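The CBS extrapolation step can be illustrated with the standard two-point inverse-cubic formula, one common choice for correlation-consistent basis sets; the energies below are invented, and the paper's exact extrapolation scheme may differ.

```python
def cbs_two_point(e_high, e_low, x_high, x_low):
    """Two-point inverse-cubic CBS extrapolation (Helgaker-style):
    E_CBS = (x^3*E_x - y^3*E_y) / (x^3 - y^3), for correlation energies
    from basis sets with cardinal numbers x_high > x_low (e.g., Q=4, T=3)."""
    return (x_high**3 * e_high - x_low**3 * e_low) / (x_high**3 - x_low**3)

# Illustrative correlation energies (hartree) for aug-cc-pVQZ and aug-cc-pVTZ:
print(f"{cbs_two_point(e_high=-0.4190, e_low=-0.4105, x_high=4, x_low=3):.4f} Eh")
```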
Geist, Eric L.
2014-01-01
Temporal clustering of tsunami sources is examined in terms of a branching process model. It was previously observed that there are more short interevent times between consecutive tsunami sources than expected from a stationary Poisson process. The epidemic-type aftershock sequence (ETAS) branching process model is fitted to tsunami catalog events, using the earthquake magnitude of the causative event from the Centennial and Global Centroid Moment Tensor (CMT) catalogs, with tsunami sizes above a completeness level serving as a mark to indicate that a tsunami was generated. The ETAS parameters are estimated using the maximum-likelihood method. The interevent distribution associated with the ETAS model provides a better fit to the data than the Poisson model or other temporal clustering models. When tsunamigenic conditions (magnitude threshold, submarine location, dip-slip mechanism) are applied to the Global CMT catalog, ETAS parameters are obtained that are consistent with those estimated from the tsunami catalog. In particular, the dip-slip condition appears to result in a near-zero magnitude effect for triggered tsunami sources. The overall consistency between results from the tsunami catalog and those from the earthquake catalog under tsunamigenic conditions indicates that ETAS models based on seismicity can provide the structure for understanding patterns of tsunami source occurrence. The fractional rate of triggered tsunami sources on a global basis is approximately 14%.
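For reference, the ETAS conditional intensity that is fitted by maximum likelihood has the form sketched below; all parameter values and events are invented for illustration.

```python
import numpy as np

def etas_intensity(t, event_times, event_mags, mu, K, alpha, c, p, m0):
    """ETAS conditional intensity at time t:
    lambda(t) = mu + sum_i K * exp(alpha*(m_i - m0)) / (t - t_i + c)^p
    summed over all events i with t_i < t (Omori-Utsu decay with
    magnitude-dependent productivity). Parameter values here are invented."""
    past = event_times < t
    trig = K * np.exp(alpha * (event_mags[past] - m0)) \
             / (t - event_times[past] + c) ** p
    return mu + trig.sum()

times = np.array([0.0, 10.0, 11.5])   # event times (days)
mags = np.array([7.8, 7.1, 7.3])      # causative-event magnitudes
lam = etas_intensity(12.0, times, mags, mu=0.05, K=0.02,
                     alpha=0.8, c=0.01, p=1.1, m0=7.0)
print(f"{lam:.3f} events/day")
```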